id
stringlengths 3
9
| source
stringclasses 1
value | version
stringclasses 1
value | text
stringlengths 1.54k
298k
| added
stringdate 1993-11-25 05:05:38
2024-09-20 15:30:25
| created
stringdate 1-01-01 00:00:00
2024-07-31 00:00:00
| metadata
dict |
|---|---|---|---|---|---|---|
221656254
|
pes2o/s2orc
|
v3-fos-license
|
Drug-Coated Balloon for De Novo Coronary Artery Lesions: A Systematic Review and Trial Sequential Meta-analysis of Randomized Controlled Trials
Objective To investigate the efficacy of drug-coated balloon (DCB) treatment for de novo coronary artery lesions in randomized controlled trials (RCTs). Background DCB was an effective therapy for patients with in-stent restenosis. However, the efficacy of DCB in patients with de novo coronary artery lesions is still unknown. Methods Eligible studies were searched on PubMed, Web of Science, EMBASE, and Cochrane Library Database. Systematic review and meta-analyses of RCTs were performed comparing DCB with non-DCB devices (such as plain old balloon angioplasty (POBA), bare-metal stents (BMS), or drug-eluting stents (DES)) for the treatment of de novo lesions. Trial sequential meta-analysis (TSA) was performed to assess the false positive and false negative errors. Results A total of 2,137 patients enrolled in 12 RCTs were analyzed. Overall, no significant difference in target lesion revascularization (TLR) was found, but there were numerically lower rates after DCB treatment at 6 to 12 months follow-up (RR: 0.69; 95% CI: 0.47 to 1.01; P = 0.06; TSA-adjusted CI: 0.41 to 1.16). TSA showed that at least 1,000 more randomized patients are needed to conclude the effect on TLR. A subgroup analysis from high bleeding risk patients revealed that DCB treatment was associated with lower rate of TLR (RR: 0.10; 95% CI: 0.01 to 0.78; P = 0.03). The systematic review illustrated that the rate of bailout stenting was lower and decreased gradually. Conclusions DCB treatment was associated with a trend toward lower TLR when compared with controls. For patients at bleeding risk, DCB treatment was superior to BMS in TLR.
Introduction
Stent implantation is the recommended strategy for majority of coronary artery lesions intended for percutaneous coronary intervention (PCI) [1]. However, stent implantation has several limitations. Long-term follow-up results up to 16 years showed that stent implantation was associated with higher vessel thrombosis and myocardial infarction when compared with plain old balloon angioplasty (POBA) only [2]. Even with the latest generation stent, the rate of major adverse cardiac events was as high as 6.1%, and accompanied with a 2% annual rate thereafter [3]. The persistence of metal material in the vessel wall has been considered one of the causes of adverse events [4]. Therefore, the exploration of a stentless strategy is persistently on the way. Drug-coated balloon (DCB) is an alternative stentless strategy, which was a combination of angioplasty along with local drug delivery. In 2006, DCB was first introduced to the treatment of in-stent restenosis (ISR) in clinical because it did not involve implanting additional metal layers [5][6][7]. Afterwards, many studies demonstrated promising results of DCB in the treatment of ISR [8]. In the latest European myocardial revascularization guideline, DCB angioplasty was a Class I recommendation for the treatment of ISR with Level A evidence [1].
Following the successful treatment of ISR, DCB was investigated for its efficacy and safety in de novo coronary artery lesions, based on the hypothesis that foregoing metallic stent implantation in coronary arteries could improve the clinical events [9][10][11][12][13]. Recently, a patient-level meta-analysis compared DCB with non-DCB devices in both de novo and coronary ISR lesions; DCB was associated with a trend toward lower mortality [14]. However, several studies evaluating the DCB for the treatment of de novo coronary artery disease yielded controversial results. Aside from the mixed results, these studies were also not strong enough to conclude the value of DCB in the use of de novo coronary artery lesions. The purpose of the present systematic review and meta-analysis was to evaluate the efficacy of the use of DCB for de novo lesions in different randomized controlled trials (RCTs).
Materials and Methods
The present systematic review and meta-analysis was performed following the recommendations of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [15].
Study Protocol.
The present study only included RCTs which analyzed PCI with DCB (without stent implantation) versus implantation of bare-metal stents (BMS) and/or drug-eluting stents (DES) or POBA for de novo coronary artery lesions in different clinical settings; this included patients with acute coronary syndrome, stable angina pectoris, high bleeding risk (HBR),or small vessel disease (SVD). The duration of follow-up was 6 months or more. The sample sizes of the studies were not limited.
Several studies were strictly excluded based on the following criteria: (1) studies assessing the efficacy of DCB for the treatment of ISR, (2) studies that analyzed the intervention of DCB in patients with peripheral artery disease or treatment of dysfunctional hemodialysis arteriovenous fistulas, The following terms were used to perform the PubMed search: (Sirolimus-coated balloon) or (Paclitaxel-coated Balloon) or (drug-coated Balloon) or (Drug-eluting balloon) or (Drug-eluting balloons) or (Drug-coated balloons) and (coronary) or myocardial infarction) and (randomized) or (randomised) or (randomisation) or (randomization). Additional filters, such as the article type and publications dates, were also used. Moreover, we also performed a manual search by scanning the references of the identified articles to find potentially missing studies.
Selection Process and Data Extraction.
All potentially relevant studies were independently screened by two authors (WL and MZ). A lot of ineligible studies were excluded according to their titles and abstracts, while the potentially eligible studies had their full texts reviewed. A consensus between the two screening authors needed to be reached in order to determine eligible studies. Any discrepancies were resolved by discussion. The selection process strictly followed the inclusion and exclusion criteria.
Data extraction was independently implemented by the same two authors (WL and MZ). Relevant information from eligible studies was extracted using a prespecified table which contained the relevant items. The following items were extracted from the included studies: comparators (DCB versus DES/BMS/POBA), type of DCB, sample sizes, designation, indication for PCI (acute myocardial infarction [AMI], HBR, SVD, or de novo lesions), duration of follow-up, baseline characteristics (age, gender, and medical history), relevant clinical outcomes, and angiographic outcomes.
2.4. Assessment of Study Quality and Risk of Bias. The quality of included studies, which was assessed independently by two investigators (WL and GP C), was evaluated using the Jadad scale. The Jadad scale consists of three items pertaining to descriptions of randomization (0-2 points), double blinding (0-2 points), and dropouts and withdrawals (0-1 point) for a total of five points, with a higher score indicating better quality. Trials scoring 3 or more were considered to be high quality. The Collaboration's Risk of Bias tool was used to assess the risk of bias in included studies.
Statistical
Analysis. This study compared both clinical events (TLR, myocardial infarction, and mortality of all causes) and angiographic results (in-segment late lumen loss (LLL) and percent diameter stenosis) for patients treated with DCB versus non-DCB devices. The present study defined TLR as the primary outcome. Risk ratio (RR) or risk difference with a 95% confidence interval (95% CI) was used as a measure of relative risk for the categorical data, such as TLR, mortality of all causes, and myocardial infarction. Mean difference (MD) with the 95% CI was calculated as the effect size for endpoints with continuous data. Either the fixed (Mantel-Haenszel, Rothman-Boice) model or the random effects (inverse-variance) model was adopted to pool the data from each trial, as deemed appropriate. The I 2 statistic and Cochran's Q test were used to test statistical heterogeneity. Relevant statistical heterogeneity was considered as Cochran's Q test P < 0:05 and I 2 > 50%. The fixed effects model was applied to pool the effect sizes if the heterogeneity criteria were not met. Otherwise, the random effects model was used. All the included trials reported events at 6 to 12 months, while only two trials reported events at a 3-year follow-up. Meta-analyses were performed by using data from 6 to 12 months of follow-up, while the events at the 3-year follow-up were depicted qualitatively.
Subgroup analyses were performed based on the comparators (DCB versus uncoated devices and DCB versus DES) and indications of PCI (SVD, HBR, AMI, and de novo lesions). Sensitive analyses were also performed using a leaving-one-out approach. Trial sequential meta-analysis (TSA) was performed to assess the false positive (type I errors) and false negative errors (type II errors).
All meta-analyses were pooled based on the Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0. All statistical analyses were conducted by using Review Manager software version 5.3 (2014, The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark), and TSA were conducted using TSA software (version 0.9.5.10 Beta). The P values less than 0.05 were considered as significant using the 2-sided test. Figure 1 illustrates the details of the study search and selection process. Initially, our search yielded 1,378 studies, and 1,254 studies were excluded based on the titles and abstracts. A total of 124 potentially eligible papers further had their full text reviewed. Finally, 12 trials fulfilling the predefined criteria for inclusion were included in thestudy [9][10][11][12][13][16][17][18][19][20][21][22].
Results
In total, 12 RCTs including 2,137 patients were analyzed. All the DCB were paclitaxel coated. Among these, seven trials with 1,482 patients compared DCB and DES, one trial with 210 patients compared DCB and either BMS or DES, and the other four trials compared DCB and uncoated devices (two with POBA and two with BMS). In the present study, we only included patients undergoing PCI with de novo lesions. The clinical presentations of the patients varied. The most common indication was a small vessel lesion seen in five trials, followed by HRB seen in two trials. Majority of trials had 6 to 12 months follow-up, except for the BELLO and DEBUT trials, wherein the duration of follow-ups was 3 years [10,23]. The baseline characteristics of included trials were summarized in Table 1.
Study Quality and Risk of Bias.
All the included trials were of high quality, with Jadad scores of 3 points or more (Table 1). A summary assessment of the risk of bias is illustrated in Figure 2. The quality was "high" because most information is from RCTs with a low risk of bias.
3.2. The Incidence of Target Lesion Revascularization. TLR was evaluated in all the 12 included trials with a total of 2,137 patients. Among these, 1,090 patients were grouped into DCB treatment, and the other 1,047 patients were treated with non-DCB devices. The pooled result showed that there was no significant difference in the incidence of TLR between DCB and non-DCB treatment at 6 to 12 months of follow-up. But DCB treatment was associated with a numerically lower TLR risk ( Figure 2. RR: 0.69; 95% CI: 0.47 to 1.01; P = 0:06; TSA-adjusted CI: 0.41 to 1.16).
A subgroup analysis of DCB versus uncoated devices (POBA or BMS) revealed that DCB treatment yielded better TLR compared with uncoated devices (Figure 2; RR: 0.22; 95% CI: 0.08 to 0.60; P = 0:003). Another subgroup analysis including the DEBUT trial and the study by Shin et al. revealed that DCB treatment was associated with a lower incidence of TLR (RR: 0.10; 95% CI: 0.01 to 0.78; P = 0:03) in patients with HBR compared with BMS.
Sensitive analysis after excluding the PICCOLETO study showed that the incidence rate of TLR was significantly lower in DCB treatment compared with non-DCB devices (RR: 0.57; 95% CI: 0.37 to 0.86; P = 0:007), which hinted that the PICCOLETO study caused the discrepancy. This was possibly because the PICCOLETO study was prematurely stopped due to high major adverse cardiac event rates in the DCB group. In this study, the inferior results of DCB compared to DES were attributed to the first-generation Dior DCB (Eurocor Tech, Bonn, Germany), with a lower concentration of paclitaxel coated on the balloon [13].
TSA of all trials (type I error 5%; power 80%, relative risk reduction 30%) showed that the required information size was 3,374, which meant that 1,000 more patients need to be randomized before firm conclusions can be drawn regarding the effect on TLR (Figure 3).
3.3. The Impact of DCB Treatment on Mortality of All Causes and Myocardial Infarction. At 6 to 12 months of follow-up, no significant differences were observed between DCB and non-DCB devices in mortality of all causes (12 RCTs with 2,137 patients, RD: -0.00; 95% CI: -0.02 to 0.01; P = 0:52). A subgroup analysis revealed that DCB treatment was associated with lower mortality of all causes compared to uncoated devices (RD: -0.03; 95% CI: -0.06 to 0.00; P = 0:05). Mortality of all causes was similar between the DCB and DES groups. Another subgroup analysis showed that mortality of all causes was concordant in the DCB and non-DCB groups for patients with SVD, HBR, AMI, and de novo lesions. The direction of the results remained unchanged when removing individual studies from the analysis.
The Qualitative Description of Clinical Results at 3-Year
Follow-Up. Only the BELLO and DEBUT studies reported the clinical events at 3-year follow-up, and quantitative analyses were not conducted. The BELLO study enrolled 163 patients with lesions located in the small vessels (reference diameter < 2:8 mm). It found that the use of DCB appears to be associated with lower incidence of major adverse cardiac events (MACE) when compared with DES treatment at 3 years, while no significant differences were observed on TLR. In the DEBUT study, 210 patients with HBR and an ischemic de novo lesion in either the coronary artery or a bypass graft were included. The outcomes showed the proportion of MACE in the DCB group was lower than in the BMS group at 3-year follow-up.
Angiographic
Results at Follow-Up. The durations of angiographic follow-up were 6 to 9 months in all the included studies. LLL was reported in nine trials with 1,002 patients. The pooled result showed that DCB treatment was superior to non-DCB devices in terms of LLL with a MD of 3 Cardiovascular Therapeutics -0.17 mm with significant heterogeneity (Figure 4; MD: -0.17; 95% CI: -0.29 to -0.06; P = 0:003; I 2 = 86%). Subgroup analyses revealed that LLL was significantly lower in DCB treatment compared with uncoated devices (MD: -0.52; 95% CI -0.84 to -0.20; P = 0:002), but no difference of LLL was observed between the DCB and DES groups (MD: -0.06; 95% CI -0.15 to 0.03; P = 0:17).
Eight trials with 864 patients compared the percent diameter stenosis between the DCB and non-DCB groups. A similar percent diameter stenosis was identified between the DCB and non-DCB groups. Significant heterogeneity was also identified between trials, with I 2 = 87% (MD: -1.55; 95% CI: -8.34 to 5.24; P = 0:65; I 2 = 87%). In a subgroup analysis, DCB treatment had a significant benefit when compared with uncoated devices but was inferior to DES.
Sensitive analyses using a leave-one-out approach showed that the overall results of our study for LLL and percent diameter stenosis remained stable.
3.6. The Rates of Bailout Stenting in Patients Undergoing DCB Treatment. The rates of bailout stenting varied from 0% to 34.5% among studies (Table 1). Interestingly, we found that the rates of bailout stenting were higher in earlier studies, such as the PICCOLETO and BELLO trials [12,13], than those in later studies, and gradually decreased ( Figure 5). In the recent 3 years, the rate of bailout stenting was less than 5%, and studies in patients with AMI also had higher bailout stenting ( Figure 5). These data might display the obvious learning curve of the operation for DCB treatment.
Discussion
The efficacy and safety of DCB have been demonstrated in the treatment of ISR, and it is recommended as the firstline treatment for ISR in the latest ESC guidelines [1]. Emerging evidence also suggests that DCB may also be useful in de novo coronary artery lesions in patients with SVD and HBR. However, a security crisis of DCB was raised by a recent meta-analysis including 28 RCTs which showed an increased mortality following the application of paclitaxel-coated balloons and stents in the femoropopliteal artery of the lower limbs [24]. Interestingly, in this meta-analysis, the mortality was not high during the first 12 months, but only afterwards. Recently, an individual patient data meta-analysis further confirmed the risk of DCB in lower limbs, with an absolute 4.6% increased mortality risk after 5 years [25]. Fi y-six articles were review articles or meta-analysis or study design report (n = 48).
Cardiovascular Therapeutics
In contrast to the usage of DCB in lower limbs, the outcomes from our study showed that DCB treatment for de novo coronary lesions did not raise the incidence rates of TLR, mortality of all causes, and myocardial infarction. In fact, our study exhibited a numerical reduction of TLR in patients treated with DCB at 6 to 12 months follow-up, when compared to controls. A subgroup analysis showed that DCB treatment was associated with a lower rate of TLR compared with those treated with uncoated devices (BMS or POBA) and with similar TLR compared to DES treatment. For patients with HBR, the pooled results from DEBUT trial and study by Shin et al. showed that DCB treatment was superior to BMS in terms of TLR rate. Furthermore, the rate of myocardial infarction was also decreased in patients treated with DCB. Angiographic results showed the LLL was significantly reduced in patients treated with DCB. These results reassure the safety of DCB when used in de novo coronary lesions. Since the meta-analysis was inconclusive according to the TSA, we should cautiously interpret its results, and more studies are needed to draw more definite conclusions.
The use of PCI for coronary artery disease (CAD) has had a history of more than 40 years. In 1977, Grüntzig performed the first human percutaneous transluminal coronary balloon angioplasty [26]. The use of POBA, as it is now called, was the first step towards modern coronary interventions. However, the following studies found that the occurrence of the arterial recoiling process, acute closure due to arterial dissection, and renarrowing of the dilated segment after balloon dilatation were apparent in CAD patients treated with POBA [27]. To address the aforementioned problems, BMS and DES were successively introduced to treat patients with coronary stenosis [28]. Currently, the second-generation DES is widely used and has a relatively lower restenosis and MACE compared with POBA, BMS, and first-generation DES [29]. Nevertheless, patients treated with second-generation DES will have the mental and polymer material remain in the vessel wall, both of which could promote chronic inflammation, neoatherosclerosis within the stent, and impaired vasomotor function [4]. Therefore, the concept of leaving nothing behind has been brought up. 6 Cardiovascular Therapeutics
Risk of bias legend
The present study showed that DCB was a useful strategy for leaving nothing behind, but aside from this, bioresorbable vascular scaffolds (BVS) are another potential approach to achieve the same goal. BVS provide a temporary scaffolding effect and are then absorbed within a certain period. However, existing evidence indicates that BVS is not applicable for the treatment of CAD so far. The recent ABSORB III trial showed that the adverse event rates, particularly target vessel myocardial infarction (8.6% vs. 5.9%; P = 0:03) and device thrombosis (2.3% vs. 0.7%; P = 0:01), were higher with BVS than everolimus-eluting stents (EES) at the 3-year follow-up [30]. In accordance with this trial, a recent meta-analysis including 10,510 patients showed that BVS were associated with higher rates of target lesion failure and scaffold thrombosis between 1 7 Cardiovascular Therapeutics and 3 years and cumulatively through 3 years of follow-up compared with EES [31]. Accordingly, the FDA has recently issued an alarm about the use of BVS, citing stent thrombosis as the main concern. The present study, by highlighting the safety of DCB, confers a positive impact and better expectations regarding the stentless strategy.
Several advantages of DCB treatment for de novo coronary lesions have been mentioned. These advantages consist of (1) avoiding the persistence of metal material in the vessel wall, (2) reducing the duration of dual antiplatelet therapy, and (3) allowing for repeatability of the procedure. Because of these advantages, plenty of patients with de novo coronary artery lesions have received DCB treatment. A real-world observational study from the SCAAR registry found that treatment with DCB was associated with significantly lower risk for target lesion thrombosis (adjusted HR: 0.18; 95% CI: 0.04 to 0.82, P = 0:03) using a propensity-matched analysis compared to new-generation DES [32]. However, the possible vascular elastic recoil and dissections are still concerns regarding DCB treatment. Notably, our study showed that DCB treatment was associated with a reduced LLL, which meant that vascular elastic recoil and dissections might not be evident. Furthermore, our study reviewing 12 RCTs systematically showed that the rate of bailout stenting was lower, and gradually decreased by the year, with a less than 5% rate of bailout stenting in the past three years for patients without AMI. An interesting phenomenon found in our present study was that the rate of bailout stenting was higher in patients with AMI compared to those without. The possible reasons were as follows: (1) the target local lesion was more vulnerable and unstable in AMI patients, and (2) the PCI procedure was more emergent and urgent, and operators had less time to perform the elaborate operation. Therefore, due to the improvements in operation skills for PCI, DCB treatment, and intravascular imaging technology, the incidence of vascular elastic recoil and dissections which cause bailout stenting was comparatively lower and more acceptable in the current PCI era.
The advancements of DCB technologies facilitated the treatment of DCB for patients with de novo coronary lesions. The sensitive analysis result from our study showed that the PICCOLETO study affected the overall effect significantly. After omitting this study [13], the rate of TLR was lower in the DCB group than in the non-DCB group. The PICCO-LETO study used the first-generation Dior DCB (Eurocor Tech, Bonn, Germany), which had a lower concentration of paclitaxel covered on the balloon; this was considered the reason why DCB yielded inferior results compared to DES in this study [13]. Furthermore, several newer generation DCB have shown noninferior or superior results in patients with de novo coronary lesions compared with non-DCB devices [17,18]. These studies pointed out the fact that not all DCB are equal, and that they cannot be treated as a "class effect." Future DCB with improvements in excipient technology and introduction of more suitable antiproliferative drugs are expected to improve the treatment of patients with CAD.
Our study found that DCB treatment was superior to BMS in terms of TLR for patients with HBR. Major bleeding was a common complication of dual antiplatelet therapy (DAPT), especially in patients with HBR, and a powerful predictor of morbidity and mortality after PCI [33]. BMS with 1 month DAPT was once recommended [10]. After the emergence of new evidence, new-generation DES with shorter DAPT (3 months for stable CAD and 6 months for Cardiovascular Therapeutics acute coronary syndrome) was preferred over BMS for patients with HBR [1]. The LEADERS FREE study had shown that polymer-free DES was superior to BMS with respect to the primary safety and efficacy end points among patients with HBR when used with a 1-month course of DAPT [34]. However, the optimal technique of PCI in patients with HBR is still unknown. The superiority of DCB compared with BMS from our study offered a useful alternative therapy for HBR patients. With short DAPT needed for both DCB and new-generation DES therapy, future studies are warranted to evaluate the efficacy and safety between the two strategies for HBR patients.
Limitations
Our study has some limitations. First, we only qualitatively reviewed the long-term results of two trials reporting the long-term clinical events [10,12]. The BELLO study showed the rate of MACE was lower in the DCB group compared with the DES group at 3 years [23]. In the DEBUT study, DCB treatment was associated with lower MACE compared with BMS treatment in patients with HBR at 3-year followup [10]. More trials are needed to confirm the long-term efficacy and safety of DCB treatment. Second, bailout stenting was common in the earlier studies, and gradually declined. We could not assess the impact of cross-over treatment systematically since this information was not provided in most of the publications. Third, different types of DCB were used in the available studies. Majority (8 of 12 studies) used SeQuent Please DCB, while Dior, IN.PACT Falcon, Restore, and Pantera Lux DCB were used in one study each. Sensitive analyses performed with a leaving-one-out approach showed that the PICCOLETO study using Dior DCB affected the results, hinting at the potential discrepancies among different DCB technologies. Following the advances of DCB technologies and operators' experience, the efficacy of DCB treatment further improved. Nonetheless, it was inappropriate to conduct an additional analysis comparing different DCB technologies because of the limited data in the present study. Furthermore, the information on concomitant medication, such as antiplatelet and statin therapy, was insufficiently supplied and could therefore not be analyzed. Finally, this is not a patient-level meta-analysis, which may increase the risk of bias; caution must be taken in interpreting the outcomes of the present study.
Conclusion
DCB treatment had a numerically lower rate of TLR compared to non-DCB devices in patients with de novo coronary artery lesions. TSA showed that more patients were needed to confirm this result. Subgroup analyses showed that DCB was superior to uncoated devices (POBA and BMS) in terms of TLR. No significant differences were observed between the DCB and DES groups. In patients with HBR, DCB treatment had a lower rate of TLR than BMS. More high-quality randomized trials with long-term follow-ups are needed to further evaluate the role of DCB for the treatment of de novo lesions.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Disclosure
There are no relationships with the industry.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
2020-09-03T09:10:14.540Z
|
2020-08-10T00:00:00.000
|
{
"year": 2020,
"sha1": "c38999946c0417d0c495c839971858087d54bee2",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cdtp/2020/4158363.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd0726add8e9fb5b350c89ec1bb2e36fa6c36f0f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
12215062
|
pes2o/s2orc
|
v3-fos-license
|
Effect of thyroid hormone concentration on the transcriptional response underlying induced metamorphosis in the Mexican axolotl (Ambystoma)
Background Thyroid hormones (TH) induce gene expression programs that orchestrate amphibian metamorphosis. In contrast to anurans, many salamanders do not undergo metamorphosis in nature. However, they can be induced to undergo metamorphosis via exposure to thyroxine (T4). We induced metamorphosis in juvenile Mexican axolotls (Ambystoma mexicanum) using 5 and 50 nM T4, collected epidermal tissue from the head at four time points (Days 0, 2, 12, 28), and used microarray analysis to quantify mRNA abundances. Results Individuals reared in the higher T4 concentration initiated morphological and transcriptional changes earlier and completed metamorphosis by Day 28. In contrast, initiation of metamorphosis was delayed in the lower T4 concentration and none of the individuals completed metamorphosis by Day 28. We identified 402 genes that were statistically differentially expressed by ≥ two-fold between T4 treatments at one or more non-Day 0 sampling times. To complement this analysis, we used linear and quadratic regression to identify 542 and 709 genes that were differentially expressed by ≥ two-fold in the 5 and 50 nM T4 treatments, respectively. Conclusion We found that T4 concentration affected the timing of gene expression and the shape of temporal gene expression profiles. However, essentially all of the identified genes were similarly affected by 5 and 50 nM T4. We discuss genes and biological processes that appear to be common to salamander and anuran metamorphosis, and also highlight clear transcriptional differences. Our results show that gene expression in axolotls is diverse and precise, and that axolotls provide new insights about amphibian metamorphosis.
with increases in thyroid hormone (triiodothyronine, T 3 and thyroxine, T 4 ; TH) [1,2] and RNA synthesis [3]. These events are interconnected; at metamorphosis, tissue-specific concentrations of TH activate and repress transcriptional networks within target cells that in turn regulate new patterns of development [4]. Many genes that are associated with molecular and morphological events during metamorphosis have been identified from studies of anurans, and in particular Xenopus laevis. In contrast, little is known about patterns of gene expression during salamander metamorphosis.
Although anuran and salamander metamorphosis are regulated by many of the same endocrine factors, there is considerable developmental variation between these groups. Most conspicuously, some salamanders do not undergo a complete metamorphosis in nature. These salamanders are called paedomorphs because they retain larval characteristics into the adult stage, and because genetic and phylogenetic evidence suggests that they evolve from metamorphic ancestors [5,6]. Paedomorphosis in the Mexican axolotl (Ambystoma mexicanum) is associated with low hypothalamic-pituitary-thyroid (HPT) activity and differential sensitivity of tissues to TH that results in some cryptic biochemical and molecular changes, but not the complete suite of morphological changes seen in related, metamorphic tiger salamanders. Interestingly, A. mexicanum can be induced to undergo anatomical metamorphosis by administering TH and endocrine factors that function upstream of TH synthesis [7,8]. The axolotl provides an excellent alternative to anuran systems because metamorphosis can be precisely induced and studied in juveniles or adults that are not developing toward a metamorphic outcome.
Functional genomic approaches are beginning to reshape the way transcription is conceptualized during amphibian metamorphosis [9][10][11]. The transcriptional program for tissue regression, remodeling, and organogenesis is significantly more complicated than was initially predicted for Xenopus [12][13][14][15]. Previously, we used microarray technology to identify keratin biomarkers for T 4 induced metamorphosis in the integument (epidermis) of the Mexican axolotl [11]. We showed that 50 nM T 4 induces a complex transcriptional program and axolotls complete metamorphosis with no mortality. Interestingly, this T 4 concentration is known to affect gene expression and mortality in anurans [16][17][18] and it is higher than T 4 concentrations estimated in the serum of spontaneously metamorphosing salamanders. For example, Larras-Regard et al. [2] reported 28 nM as the maximum serum T 4 level in Ambystoma tigrinum, a close relative of the axolotl that typically undergoes metamorphosis. To further investigate the effect of T 4 concentration on induced metamorphosis in the Mexican axolotl, we report the results of a second microarray experiment that induced metamorphosis using a much lower concentration of T 4 (5 nM). Using 5 and 50 nM T 4 microarray datasets, we describe the temporal transcriptional response of T 4 induced metamorphosis and specifically address the following question: Does T 4 concentration affect morphological metamorphosis and gene expression in the axolotl? We discuss the biological significance of some of the differentially expressed genes (DEGs) that were identified and the relationship between salamander and anuran metamorphic gene expression programs.
Effect of T 4 concentration on morphological metamorphosis
During T 4 induced metamorphosis, Mexican axolotls progress through developmental stages (0-IV) [19] that are defined by the resorption of the upper and lower tailfins, dorsal ridge, and gills. We staged all axolotls after 0, 2, 12, and 28 days of T 4 treatment. No metamorphic changes were observed after two days of T 4 treatment and thus axolotls from both T 4 treatments were assigned to Stage 0. At Day 12, morphological changes were observed in 50 nM T 4 treated axolotls (Stages I and II) but 5 nM T 4 treated axolotls were indistinguishable from control animals (Stage 0). At Day 28, axolotls reared in 50 nM T 4 had fully resorbed tailfins and gills, and thus had completed morphological metamorphosis (Stage IV). Between Days 12 and 28, 5 nM T 4 treated axolotls initiated metamorphosis but did not complete all morphological changes by Day 28 (Stage III). On average, individuals complete metamorphosis after 35 days in 5 nM T 4 (unpublished data). Thus, a low concentration of T 4 delays the initiation timing of morphological metamorphosis but not the length of the metamorphic period. 4 Our first set of statistical analyses tested control axolotls for temporal changes in mRNA abundance that were independent of T 4 treatment [see Additional files 1, 2]. Temporal changes are expected if patterns of transcription (gene expression) change significantly over time as salamanders mature, or if there are uncontrolled sources of experimental variation. After adjusting the false discovery rate (FDR) to 0.05, none of the probe-sets (genes) on the custom Ambystoma GeneChip were identified as significantly differentially abundant as a function of time. Thus, we found no statistical support for differential gene expression among control animals.
Gene expression in the presence of T 4 : 5 versus 50 nM
At each of the times that we estimated mRNA abundances during T 4 induced metamorphosis, the number and diversity of genes that were differentially expressed between the 5 and 50 nM T 4 treatments differed ( Figure 1A) [see Addi-tional files 3,4]. A total of 402 DEGs that differed by ≥ two-fold at one or more of our sampling times were identified among all day by T 4 treatment contrasts ( Figure 1A) [see Additional files 3,4]. We identified 30 DEGs as early as Day 2 (Table 1), and eighty percent of these DEGs were up regulated in 50 nM T 4 relative to 5 nM T 4 . This small group of early response genes was statistically associated with the amino acid transport and amine transport ontology terms. Additional gene functions of these early responding genes include epithelial differentiation, ion transport, RNA processing, signal transduction, and apoptosis/growth arrest. We note that no differentially expressed transcription factors were identified as DEGs at Day 2, and neither thyroid hormone receptor (TR; alpha and beta) was identified as differentially expressed at any time point in our study. At Day 12, when axolotls in 50 nM T 4 were undergoing dramatic tissue resorption events and axolotls in 5 nM T 4 were indistinguishable from controls, we identified the greatest number of DEGs between the T 4 treatments (n = 319; Figure 1A). An approximately equivalent number of up and down regulated DEGs were identified [see Additional files 3, 4]. These genes were Results of the statistical analyses conducted on the microar-ray data enriched for functions associated with epidermis development, carbohydrate metabolism, ectoderm development, response to chemical stimulus, negative regulation of cell proliferation, response to abiotic stimulus, negative regulation of biological process, development, and organ development. By Day 28, when axolotls in 50 nM T 4 had completed metamorphosis and axolotls in 5 nM T 4 were continuing to show morphological restructuring, we identified 216 DEGs that differed between the T 4 treatments ( Figure 1A). Of the identified DEGs, 76% were down regulated in 50 nM T 4 relative to 5 nM T 4 [see Additional files 3,4]. DEGs identified at Day 28 were associated with response to pest, pathogen, or parasite, negative regulation of cellular process, positive regulation of physiological process, response to stress, response to stimulus, transition metal ion transport, di-, tri-valent inorganic cation transport, response to other organism, immune response, development, organismal physiological process, muscle contraction, response to biotic stimulus, and response to bacteria. The functional categories that we identified show that the axolotl epidermal transcriptional response to T 4 is complex, involving hundreds of DEGs.
To further explore the effect of T4 on gene expression and morphological metamorphosis we conducted a principal component analysis (PCA). This analysis shows that global gene expression and morphological metamorphosis are strongly correlated, but there is little or no correlation between gene expression and T4 treatment ( Figure 2). This suggests that after metamorphosis was initiated within T4 treatments, molecular and morphological events were coordinately regulated. T4 concentration affected the onset timing of metamorphosis in axolotls, but not the sequence of transcriptional and morphological events that define this process.
Modeling the transcriptional response of genes during induced metamorphosis
To further investigate the effect of T 4 concentration on induced metamorphosis, we modeled mRNA abundance estimates from the 5 and 50 nM T 4 treatments using quadratic and linear regression. The regression analyses identified 542 and 709 DEGs that changed by ≥ two-fold relative to Day 0 controls, in the 5 and 50 nM T 4 treatments respectively [see Additional files 5,6,7,8]. Given our previous analyses, we expected to observe different regression patterns (expression profiles; Figure 3) for DEGs from each treatment because metamorphic initiation timing was delayed in 5 nM T 4 and metamorphosis was only completed in 50 nM T 4 . Indeed, most DEGs identified by the 5 nM regression analysis exhibited linear expression profiles during metamorphosis (e.g., linear down, LD; linear up, LU; Figure 3) while the majority of the DEGs identified by the 50 nM regression analysis exhibited curvilinear and parabolic expression profiles (e.g., quadratic linear convex down, QLVD; quadratic linear concave up QLCU; quadratic convex QV; quadratic concave, QC; Figure 3; see the methods for a summary of the biological interpretations of these expression profiles in the context of our experiment). Thus, biological processes known to be fundamental to tissue remodeling and/ or development were identified from both T 4 treatments, however they were statistically associated with different regression patterns ( Figure 3). For example, four collagendegrading matrix metallopeptidase (MMP) genes (MMP13, MMP9, MMP1) exhibited linear up regulated responses in 5 nM T 4 and were categorized as LU. However, under 50 nM T 4 , these genes were categorized among the QC and QLCU profiles. Several genes associated with organ development (transgelin, mitogen-activated protein kinase 12, distal-less homeo box 3, actin binding lim protein 1, collagen type VI alpha 3, and msh homeo box homolog 2) were up regulated in a linear fashion in 50 nM T 4 and were categorized as LU. In 5 nM T 4 , several of these genes (mitogenactivated protein kinase 12, actin binding lim protein 1, and msh homeo box homolog 2) were statistically significantly up regulated (LU and QLVU) but failed to eclipse our twofold change criteria. A single gene (collagen type VI alpha 3) was not statistically significant and categorized as "Flat" in 5 nM T 4 . However, this gene did not appreciably deviate from base-line expression levels until Day 28 in 50 nM T 4 ( Figure 4A). We attribute these differences between the T 4 treatments to the delayed onset timing of metamorphosis in the 5 nM T 4 treatment. Overall, the same generalized direction of expression was observed for 457 of 463 (99%) DEGs that were commonly identified from both T 4 treatments ( Figure 1B). The six genes (calponin 2, ethyl- 4B). These results show that T 4 concentration affected the shape of temporal gene expression profiles but essentially all epidermal DEGs that were identified in both T 4 treatments were regulated in the same direction by 5 and 50 nM T 4 .
Results of the principal component analysis
The regression analyses identified a number of DEGs that only met both of our criteria (statistically significant and ≥ two-fold change) in one of the T4 concentrations (5 nM, n = 79; 50 nM, n = 246; Figure 1B). Of the 79 genes unique to 5 nM T4, 36 (46%) were statistically significant in 50 nM T4 but did not eclipse our two-fold change criteria. Of the remaining 43 genes unique to 5 nM T4, 25 exhibited ≥ two-fold change in 50 nM T4 at one or more sampling times but were not statistically significant. Inspection of the 50 nM T4 regression patterns and fold change data associated with the 79 genes unique to 5 nM T4 revealed that all of these genes exhibited similar directional trends (up versus down regulation) in 5 and 50 nM T4 [see Additional files 9, 10]. Of the 246 genes unique to 50 nM T4, 96 (39%) were statistically significant in 5 nM T4, but failed to eclipse our two-fold change criteria. An additional 48 genes exhibited ≥ two-fold change for at least one sampling time in 5 nM T4, but were not statistically significant. Inspection of the 5 nM T4 regression patterns and fold change data associated with the 246 genes unique to 50 nM T4 demonstrated that 209 (85%) of these genes exhibited similar directional trends in 5 and 50 nM T4 [see Additional files 11,12]. An additional 22 genes unique to 50 nM T4 did not exhibit > 1.5 fold changes relative to Day 0 controls until Day 28 (at which time they were differentially regulated by ≥ two-fold), suggesting that they are expressed during the terminal stages of metamorphosis [see Additional files 11,12]. Presumably, these genes were not detected in 5 nM T4 because we did not sample latter time points for this concentration. These results reiterate the point that essentially all genes identified by our study were similarly, directionally expressed in the 5 and 50 nM T4 treatments.
To address similarity in terms of magnitude, we compared maximum fold level values for genes that exhibited QLVD and QLCU expression profiles in both T4 treatments (Figure 3; Table 2). We assumed that genes exhibiting these profiles had achieved maximum/minimum mRNA levels during the experiment, and thus could be reliably compared between treatments. No statistical differences were observed for fold level values of 13 QLVD and QLCU genes between the 5 and 50 nM T4 treatments (Wilcoxon signed rank test, Z = -1.293, P = 0.1961) [20], and the fold level values were highly correlated ( Table 2; Spearman's rho = 1.00, P < 0.0001) [20]. Although this analysis was performed on a small subset of genes, the results suggest that mRNA abundances are similar for genes that are differentially expressed in 5 and 50 nM T4.
Bioinformatic comparison: axolotl versus Xenopus
Salamanders and anurans may express similar genes during amphibian metamorphosis. To test this idea, we compared a list of 'core' up regulated metamorphic genes from Example regression profiles Xenopus [10] to DEGs identified from our study of axolotl.
Of the 59 genes that were reported as differentially up regulated by ≥ 1.5 fold in limb, brain, tail, and intestine from metamorphosing Xenopus, 23 (39%) are represented by at least one of the 3688 probe-sets analyzed in our study. Of these, only two (FK506 binding protein 2 and glutamatecysteine ligase modifier subunit) were identified as statistically significant and differentially expressed by ≥ two-fold in our study [see Additional files 13,14]. FK506 binding protein 2 was up regulated in axolotl and Xenopus. However, glutamate-cysteine ligase modifier subunit was down regulated in axolotl and up regulated in Xenopus. Thus, < 5% of the 'core' DEGs that are commonly expressed among Xenopus tissues during metamorphosis, were identified as DEGs in our study using axolotl.
We also conducted a comprehensive bioinformatics comparison between DEGs from axolotl epidermis and DEGs from T 3 induced Xenopus intestine [10]. Gene expression similarities may exist between the Xenopus intestine and axolotl epidermis because both organs undergo extensive extracellular remodeling that is associated with apoptosis of larval epithelial cells and the proliferation and differentiation of adult cell types. The presumptive orthologs of 111 of the 820 non-redundant DEGs from our study correspond to DEGs from Xenopus intestine [see Additional files 13,14]. Of these 111 genes, 50 (45%) exhibited the same direction of differential expression in axolotl epidermis and Xenopus intestine. This list includes genes that are known to be associated with metamorphic developmental processes in amphibians. For example, two MMPs (MMP9 and MMP13) that are associated with extra cellular matrix turnover were up regulated in Xenopus intestine and axolotl epidermis. However, other genes were regu-lated in opposite directions. For example, keratin 12 and keratin 15 were down regulated in axolotl epidermis but up regulated in Xenopus intestine. These results show that there are similarities and differences in gene expression between Xenopus and axolotls when comparing tissues that undergo similar remodeling processes.
Biological, technical, and statistical replication
In order to validate a subset of genes that were identified as DEGs in our microarray experiment, we conducted a second experiment in which we used quantitative realtime reverse transcriptase polymerase chain reaction (Q-RT-PCR) to generate expression profiles for five candidate genes (Table 3). These genes were chosen because they are involved in a variety of biological processes including cytoskeleton organization (desmin), cell-cell adhesion (desmocollin 1), tissue remodeling (matrix metallopeptidase 13), and ion transport (solute carrier family 31 member 1).
In addition, we investigated SRV_10216_s_at in order to verify results from a gene with unknown function. Results of the regression analyses performed on the Q-RT-PCR data are presented in Figure 5 alongside plots of the analogous microarray data. Residuals from the models fit for desmocollin 1 and solute carrier family 31 member 1 exhibited significant departures from normality (Shapiro-Wilk test, P < 0.05). Overall, there was very good agreement between the expression profiles obtained from microarray and Q-RT-PCR analyses. The fact that the Q-RT-PCR results are biologically and technically independent of the microarray data strongly suggests that these patterns are repeatable and unlikely to be experimental or technical artifacts. Previously, we used stringent statistical criteria (one-way ANOVA, FDR = 0.001, and ≥ two-fold change) to identify 123 annotated genes that exhibited robust responses to 50 nM T 4 [11]. In that study, we focused on the potential of several keratin loci to serve as biomarkers of early metamorphic changes that precede changes in gross morphology. In this study, we used less stringent criteria to identify DEGs and more fully explore temporal gene expression responses when T 4 concentration is varied. Of the 123 DEGs previously identified in the epidermis of metamorphosing axolotls, 116 genes were statistically significant and differentially regulated by ≥ two-fold in the 50 nM regression analysis. Of these, 91 (78%) were statistically significant and differentially expressed by ≥ two-fold in the 5 nM regression analysis. Only one of these 91 genes was expressed in opposite directions between the T 4 treatments (3' repair exonuclease 2). However this gene was classified as LU in 5 nM T 4 and QLCD in 50 nM T 4 , and represents another example of a transiently up regulated gene that was categorized as QLCD [see Additional files 7,8]. The 25 genes identified in 50 nM T 4 but not 5 nM T 4 ( Table 4) may function in late stage metamorphic processes that were only attained within 28 days under 50 nM T 4 . For example, keratin 17 is known to be a marker of proliferating basal epidermal stem cells in mammals [21]; this gene may be expressed late during metamorphosis in terminal cell populations of axolotl epidermis that give rise to adult epithelial cells. Other genes associated with tissue stress, injury, and immune function (ferritin heavy polypeptide 1, ras-related C3 botulinum toxin substrate 2, and cathespin S) also appear to be late response genes although we can't rule out the possibility that these genes may be differentially expressed as a toxic response to 50 nM T 4 . The majority of the DEGs identified previously using 50 nM T 4 and very strict statistical criteria were similarly identified using 5 nM T 4 and different statistical methods/criteria. These findings further emphasize that the metamorphic gene expression programs of A. mexicanum are similar even when TH concentration is varied by an order of magnitude.
Discussion
Paedomorphic Mexican axolotls can be induced to undergo metamorphosis by administering TH. We found that axolotls initiate metamorphosis at least one week earlier in 50 nM versus 5 nM T 4 and complete morphological transformations in 28 days. The lower 5 nM T 4 concentration was sufficient to induce metamorphosis but the initiation timing was delayed and this proportionally delayed the time to complete metamorphosis. The same sequence of morphological changes was observed between T 4 treatments and the majority of DEGs were identified in both T 4 treatments, although their expression profiles were temporally shifted. Nearly all DEGs exhibited similar directional trends between treatments, and the subset of genes that were directly compared between the T 4 treatments exhibited similar relative abundances. Our results show extremely similar changes in gene expression and morphology in the axolotl when varying T 4 by an order of magnitude. This is an interesting result because T 4 concentrations within this range are toxic to anurans and are known to affect tissue-specific abundances of transcription factors that regulate metamorphic gene expression programs [16]. Below, we discuss the axolotl's precise transcriptional response to the range of T 4 concentrations examined in this study. We then discuss the epidermal gene expression program of the axolotl, noting gene expression similarities and differences between salamander and anuran metamorphosis.
TH levels are known to increase in larval amphibians as they mature and reach maximal levels during metamorphic climax. When the concentration of TH reaches a critical intracellular level, transcriptional changes are initiated that bring about new patterns of development.
Because the TH concentration required to alter transcription is cell-specific, tissues are often described as having Comparison of Q-RT-PCR and microarray data Figure 5 Comparison of Q-RT-PCR and microarray data. Comparisons of the relationships between transcript abundance and days of 50 nM T 4 treatment as assessed in different biological samples via Q-RT-PCR (upper panels) and Affymetrix GeneChip technology (lower panels). Trend lines in the Q-RT-PCR data were obtained by linear or quadratic regression. Models with P < 0.01 are denoted by ** and models with P < 0.0001 are denoted by ***. R 2 refers to adjusted R 2 . The microarray data represent the mean of three samples ± standard deviation. MMP13 = matrix metallopeptidase 13 and SLC31A1 = solute carrier family 31, member 1.
different sensitivities to TH. The sensitivity of cells to TH involves multiple factors that affect the intracellular concentration of TH and the ability of TH to affect transcription, which is determined in part by the number of nuclear TH binding sites (TRs) [22]. Mexican axolotls have functional TRs [23], but TH levels are apparently too low to initiate metamorphosis [24]. Direct hypothalamic application of T 4 , using a dose that is insufficient to initiate metamorphosis via intraperitoneal injection, is sufficient to initiate metamorphosis in the axolotl [7] and related paedomorphic tiger salamanders [25]. Thus, axolotls are capable of synthesizing TH in sufficient quantities to initiate and complete metamorphosis. However, the pituitary doesn't release a sufficient amount of thyrotropin to trigger the metamorphic process [24]. Axolotl epidermis can be stimulated to initiate metamorphic changes in vitro, in isolation of endogenously synthesized TH [26]. Thus, the metamorphic timing delay that we observed between the 5 and 50 nM T 4 treatments probably reflects the time required to autonomously activate gene expression within TH responsive cells of the epidermis and the time required to stimulate the HPT axis. Rosenkilde [27] showed that this latency period is TH concentration dependent and above a critical dose (37.5 nM T 3 ) there is no variation in latency. After accounting for an estimated one-week difference in the initiation timing of metamorphosis between the T 4 treatments, there was not a difference in the length of the metamorphic period. Thus, endogenous TH levels were functionally, if not quantitatively similar between 5 and 50 nM T 4 treated axolotls after metamorphosis was initiated. This idea is also supported by the precise gene expression response that we observed between the T 4 treatments: essentially all of the genes were expressed in the same direction, and a subset of genes that could be reliably compared showed the same magnitude of gene expression.
The precision of the transcriptional response between T 4 treatments indicates that axolotls are surprisingly tolerant to T 4 levels that dramatically affect anuran mortality and gene expression. Others before us have also noted the tolerance of axolotls to high levels of T 4 [27,28]. Because anuran metamorphosis involves a more extensive and integrated set of remodeling events that are accomplished over a shorter time frame, there may be greater overlap in the sensitivities of cells to TH among tissues that causes metamorphic remodeling events to occur out of sequence. The fact that salamander metamorphosis encompasses fewer morphological changes and that many of the changes are not as integrated (hindlimb development occurs months before tail metamorphosis) may explain why axolotls are so tolerant to high T 4 concentrations. However, failure to observe an increase in thyroid hormone receptor beta transcription in axolotl suggests there may be fundamental regulatory differences between anuran and salamander metamorphosis.
The larval epidermis of axolotls is extensively remodeled during T 4 induced metamorphosis [29][30][31]. Application of TH to paedomorphic axolotls induces many of the same epidermal changes that occur during natural and induced metamorphosis in anurans, including apoptosis of larval cells, proliferation of adult cell types, and epidermal keratinization. Our results show that TH induces a diversity of transcriptional changes that are associated with specific remodeling processes. We observed significant gene expression changes between the T 4 treatments at Day 2, prior to observable morphological changes at the wholeorganism level. Most of these genes were up regulated in the higher T 4 concentration relative to the lower T 4 concentration. Day 2 gene expression changes may reflect direct transcriptional activation via the binding of exogenous TH to TRs, which are functional in axolotls [23]. For example, the human ortholog of phosphoenolpyruvate carboxykinase 1, a primary target for transcriptional regulation of gluconeogenesis, is known to have a thyroid hormone response element [32]. Early up regulation of phosphoenolpyruvate carboxykinase 1, as well as fructose 1,6 bisphosphotase, glucose 6 phosphate dehydrogenase, and 70 kD heat shock protein 5 at Day 12, indicates a biochemical response at the cellular level that includes activation of key regulatory enzymes of the gluconeogenic pathway. This is an interesting finding for the epidermis because such responses are generally associated with hepatic cell functions. Several other interesting gene expression changes were detected at Day 2. These include ATP binding cassette, subfamily B, member 4, and transglutaminase 1. ATP binding cassette family genes are up regulated in mammals during epidermal lipid reorganization and keratinocyte differentiation [33], and transglutaminase 1 encodes an enzyme that functions in the formation of the cross-linked, cornified envelop of keratinocytes [34]. The early expression of these genes is curious because keratinization is assumed to be a terminal differentiation event in the metamorphosis of amphibian epidermis. Our results suggest that the process of keratinization is initiated very early. As a final example, two proteins that are specific to the mammalian inner ear were identified as significantly down regulated: otogelin and otoancorin. The head epidermis of the axolotl contains mechanoreceptors that are homologous to hair cells of the mammalian ear [35]. Our results suggest remodeling of these and other neural components in the axolotl skin at metamorphosis. There are many additional examples that could be highlighted from our gene lists that have not been previously discussed within the context of amphibian metamorphosis.
The most gene expression changes and the greatest changes in transcript abundances were observed at Day 12 in 50 nM T 4 . For example, keratin 14, a prototypical marker of proliferating keratinocytes in mammals [36], was up regulated 1146 fold in 50 nM T 4 . This also marks the time of the greatest morphological remodeling. After this time, gene expression levels of many genes decreased. Thus, as has been described in anurans, many gene expression changes in axolotl are transient, increasing initially and then decreasing. For example, apoptosis is activated and terminated during anuran [37] and salamander [38] metamorphosis to regulate the death and replacement of larval epithelial cells. When statistically significant genes were analyzed in the absence of a two-fold change criterion, genes that were transiently up regulated (i.e., exhibited QC profiles) were statistically associated with apoptosis and proteolysis functional ontologies (data not shown). As another example of the similarities between the metamorphic gene expression changes that occur in the epidermis of frogs and salamanders, we identified three distinct probe-sets with established orthologies to human uromodulin that are dramatically down regulated in the epidermis of metamorphosing A. mexicanum. Furlow et al. [39] have observed analogous results in X. laevis and have shown that Xenopus uromodulin orthologs are exclusively expressed in the apical cells of the larval epidermis. These and other genes that are similarly expressed between urodeles and anurans will provide useful biomarkers for comparative studies of metamorphosis between these two groups.
Our informatics comparison between DEGs identified from axolotl epidermis and Xenopus intestine identified > 100 genes that are commonly expressed in these organs during metamorphic remodeling. However, over half of these genes were differentially expressed in opposite directions in axolotl and Xenopus. For example, several genes associated with immune function (CD74 antigen, chemokine ligand 5, interferon regulatory factor 1, proteasome beta subunit 9, and class I-related major histocompatibility complex) were down regulated in axolotl epidermis and up regulated in Xenopus intestine. This is not too surprising because it is well established that the axolotl immune system is fundamentally different from that of other vertebrates, including Xenopus [40]. Additionally, genes expressed in opposite directions in these comparisons may reflect fundamental differences that exist between intestinal and epidermal remodeling. Genes that exhibited similar transcriptional patterns between Xenopus intestine and axolotl epidermis were associated with many different functions. For example, DNA methyltransferase 1 and 17-beta hydroxysteroid dehydrogenase 8 were down regulated in axolotl epidermis and Xenopus intestine during induced metamorphosis. In mammals, DNA methyltransferase 1 functions to maintain DNA methylation patterns that influence gene transcription [41] and 17-beta hydroxysteroid dehydrogenase 8 preferentially inactivates androgens and estrogens [42,43]. This latter example suggests a transcriptional response to increase gonadal steroid hormone levels during epithelial remodeling in amphibians. As a final example, a presumptive ortholog to human keratin 24 (SRV_13498_s_at) that was up regulated by > 1000 fold in axolotls exposed to 50 nM T 4 was also up regulated in Xenopus intestine, albeit by a comparatively modest four-fold increase. These comparisons emphasize similarities and differences in gene expression during metamorphic epithelial tissue remodeling in anurans and salamanders.
Conclusion
Recent microarray analyses of anurans and salamanders show that amphibian metamorphosis involves thousands of gene expression changes, involving many biological processes that have previously received little attention [9,10]. Our results show similarities and differences in the metamorphic transcriptional programs of anurans and salamanders. We expected to identify similarly expressed genes because epidermis was included in anuran tissue preparations that were used for microarray analysis, and because tissue remodeling that occurs during metamorphosis appears to involve some evolutionarily conserved biological processes. We also expected to observe transcriptional differences because anuran and salamander lineages diverged > 300 million years ago. Our results suggest that amphibian metamorphosis cannot be fully understood from the study of a few anuran species. We show here that axolotls offer several advantages (inducible metamorphosis, robust transcriptional response, less complex integration of remodeling events) that can be exploited to provide complementary and novel perspectives on amphibian metamorphosis.

Methods

T 4 treatment

Salamanders were reared and treated as described in Page et al. [11]. Five and 50 nM T 4 solutions were made fresh for each water change by mixing 2.5 or 25 mL of 100 μM stock with 40% Holtfreter's solution to a final volume of 50 L. Water was changed every third day.
Study animals for microarray analyses
Skin tissue was collected from salamanders following 0, 2, 12, and 28 days of T 4 treatment. These time points were sampled to test for early gene expression changes that might precede morphological metamorphosis, and because 28 days is a sufficient period for complete metamorphosis of 50 nM T 4 induced A. mexicanum [11]. To obtain tissue, salamanders were anesthetized in 0.01% benzocaine (Sigma, St. Louis, MO) and ≈ 1 cm 2 of skin tissue was removed from the top of the head.
RNA isolation
Total RNA was extracted for each tissue sample with TRIzol (Invitrogen, Carlsbad, CA) according to the manufacturer's protocol; additionally, RNA preparations were further purified using a Qiagen RNeasy mini column (Qiagen, Valencia, CA). UV spectrophotometry and a 2100 Agilent Bioanalyzer (Agilent Technologies, Santa Clara, CA) were used to assess the quantity and quality of RNA preparations. Three high quality RNA isolations from each treatment and sampling time combination were used to make individual-specific pools of biotin labeled cRNA probes. Each of the 30 pools was subsequently hybridized to an independent GeneChip. The University of Kentucky Microarray Core Facility generated cRNA probes and performed hybridizations according to standard Affymetrix protocols.
Microarray platform
A custom Ambystoma Affymetrix (Santa Clara, CA) GeneChip was designed from curated expressed sequence tag assemblies for A. mexicanum and A. tigrinum [44,45]. The array contains 4844 probe-sets, 254 of which are controls or replicate probe-sets. Detailed descriptions of this microarray platform can be found in Page et al. [11] and Monaghan et al. [46].
Quality control and low-level analyses
We used the Bioconductor package affy [47] that is available for the statistical programming environment R [48] to perform a variety of quality control and preprocessing procedures at the individual probe level [49,50]. These procedures included several standard array-quality metrics, including the percentage of present calls (minimum = 81.5, maximum = 86.5, n = 30). Next, we processed our data by implementing the robust multiarray average (RMA) algorithm of Irizarry et al. [51].
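For illustration, the quantile-normalization step at the heart of RMA can be sketched in Python (a minimal sketch on hypothetical data; the actual analysis used the Bioconductor affy implementation, which also performs background correction and median-polish summarization):

import numpy as np

def quantile_normalize(x):
    # Force every column (GeneChip) to share the same empirical distribution.
    # x: 2-D array, rows = probe-sets, columns = arrays.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each value within its column
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)    # average intensity at each rank
    return mean_quantiles[ranks]

rng = np.random.default_rng(0)
log2_intensities = rng.normal(8, 2, size=(4844, 30))    # hypothetical: 4844 probe-sets, 30 chips
normalized = quantile_normalize(log2_intensities)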
Assessment of GeneChip precision
To obtain estimates of between-GeneChip repeatability, we generated correlation matrices for the hybridization intensities across all probe-sets among replicate GeneChips. Very high and consistent mean r-values were calculated for each of the 10 treatment by sampling time combinations (range of mean r ± standard error = 0.966 ± 0.002 to 0.986 ± 0.001). These results demonstrate that we were able to obtain a high level of repeatability between replicate GeneChips. Our data are MIAME compliant and raw data files can be obtained at Sal-Site [45,52].
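The repeatability calculation amounts to correlating replicate arrays over all probe-sets; a minimal Python sketch on hypothetical data:

import numpy as np

def replicate_repeatability(chips):
    # chips: rows = probe-sets, columns = replicate GeneChips (3 per combination here).
    r = np.corrcoef(chips, rowvar=False)            # Pearson r between replicate columns
    upper = r[np.triu_indices_from(r, k=1)]         # unique replicate pairs
    return upper.mean(), upper.std(ddof=1) / np.sqrt(upper.size)

rng = np.random.default_rng(1)
signal = rng.normal(8, 2, size=(4844, 1))           # shared biological signal
chips = signal + rng.normal(0, 0.3, size=(4844, 3)) # three noisy technical replicates
mean_r, se_r = replicate_repeatability(chips)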
Data filtering
Microarray platforms may not accurately or precisely quantify genes with low intensity values [53,54]. Because low intensity genes contribute to the multiple testing problem that is inherent to all microarray studies, we filtered probe-sets whose mean expression values across all GeneChips (n = 30 per gene) were smaller than or equal to the mean of the lowest quartiles (25 th percentiles) across all GeneChips (n = 30, mean = 6.53, standard deviation = 0.04; data presented on a log 2 scale). Upon performing this filtration step, 3688 probe-sets were available for significance testing. We then performed PCA on the centered and scaled data from these probe-sets. This analysis allowed us to visualize the relationships between GeneChips within and across treatments.

We investigated whether genes exhibited differential expression as a function of time in the absence of T 4 (control animals sampled at Days 0, 2, 12, and 28) via linear and quadratic regression [55]. We corrected for multiple testing by evaluating α 0 according to the algorithm of Benjamini and Hochberg [56] with a FDR of 0.05. α 1 was set to 0.05.
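The intensity filter described above can be expressed compactly; a minimal Python sketch, assuming a log2-scale probe-set-by-GeneChip matrix:

import numpy as np

def filter_low_intensity(log2_x):
    # Threshold = mean of the per-chip lowest quartiles (25th percentiles).
    threshold = np.percentile(log2_x, 25, axis=0).mean()
    keep = log2_x.mean(axis=1) > threshold          # retain probe-sets above the threshold
    return log2_x[keep], keep

rng = np.random.default_rng(2)
log2_x = rng.normal(8, 2, size=(4590, 30))          # hypothetical post-QC matrix
filtered, mask = filter_low_intensity(log2_x)       # 3688 probe-sets survived in the real data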
Detecting and classifying DEGs
We conducted three analyses to investigate the effect of T 4 on epidermal gene expression. For the first analysis, we used limma [57,58] to identify genes that were differentially expressed as a function of T 4 concentration. The limma package couples linear models with an empirical Bayes methodology to generate moderated t-statistics for each contrast of interest. This approach has the same effect as shrinking the variance towards a pooled estimate and thus reduces the probability of large test statistics arising due to underestimations of the sample variances. Operating limma requires the specification of two matrices. The first is a design matrix in which the rows represent arrays and the columns represent coefficients in the linear model. The second is a contrast matrix in which the columns represent contrasts of interest and the rows represent coefficients in the linear model. For this analysis, the design matrix specified a coefficient for each unique treatment by sampling time combination (10 coefficients) and the contrast matrix specified the calculation of contrasts between the two T 4 concentrations (5 and 50 nM) at each of the non-zero sampling times (Days: 2, 12, and 28). In addition to moderated t-statistics, limma also generates moderated F-statistics. These moderated F-statistics test the null hypothesis that no differences exist among any of the contrasts specified by a given contrast matrix. A FDR correction [56] of 0.05 was applied to the P-values associated with the moderated F-statistics of the contrast matrix.
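The design and contrast matrices described above can be illustrated with a minimal Python sketch (hypothetical data; limma itself additionally shrinks each gene's variance toward a pooled estimate before forming the moderated t- and F-statistics):

import numpy as np

# One coefficient per treatment-by-time combination: control at Days 0, 2, 12, 28;
# 5 nM and 50 nM T4 at Days 2, 12, 28 (10 groups, 3 replicate chips each).
groups = ["ctrl_d0", "ctrl_d2", "ctrl_d12", "ctrl_d28",
          "t5_d2", "t5_d12", "t5_d28", "t50_d2", "t50_d12", "t50_d28"]
design = np.kron(np.eye(len(groups)), np.ones((3, 1)))   # 30 arrays x 10 coefficients

# Contrasts: 50 nM minus 5 nM at each non-zero sampling day.
contrasts = np.zeros((len(groups), 3))
for j, day in enumerate(["d2", "d12", "d28"]):
    contrasts[groups.index("t50_" + day), j] = 1.0
    contrasts[groups.index("t5_" + day), j] = -1.0

rng = np.random.default_rng(3)
y = rng.normal(8, 1, size=30)                            # hypothetical log2 values, one gene
beta, *_ = np.linalg.lstsq(design, y, rcond=None)        # estimated group means
log2_fold_changes = contrasts.T @ beta                   # one estimate per contrast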
In order to further reduce the number of false positives, we required that all "identified" DEGs be differentially regulated by ≥ two-fold at one or more of the contrasted time points.
The last two analyses were conducted using the regression-based approach of Liu et al. [55] to detect genes that exhibit differential expression as a function of days of T 4 treatment. This approach also classifies DEGs into nine categories based on their temporal expression profiles as determined via linear and quadratic regression. In the context of our experiment, these profiles have specific biological interpretations. However, exceptions to these interpretations exist (see results for examples). In general, genes that exhibited LD, LU, QLCD, and QLVU expression profiles were still actively undergoing changes in their expression levels when the study was terminated. In contrast, genes that exhibited QLVD and QLCU expression profiles underwent down and up regulation respectively but reached steady state expression levels before the experiment was terminated. Finally, genes that exhibited QC and QV expression profiles were transiently up and down regulated respectively before returning to baseline expression levels. Null results are described by the 'Flat' category. Separate analyses were conducted for the 5 and 50 nM datasets with α 0 evaluated at a FDR of 0.05 according to the algorithm of Benjamini and Hochberg [56] and α 1 = 0.05. DEGs were required to exhibit ≥ two-fold changes relative to Day 0 controls at one or more sampling times before they were categorized as "identified".
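A toy classifier in the spirit of this regression-based approach, using the signs and significance of the fitted linear and quadratic terms (a simplified sketch on hypothetical data that collapses the nine categories into a few coarse labels):

import numpy as np
import statsmodels.api as sm

def classify_profile(days, expr, alpha=0.05):
    d = np.asarray(days, dtype=float)
    X = sm.add_constant(np.column_stack([d, d ** 2]))
    fit = sm.OLS(expr, X).fit()
    if fit.pvalues[2] < alpha:                     # significant curvature
        return "QC-like (transient up)" if fit.params[2] < 0 else "QV-like (transient down)"
    if fit.pvalues[1] < alpha:                     # monotone linear trend only
        return "LU" if fit.params[1] > 0 else "LD"
    return "Flat"

days = np.repeat([0, 2, 12, 28], 3)                # 3 replicate chips per sampling day
rng = np.random.default_rng(4)
expr = 8 + 0.5 * days - 0.015 * days ** 2 + rng.normal(0, 0.2, days.size)
label = classify_profile(days, expr)               # transient up regulation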
Over representation analyses for genes with established orthologies
Biological process gene ontology categories that are over represented in our lists of DEGs (statistically significant and ≥ two-fold change) were identified using the Database for Annotation, Visualization and Integrated Discovery (DAVID) [59]. In all analyses, the 3085 probe-sets on the Ambystoma GeneChip with established orthologies were used as the background for generating expected values. The EASE threshold was always set to 0.05, and the count threshold was always set to two.
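The underlying test is hypergeometric: given the 3085 annotated probe-sets as background, ask whether a GO category contains more DEGs than expected by chance. A minimal sketch with hypothetical counts (DAVID's EASE score is a conservative variant of this test):

from scipy.stats import hypergeom

def go_overrepresentation(n_background, n_in_category, n_deg, n_deg_in_category):
    # P(X >= observed) when drawing n_deg genes from the background at random.
    return hypergeom.sf(n_deg_in_category - 1, n_background, n_in_category, n_deg)

# Hypothetical counts: 3085 annotated probe-sets, 120 annotated to 'apoptosis',
# 400 DEGs overall, 30 of which fall in the category.
p = go_overrepresentation(3085, 120, 400, 30)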
Bioinformatic comparison with Xenopus
In a recent microarray study, Buchholz et al. [10] presented "a core set of up regulated genes". These genes have been identified as up regulated in response to T 3 treatment by ≥ 1.5 fold in every tissue that has been examined in metamorphosing X. laevis via microarray analysis (limb, brain, tail, and intestine) [9,10]. We determined the orthologies of these genes to human as described in Page et al. [11] and Monaghan et al. [46]. We then identified genes listed by Buchholz et al. [10] that were also differentially expressed in our experiment. The same approach was used to compare our gene lists to the 2340 DEGs identified by Buchholz et al. [10] from the intestine of metamorphosing X. laevis.
Biologically and technically independent verification
We conducted a second experiment using Q-RT-PCR to investigate the temporal expression patterns of five genes identified as differentially expressed by microarray analysis (Table 3). Animals used in our second experiment were raised as described for the animals used in the microarray experiment, with the exception that T 4 treatment (50 nM) was initiated at 120 days post-fertilization. Tissue samples from two or three individuals were collected as described above beginning on Day 0 (prior to initiating T 4 treatment) and were collected every two days for 32 days (i.e., Day 0, Day 2, Day 4... Day 32).
Q-RT-PCR
Total RNA was extracted from integument as described for the microarray experiment with the exception that all samples were treated with RNase-Free DNase Sets (Qiagen, Valencia, CA) according to the manufacturer's protocol. For each sample, the Bio-Rad iScript cDNA synthesis kit (Hercules, CA) was used to synthesize cDNA from 1 μg total RNA. Primers (Table 3) were designed using Primer3 [60], and design was targeted to the same gene regions that are covered by Affymetrix probe-sets. All PCRs were 25 μL reactions that contained: cDNA template corresponding to 10 ng total RNA, 41 ng forward and reverse primers, and iQ SYBR-Green real-time PCR mix (Bio-Rad, Hercules, CA). Reaction conditions were as follows: 10 minutes at 50 °C, five minutes at 95 °C, 45 cycles of 10 seconds at 95 °C followed by 30 seconds at 55 °C, one minute at 95 °C, and one minute at 55 °C. Melting curve analysis was used to ensure the amplification of a single product for each reaction. All reactions were run in 96 well plates and blocked by sampling time (i.e., each of the 17 sampling times was equally represented for each gene on each plate). PCRs were performed using a Bio-Rad iCycler iQ Multi-Color Real Time PCR Detection System (Hercules, CA). All plates contained template free controls [61]. Primer efficiencies were estimated via linear regression and relative expression ratios (R) were calculated according to Pfaffl [62]. All expression ratios are relative to the average of the Day 0 animals, and normalized to transcriptional intermediary factor 1 (probe-set ID: L_s_at). This gene was selected as a control because it had an extremely small standard deviation across all treatment regimes in the microarray experiment (n = 30, mean = 14.44, standard deviation = 0.03; data presented on a log 2 scale).
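The efficiency-corrected expression ratio of Pfaffl is straightforward to compute; a minimal Python sketch with hypothetical Ct values and primer efficiencies:

def pfaffl_ratio(e_target, ct_target_ctrl, ct_target_sample,
                 e_ref, ct_ref_ctrl, ct_ref_sample):
    # R = E_target^(Ct_ctrl - Ct_sample) / E_ref^(Ct_ctrl - Ct_sample),
    # normalizing the gene of interest to the reference gene.
    return ((e_target ** (ct_target_ctrl - ct_target_sample)) /
            (e_ref ** (ct_ref_ctrl - ct_ref_sample)))

# Hypothetical values: efficiencies near 2 (perfect doubling per cycle);
# here the target amplifies 3.5 cycles earlier than in the Day 0 control.
r = pfaffl_ratio(1.95, 24.0, 20.5, 2.0, 18.0, 18.1)   # ~11-fold up regulation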
Statistical Analysis of the Q-RT-PCR Data
Log 2 transformed R-values for each gene were analyzed separately via linear and quadratic regression models in which days of T 4 treatment was the predictor variable. These analyses were carried out using JMP, Version 5 (SAS Institute, Cary, NC). We decided whether to use a linear or quadratic model for a given gene via forward selection [20]. In short, quadratic models were accepted when the polynomial terms were significant (P < 0.05) and resulted in an increase in the proportion of variation in the data explained by the model (adjusted R 2 ) of ≥ five percent relative to the linear model. The residuals of all models were inspected graphically. In addition, the residuals of all models were checked for normality. In cases where the assumption of normality was violated, regression analyses were run to obtain equations that describe the response of these genes to T 4 as a function of time. However, such analyses were conducted with the understanding that a strict hypothesis testing interpretation could prove problematic.
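The forward-selection rule can be written directly; a minimal Python sketch on hypothetical data (the original analysis used JMP):

import numpy as np
import statsmodels.api as sm

def choose_model(days, log2_r):
    # Accept the quadratic model only if its polynomial term is significant
    # (P < 0.05) and it improves adjusted R^2 by >= 5 percentage points.
    d = np.asarray(days, dtype=float)
    lin = sm.OLS(log2_r, sm.add_constant(d)).fit()
    quad = sm.OLS(log2_r, sm.add_constant(np.column_stack([d, d ** 2]))).fit()
    if quad.pvalues[2] < 0.05 and quad.rsquared_adj - lin.rsquared_adj >= 0.05:
        return "quadratic", quad
    return "linear", lin

days = np.tile(np.arange(0, 34, 2), 2)               # Day 0, 2, ..., 32; two animals per day
rng = np.random.default_rng(5)
log2_r = 0.2 * days - 0.004 * days ** 2 + rng.normal(0, 0.3, days.size)
model_name, fit = choose_model(days, log2_r)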
Authors' contributions
RBP carried out statistical and informatic analyses, conducted the Q-RT-PCR experiment, and helped draft the paper. SRV conceived the research project, provided general oversight for the project, and participated in drafting the manuscript. AKS reared the animals associated with the microarray experiment, collected tissue, and extracted RNA. JJS participated in rearing the animals associated with the microarray experiment and contributed to the statistical analyses that were conducted. SP conducted the informatic analyses used to determine the presumptive orthologies of Xenopus genes to human. CKB helped conceive the project, participated in coordinating the project, and provided critical reviews of the manuscript. All authors read and approved the final manuscript.
Age Is a Risk Factor for Contralateral Tendon Rupture in Patients with Acute Achilles Tendon Rupture
Category: Ankle Introduction/Purpose: Rupture of the contralateral Achilles tendon following Achilles tendon rupture can lead to devastating outcomes. However, despite the clinical importance, the risk factors and incidence of contralateral Achilles tendon rupture have not been well-studied. This study aimed to determine the incidence of contralateral tendon rupture after Achilles tendon rupture and to identify associated patient characteristics. Methods: Medical records for 226 consecutive patients with Achilles tendon rupture were retrospectively reviewed. The occurrence of contralateral Achilles tendon rupture and patient characteristics were determined through review of medical records and telephone surveys. Results: The cumulative incidences of contralateral Achilles tendon rupture at one, three, five, and seven years after Achilles tendon rupture were 0.4%, 1.8%, 3.4%, and 5.1%, respectively. The only statistically significant risk factor was age between 30 and 39 years at the time of initial Achilles tendon rupture (hazard ratio = 4.9). Conclusion: Patients who sustain Achilles tendon rupture in their 30s have significantly increased risk for contralateral tendon rupture.
Introduction
As a result of greater participation in recreational and competitive sports activities, the incidence of acute Achilles tendon rupture has risen [3,11,14]. Previous studies have shown an incidence of 2.66 ruptures per 1000 person-years [12] or about 18 ruptures (range 8.3-24 ruptures) per 100,000 people [4,6,11]. With the increasing incidence of Achilles tendon rupture, there has been a growing interest in rerupture and contralateral tendon rupture after initial treatment. The majority of studies have been performed on rerupture of the Achilles tendon rather than contralateral tendon rupture. The reported incidence of rerupture varies from 2.7 to 12.6% [8,13,21,23], and the factors that increased the risk of rerupture were male sex, athletic activity, and age younger than 30 years [17,18].
For contralateral tendon rupture in patients with Achilles tendon rupture, recent studies reported an incidence of 6.4% and 6.5% [1,16], significantly higher than that of the general population. Although there is no study of the clinical outcome of contralateral Achilles tendon rupture, devastating outcomes such as those experienced with contralateral injury after anterior cruciate ligament reconstruction are to be expected [19,20]. However, despite its clinical importance, previous studies did not investigate the risk factors of contralateral tendon rupture. Considering the insufficient knowledge available in the literature and the potential importance of reducing the incidence of contralateral tendon rupture after Achilles tendon rupture, identification of the risk factors of contralateral tendon rupture is essential.
Thus, the purposes of the present study were to determine the cumulative incidence of contralateral tendon rupture after Achilles tendon rupture and to identify the associated patient characteristics.
Materials and methods
Medical records of all patients with acute Achilles tendon rupture who visited our two institutions (Korea University Guro Hospital and Ansan Hospital) between 2005 and 2015 were reviewed. Over the 10-year period in question, 267 patients presented with acute Achilles tendon rupture, and all of these patients were treated via surgical repair. Three patients were not included because they had combined open lacerations; 38 patients who had incomplete medical records or did not respond to the telephone survey also were excluded. Therefore, 226 patients were finally enrolled in this retrospective study. This study was conducted with approval from the Institutional Review Board of Korea University Guro Hospital (IRB number, 2018GR0129).
Electronic medical records were reviewed for demographic details: age at injury, sex, body mass index, underlying comorbidity, pre-injury sports activity level, and mechanism of rupture. Pre- and post-injury sports activity levels were classified by ankle activity score [5]. This activity score was originally developed to assess sports-related ankle function and previously has been used to evaluate sports activity levels in patients with Achilles tendon rupture [15,23]. To identify the risk factors for contralateral tendon rupture at the time of initial rupture, all variables for analysis were those measured at the time of initial Achilles tendon rupture.
The occurrence of contralateral Achilles tendon rupture was determined from patient medical records. To identify cases of contralateral Achilles tendon rupture that were not treated at our institutions, patients were interviewed via telephone survey as to whether they sustained a contralateral tendon rupture after the end of follow-up for the initial tendon rupture. In addition, patients were asked about the type of sport to which they returned. Among the patients with contralateral Achilles tendon rupture, three were treated for initial Achilles tendon rupture at other institutions. However, these patients were included in the present study because the data at the time of former rupture could be assessed fully via medical records.
Statistical analysis
The cumulative incidence of contralateral Achilles tendon rupture was calculated using Kaplan-Meier survivorship analysis. Random effects Cox regression was attempted, but the relatively low incidence of contralateral Achilles tendon rupture compromised model convergence, prohibiting reliable parameter estimation. Therefore, for age, patients were grouped according to whether or not they were in their 30s at the time of rupture, whereas for ankle activity score, patients were grouped according to score ≤ 5 or > 5. Hazard ratios were calculated to assess the association between each variable and the risk of contralateral Achilles tendon rupture. Fisher's exact test was used to compare groups for significant differences. All analyses were performed using IBM® SPSS® statistics software version 23.0 (IBM Corp., Armonk, NY, USA), and significance was set at p < 0.05. A post hoc power analysis performed using G*Power software, version 3.01 (Franz Faul, Christian-Albrechts-Universität Kiel, Kiel, Germany) indicated a power of 0.78 and an α of 0.02 for detecting a difference in the incidence of contralateral Achilles tendon rupture between the age groups.
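The product-limit estimate behind the cumulative incidence curve can be computed directly; a minimal Python sketch with hypothetical follow-up data (the hazard ratios and Fisher's exact tests reported below require the full patient-level data):

import numpy as np

def km_cumulative_incidence(time_years, event):
    # Kaplan-Meier estimate; event = 1 for contralateral rupture,
    # 0 for censoring at last follow-up. Ties handled one at a time.
    t = np.asarray(time_years, dtype=float)
    e = np.asarray(event, dtype=bool)
    order = np.argsort(t)
    t, e = t[order], e[order]
    surv, curve = 1.0, []
    for i in range(len(t)):
        if e[i]:
            surv *= 1.0 - 1.0 / (len(t) - i)   # at-risk set shrinks over time
        curve.append((t[i], 1.0 - surv))       # cumulative incidence at time t[i]
    return curve

times = [1.1, 2.0, 3.5, 5.5, 6.0, 7.2, 8.0]    # hypothetical years to event/censoring
events = [1, 0, 1, 1, 0, 0, 1]
curve = km_cumulative_incidence(times, events)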
Results
In the 226 patients with acute Achilles tendon rupture, the median age was 38 years (range 23-71 years), and the majority of patients (91%) were male. Thirteen patients were identified as having contralateral Achilles tendon rupture at a median of 5.5 years from the time of initial rupture (range 1.1-11.0 years). The cumulative incidence rates of contralateral tendon rupture at 1, 3, 5, and 7 years after Achilles tendon rupture were 0.4%, 1.8%, 3.4%, and 5.1%, respectively. The estimated annual incidence rate of contralateral Achilles tendon rupture was 1.08% (Fig. 1).
The only factor significantly associated with contralateral Achilles tendon rupture was age, especially among patients in their 30s (Table 1). After initial Achilles tendon rupture, patients between the ages of 30 and 39 years at the time of injury were five times more likely to sustain a contralateral Achilles tendon rupture than those in the other age groups.
A frequency distribution table was constructed to investigate subsequent age patterns of contralateral Achilles tendon rupture (Table 2). Unlike patients who did not experience contralateral Achilles tendon rupture, all of the contralateral tendon ruptures occurred in patients aged between 20 and 49 years, and the difference in incidence of contralateral Achilles tendon rupture according to age was significant (p = 0.035).
The level of sports activity to which patients returned after initial injury was categorized using ankle activity score; the distribution of these values is shown in Fig. 2, and the difference was not statistically significant (ns).
Discussion
The most important finding of the present study was that, when patients sustain Achilles tendon rupture in their 30s, the risk for contralateral tendon rupture is significantly higher compared to injury sustained at other ages. Until now, the risk factors for contralateral tendon rupture after Achilles tendon rupture had not been identified; thus, the novel findings of the present study will be helpful not only for predicting the prognosis of Achilles tendon rupture, but also for preventing contralateral rupture.
In previous studies, age was also significantly correlated with rerupture of primary Achilles tendon repair [18] and contralateral injury after anterior cruciate ligament reconstruction [20]. Unlike the present study, which identified a particular middle-aged range as a risk factor, younger age was identified as a risk factor in these two studies. These studies suggested the possibility of an association between younger age and higher intensity of sports activities, but were unable to establish a definitive conclusion. A clear explanation is also needed for the present study; however, because only demographic factors were investigated as risk factors and none of the identified factors were significantly associated with age, our conclusions were limited by the available data. Consequently, whether age itself is a risk factor or whether other factors associated with age are key remains undetermined. To address this issue, the authors are prospectively investigating age-related changes in other factors-specifically, tendinopathy [22], histologic change [7,10], and genomic variations [2,9]-that are potentially associated with Achilles tendon rupture. In addition, because the incidence of non-concomitant bilateral Achilles rupture is low, further research using an epidemiological approach or healthcare database is warranted.
[Fig. 1: Kaplan-Meier survivorship analysis for contralateral Achilles tendon rupture.]
[Table 1: Results of Cox regression analysis to assess the risk factors of contralateral Achilles tendon rupture; HR, hazard ratio; CI, confidence interval. All variables were measured at the time of the initial Achilles tendon rupture; ankle activity scores greater than 5 were grouped as strenuous activity involving vigorous jumping.]
With regard to contralateral tendon rupture after Achilles tendon rupture, Raikin et al. [16] and Aröen et al. [1] reported incidences of 6.4% and 6.5%, respectively. In both studies, a review of the demographics of cohort patients for a specific period was conducted. However, because the observation periods for all patients varied, the simple proportion of contralateral Achilles tendon ruptures in the total patient cohort up to a specific endpoint has limited meaning. To address this issue, the present study investigated cumulative incidences. In the present study of 226 patients with Achilles tendon rupture, cumulative incidences of contralateral tendon rupture were 0.4%, 1.8%, 3.4%, and 5.1% at 1, 3, 5, and 7 years, respectively, after initial Achilles tendon rupture. As these values quantify the occurrence probabilities during the corresponding time periods, the authors believe they are more helpful for predicting the overall prognosis of patients with Achilles tendon rupture.
The primary limitation of the present study was the inability to conduct subsequent subgroup analyses because of the relatively small number of ruptures. Even though the study cohort was large, there were not enough rupture cases available to perform statistically valid analysis. Therefore, a specific linear relationship between age and contralateral tendon rupture was not confirmed. For the same reason, although there might be some relevance between age at the time of contralateral tendon rupture and ankle activity score, the statistical significance of any correlation between these two factors could not be determined. Second, the relatively small number of patients with follow-up longer than 9 years (less than 20%) might lead to a biased result due to the characteristics of cumulative incidence; the cumulative incidence of contralateral tendon rupture may increase dramatically at 9 years after Achilles tendon rupture. The authors thought that, with a longer period of follow-up, this value would be more likely to decrease, and a subsequent study could potentially determine a more exact long-term cumulative incidence. Third, some of the included patients may have sustained contralateral Achilles tendon rupture but failed to report the incident in the telephone survey, leading to selection bias. However, the response rate was 85%, and the number of remaining patients in the cohort was large.
In addition, the demographics of these patients were not statistically different from those of patients included in the present study, so the authors did not consider this factor to significantly affect the results. Last, the possibility of recall bias of the activity level that patients regained after initial injury and the high proportion of males (91%) in the study cohort are also limitations of the present study.
Conclusion
The cumulative incidences of contralateral tendon rupture at 1, 3, 5, and 7 years after Achilles tendon rupture were 0.4%, 1.8%, 3.4%, and 5.1%, respectively. Patients who sustain Achilles tendon rupture in their 30s are at significantly increased risk for contralateral tendon rupture.
Author contributions YHP: lead investigator and first author. TJK: data analysis and manuscript review. GWC: data analysis and manuscript review. HJK: corresponding author, primary surgeon. No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the contents of this study.
Funding There is no funding source.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval Institutional review board approval was obtained, and the requirement for informed consent was waived.
Intratumoral gene expression of dihydrofolate reductase and folylpoly-γ-glutamate synthetase affects the sensitivity to 5-fluorouracil in non-small cell lung cancer
Background Various factors related to the sensitivity of non-small cell lung carcinoma (NSCLC) to 5-fluorouracil (5-FU) have been reported, and some of them have been clinically applied. In this single-institutional prospective analysis, the mRNA expression level of five folic acid-associated enzymes was evaluated in surgical specimens of NSCLC. We investigated the correlation between the antitumor effect of 5-FU in NSCLC using an anticancer drug sensitivity test and the gene expression levels of five enzymes. Materials and methods Forty patients who underwent surgery for NSCLC were enrolled, and the antitumor effect was measured using an in vitro anticancer drug sensitivity test (histoculture drug response assay) using freshly resected specimens. In the same sample, the mRNA expression levels of five enzymes involved in the sensitivity to 5-FU were measured in the tumor using real-time PCR. The expression levels and the result of the sensitivity test were compared. Results No correlation was found between dihydropyrimidine dehydrogenase (DPD), orotate phosphoribosyltransferase (OPRT), or DPD/OPRT expression and the antitumor effects of 5-FU. On the other hand, a correlation was found between thymidylate synthase (TS), folylpoly-γ-glutamate synthetase (FPGS), and dihydrofolate reductase (DHFR) expression and 5-FU sensitivity. Conclusion Expression of FPGS and DHFR may be useful for predicting the efficacy of 5-FU-based chemotherapy for NSCLC.
Introduction
5-fluorouracil (5-FU) is an antimetabolite that is widely used to treat various solid cancers including non-small cell lung cancer (NSCLC) [1,2]. In Japan, an oral 5-FU prodrug has been developed. At present, UFT® and TS-1® are indicated in Japan for NSCLC and are often used as one of the drugs in postoperative adjuvant chemotherapy or systemic chemotherapy [3,4]. Despite various measures to enhance the effects of 5-FU, the treatment is ineffective in many patients. In addition, 5-FU anticancer agents are less effective in patients with advanced or recurrent NSCLC [2]. However, if the therapeutic effect of 5-FU anticancer drugs in cancer patients can be predicted before administration, the disadvantages of unnecessary administration can be prevented. Thus, identification of factors that can predict the effect of drugs before drug administration is important. 5-FU is activated only after it is converted to 5-fluorodeoxyuridine monophosphate (FdUMP). FdUMP inhibits DNA synthesis by forming a ternary complex with thymidylate synthase (TS), which is an essential enzyme for DNA synthesis, together with 5,10-methylenetetrahydrofolate (5,10-CH2-THF). It has been reported that the main pathway for conversion of 5-FU to FdUMP in tumors is the pathway mediated by orotate phosphoribosyltransferase (OPRT) in the presence of phosphoribosylpyrophosphate (PRPP) [5]. On the other hand, 5-FU is rapidly degraded by dihydropyrimidine dehydrogenase (DPD). Therefore, the expression levels of OPRT and DPD may be involved in the effects of 5-FU. In addition, 5,10-CH2-THF, which forms a ternary complex with FdUMP and TS, is one of the reduced folic acids and is regulated by dihydrofolate reductase (DHFR) and folylpoly-γ-glutamate synthetase (FPGS) [6]. Therefore, DHFR and FPGS may also be involved in the effects of 5-FU.
Several reports of NSCLC cases have described the sensitivity to 5-FU using clinical samples [7][8][9], but no clear findings have been obtained. In this study, the level of mRNA expression (TS, DPD, OPRT, DHFR, and FPGS) of factors associated with 5-FU sensitivity was measured with reverse-transcriptase polymerase chain reaction (RT-PCR), and we used the histoculture drug response assay (HDRA) method to assess sensitivity to 5-FU. We then investigated the correlation between the expression level of each factor and the chemosensitivity result.
Patients
Tissue samples were collected at the time of surgery from 40 patients who underwent surgery at the Department of Surgery (II), University of Fukui Hospital for NSCLC with a tumor size > 20 mm between January 2012 and December 2015. The experimental use of the chemosensitivity test was approved by the Institutional Research Committee, and the trial was approved by the Institutional Review Board for University of Fukui Hospital. All patients were informed of the nature of this study, and written informed consent was obtained.
Fresh specimens were sampled from the primary lesion immediately after surgical resection, immersed in Hank's solution, and used for in vitro chemosensitivity testing with HDRA. Ten-micrometer-thick slices from the paraffin-embedded specimens were later used for quantitative RT-PCR.
HDRA
The HDRA was used as an in vitro drug sensitivity test as previously reported. [10,11] Collagen sponge gels (Gel foam®) manufactured from pig skin were purchased from Pfizer Japan Inc. (Tokyo, Japan). Cancerous portions of specimens were minced into pieces of approximately 10 mg, which were then placed on the prepared collagen surface in 24-well microplates. Plates were incubated for 7 days in the presence of drugs dissolved in RPMI 1640 medium containing 20% fetal calf serum in a humidified atmosphere containing 95% air/5% CO 2 at 37 °C. 5-FU was provided by Tokyo Chemical Industry Co. Ltd. (Tokyo, Japan) and used at 300 μg/ml. After these specimens were histocultured, 100 μl Hank's balanced salt solution containing 0.1 mg/ml type I collagenase (Sigma) and 100 μl 9.6 mg/ml 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) solution dissolved in phosphate-buffered saline were added to each culture well and incubated for another 16 h. Following extraction with dimethylsulfoxide, absorbance of the solution in each well was read at 540 nm (control 630 nm) using a microplate reader (Spectra Max M5; Molecular Device LLC, San Jose, CA). Absorbance per gram of cultured tumor tissue (OD/W) was calculated from the mean absorbance of tissue from three culture wells. The tumor tissue weight was determined prior to culture.
The inhibition rate was calculated using the following formula:
Inhibition rate (%) = (1 − mean OD/W of treated well / mean OD/W of control well) × 100
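As a minimal illustration, the inhibition rate reduces to a one-line function (hypothetical OD/W values):

def inhibition_rate(od_w_treated, od_w_control):
    # OD/W: mean absorbance per gram of cultured tumor tissue,
    # averaged over three culture wells per condition.
    return (1.0 - od_w_treated / od_w_control) * 100.0

print(inhibition_rate(0.42, 1.10))   # ~61.8% inhibition for these hypothetical means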
Laser-capture microdissection and real-time RT-PCR (the Danenberg tumor profile [DTP] method)
The quantitative assay of the five genes of interest (TS, DPD, OPRT, DHFR, and FPGS) using paraffin-embedded sections of resected NSCLC specimens was performed according to the DTP method of Response Genetics (New York, NY, USA) [12]. For every specimen, four sets of 10-µm-thick sections and one set of 5-µm-thick sections were prepared from formalin-fixed, paraffin-embedded tissues. The 5-µm-thick sections were stained with hematoxylin and eosin and examined histologically. The 10-µm-thick sections were stained with nuclear fast red (American Master Tech Scientific, Lodi, CA) and used for laser-capture microdissection (PALM Microsystem, Leica, Wetzlar, Germany) from the upper and lower thirds of tumors, separately. The dissected tissue samples were transferred to reaction tubes containing 400 µl RNA lysis buffer, and RNA was isolated. Finally, cDNA was prepared as described by Lord et al. [13,14]. Quantification of the five genes of interest (TS, DPD, OPRT, DHFR, and FPGS) and an internal reference gene (β-actin) was performed using a fluorescence-based real-time PCR system (ABI PRISM 7900 Sequence Detection System, Applied Biosystems, Foster City, CA). The final volume of the reaction mixture was 20 µl. Cycling conditions and the primers and probes were described previously by Matsubara et al. [15] Gene expression was analyzed twice to confirm the reproducibility, and values (relative mRNA levels) were expressed as the ratio between the gene of interest and the internal reference gene (β-actin).
Examination of the necessity of Gimeracil (5-chloro-2,4-dihydroxypyridine; CDHP)
NSCLC has high DPD activity [12] and requires the use of CDHP, an inhibitor of DPD. In the anticancer sensitivity test, the effect of adding CDHP on measured sensitivity was therefore also examined. A preliminary experiment was performed to determine the appropriate concentration of CDHP: four groups were tested, combining 5-FU at 200 or 300 μg/ml with CDHP at 200 or 300 μg/ml, and the results were compared with those for 5-FU alone at the same concentrations.
Statistical analysis
Statistical analysis was performed on a personal computer with Stat Mate IV (Ver. IV, ATMS, Japan). The correlations between the mRNA expression of the genes examined and clinicopathological parameters were evaluated with the Pearson product moment correlation coefficient. To evaluate the correlation between two variables, linear regression was performed, and the Spearman rank correlation coefficient was calculated. Probability (P) values of less than 0.05 were considered statistically significant.
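A minimal Python sketch of these correlation analyses on hypothetical data (the study itself used Stat Mate IV):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
fpgs_mrna = rng.lognormal(0.0, 0.5, 37)                 # hypothetical relative mRNA levels (n = 37)
inhibition = 40 + 15 * np.log(fpgs_mrna) + rng.normal(0, 8, 37)

r_p, p_p = stats.pearsonr(fpgs_mrna, inhibition)        # Pearson product moment correlation
rho, p_s = stats.spearmanr(fpgs_mrna, inhibition)       # Spearman rank correlation
fit = stats.linregress(fpgs_mrna, inhibition)           # linear regression (slope, intercept, r, p)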
Anticancer sensitivity test (HDRA) for 5-FU in our institution
The anticancer drug sensitivity test for 5-FU in NSCLC has been conducted in our department since 2006, and HDRA at the relevant 5-FU concentration of 300 μg/ml was successfully performed for 419 patients (289 males and 130 females, median age 71.3 years, range 39-82 years).
Examination of the necessity of CDHP
A test was conducted to examine the results of sensitivity to 5-FU by changing the concentration of CDHP in 29 cases of NSCLC. We found no significant difference in sensitivity between the 5-FU only group (inhibition rate 62 ± 17%, n = 29) and the groups to which CDHP was added (300 μg/ml: 60 ± 18%, n = 29; 200 μg/ml: 60 ± 17%, n = 23). Therefore, this study was performed using 5-FU alone. We also examined the toxicity of CDHP by performing a CDHP-alone sensitivity test without 5-FU. CDHP alone showed a very low inhibition rate and no antitumor effect (8 ± 10%, n = 29).
The operative procedure was lobectomy in 35 patients (87.5%), segmental resection in two patients (5%), and partial wedge lung resection in three patients (7.5%). The histology included 23 cases of adenocarcinoma (57.5%), 15 cases of squamous cell carcinoma (37.5%), and two cases (5%) of other types of NSCLC. Regarding EGFR mutation and ALK translocation in adenocarcinoma, seven cases were L858R positive, three cases were exon 19 positive, one case was ALK positive, and 12 cases were wild type (Table 1).
Comparison of gene expression and the percent inhibition in the anticancer sensitivity test (HDRA) for 5-FU
Gene expression of TS, OPRT, DPD, FPGS, and DHFR was measured successfully in 39 of the 40 samples; one sample could not be measured. In addition, two samples did not grow with the HDRA method and were excluded, leaving 37 samples for analysis. The mean levels of mRNA of TS, DPD, OPRT, FPGS, and DHFR in these specimens were 3.69 ± 2.89 (n = 37), 1.200 ± 0.637 (n = 37), 0.706 ± 1.23 (n = 37), 0.885 ± 0.477 (n = 37), and 2.10 ± 2.50 (n = 37), respectively (Table 2). Expression of these genes was not correlated with age, sex, histopathological type, clinical stage, or driver gene mutation. Table 3 shows the correlations among TS, OPRT, DPD, FPGS, and DHFR mRNA for all samples. The mRNA expression levels of TS were moderately correlated with those of DPD (r = −0.426, p < 0.01), and those of DPD were weakly correlated with those of OPRT (r = 0.350, p = 0.036) and FPGS (r = 0.350, p = 0.036). The mRNA expression levels of FPGS were moderately correlated with those of DHFR (r = 0.451, p = 0.0057).
We found no significant correlation between expression of DPD, OPRT, or DPD/OPRT and the percent inhibition with 5-FU using the HDRA method (Fig. 1). We found a significant correlation between expression of TS, FPGS, and DHFR and the percent inhibition with 5-FU using the HDRA method (TS: r = 0.350, p < 0.05; FPGS: r = 0.418, p < 0.01; DHFR: r = 0.331, p < 0.05) (Fig. 2).
Discussion
To the best of our knowledge, this is the first study to show a correlation between FPGS and DHFR expression and 5-FU sensitivity results in NSCLC. FPGS showed a stronger correlation with 5-FU sensitivity than DHFR.
FPGS converts intracellular folic acid and folic acid antagonists, such as methotrexate, into polyglutamic acid, which is retained intracellularly for a long time. Polyglutamylation of intracellular 5,10-methylenetetrahydrofolate enables more efficient formation and stabilization of inhibitory ternary complexes involving TS and metabolites of 5-FU, and it may also increase the cytotoxicity of 5-FU. [6] Two reports have shown a correlation between 5-FU sensitivity and FPGS in colon cancer [6] and breast cancer. [16].
DHFR is the target enzyme of methotrexate, which enters the cell, tightly binds to DHFR, and inhibits the reduction of dihydrofolate to tetrahydrofolate. [17] Only one report has evaluated the activity of DHFR and the antitumor effect of 5-FU. [18] That study reported high DHFR mRNA levels in a 5-FU-resistant mouse cell line. 5-FU is activated only after it is converted to 5-fluorodeoxyuridine monophosphate (FdUMP). TS, an enzyme that is essential for DNA synthesis, methylates deoxyuridine monophosphate (dUMP) and converts it to deoxythymidine monophosphate (dTMP). Therefore, FdUMP is covalently bonded to TS together with 5,10-methylenetetrahydrofolate (5,10-CH2-THF), a reduced form of folic acid, to form a strong ternary complex, which inhibits DNA synthesis. [19] FPGS acts on the pathway that converts 5,10-CH2-THF from the monoglutamate to polyglutamate forms; this increase in polyglutamylation is indispensable for creating the ternary complex and may affect the sensitizing effect of 5-FU. TS activity is thereby inhibited, and the pool of dTTP, a precursor of DNA, is depleted, leading to inhibition of DNA synthesis and cell death. [19] In this study, the positive correlation between TS expression and 5-FU sensitivity is the opposite of the results of other reports of NSCLC. [7,[20][21][22] Regarding the sensitivity of TS in gastric cancer and colorectal cancer, sporadic reports have shown no correlation or an inverse correlation between the sensitivity to 5-FU and increased TS activity. [23,24] In NSCLC, a positive correlation with TS was reported, but TS activity tends to be lower than in other carcinomas, [12] suggesting that even if TS activity is high, it falls within the range where the effect of 5-FU can be observed.
Regarding the sensitivity of 5-FU, the activities of OPRT and DPD have been well evaluated, and several reports show that they are involved in the sensitivity of NSCLC. [5,25,26] The results of this study did not show a correlation between OPRT and DPD expression and 5-FU sensitivity. Although the OPRT/DPD ratio has been reported to be an important predictor of the efficacy of fluoropyrimidine-based chemotherapy for metastatic colorectal cancer, [5] the DPD/OPRT ratio in the present study showed no significant association with 5-FU sensitivity.
Several reports have described the relationship between TS, DPD, and OPRT and 5-FU sensitivity in NSCLC. Eguchi et al. [20] evaluated the relationship between response to treatment and immunohistochemical expression levels in patients with advanced NSCLC. Low expression levels of DPD and TS were associated with better response and longer survival in patients treated with S-1-carboplatin, but not in patients treated with paclitaxel-carboplatin. Tumor expression levels of TS and DPD thus predict the response to S-1-carboplatin chemotherapy in patients with advanced NSCLC. Nakano et al. [21] reported an immunohistochemical study on the clinical importance of TS, OPRT, and DPD expression using 151 NSCLC specimens resected from patients treated postoperatively with tegafur/uracil (UFT). Patients who had tumors with low TS expression (p = 0.0133), high OPRT expression (p = 0.0145), or low DPD expression (p = 0.0004) had significantly higher 5-year survival rates. Shintani et al. [7] investigated patients using RT-PCR for intratumoral expression and examined the correlation between gene expression and the efficacy of 5-FU in NSCLC. Patients receiving postoperative 5-FU alone (n = 30) comprised the 5-FU group, and those who had only surgery were included in the control group (n = 86). When dichotomized by mean TS and DPD mRNA levels, patients with low DPD tumors receiving 5-FU had a significantly better prognosis than those who did not receive adjuvant treatment (p = 0.041). Based on these results, quantification of TS and DPD mRNA levels can predict the efficacy of 5-FU after surgery in patients with NSCLC. 5-FU is rapidly degraded by DPD. Due to the higher DPD activity in NSCLC compared to other carcinomas, [12] 5-FU alone is less effective, necessitating co-administration of CDHP, for which S-1 was developed. In view of the mechanism of action of 5-FU, the effects of 5-FU are expected to be reduced if expression of the target enzyme TS and the degrading enzyme DPD in tumor tissue is high. CDHP (Gimeracil), which is used in S-1, [27] inhibits DPD. In this study, we found no difference in the sensitivity results even when CDHP was added to 5-FU in the preliminary sensitivity test. This suggests that DPD may not affect the antitumor effect of 5-FU in vitro.
Several reports have shown that the HDRA method correlates well with the susceptibility of NSCLC to anticancer drugs and with clinical efficacy. [28][29][30] Further, the usefulness of HDRA has been documented for several other cancer types including gastric cancer [31] and colorectal cancer. [32] This histoculture method has the advantage of culturing cancer cells while maintaining cell-cell contacts, which preserves cell viability; its disadvantage is that it requires a certain amount of tissue sample. In the present study, a sufficient amount of sample could be obtained from the surgical specimens. Moreover, the high evaluability rate (n = 419, 96.5%) from previous tests for NSCLC conducted at our institution demonstrates that this method is a good alternative for testing the sensitivity of 5-FU in NSCLC.
The limitations of this study are the small number of cases, the single-institution design, and the in vitro nature of the anticancer drug sensitivity test. This study identified a correlation between the in vitro sensitivity of NSCLC samples to 5-FU and the mRNA levels of FPGS and DHFR; whether this reflects in vivo drug effects requires further investigation. Furthermore, studying the relationship between the anticancer effects in NSCLC patients who actually received 5-FU and the expression levels of various factors in their tumors will be necessary in clinical studies. Another limitation is the in vitro use of specimens obtained during surgery. For unresectable advanced NSCLC, small specimens such as those obtained from bronchoscopy would have to be used. The feasibility of such transbronchial lung biopsy samples is being investigated. Nakajima et al. [33] used metastatic lymph node samples obtained with endobronchial ultrasound-guided transbronchial needle aspiration in patients with NSCLC to measure TS, DPD, TP, and OPRT mRNA. The feasibility of expression analysis in such small samples should be evaluated further, and clinical application is also expected.
Few reports have examined the sensitivity of 5-FU in NSCLC. Our study provides results that will be useful for assessing the sensitivity of 5-FU in future clinical applications.
Conclusion
The mRNA levels of five folic acid-associated enzymes involved in the antitumor effect of 5-FU were examined in resected NSCLC tumor specimens with RT-PCR, and the in vitro anticancer sensitivity test was performed. In conclusion, FPGS and DHFR may be involved in 5-FU sensitivity.
Previous studies have reported that TS, OPRT, and DPD are associated with 5-FU sensitivity. Combined with the present results, these findings indicate that the folic acid metabolism pathway is also important and may serve as a basis for the development of new anticancer agents targeting both pathways.
Nasopharyngeal sialocoele with underlying auditory tube neoplasia in a cat
Case summary An 8-year-old cat was presented with recent signs related to upper airway obstruction. CT revealed a hypoattenuating mass, with rim enhancement, in the nasopharynx. Paracentesis yielded a viscous fluid, consistent with saliva on cytology. The sialocoele was aspirated, and surgical excision of the ipsilateral mandibular and sublingual salivary glands was performed. The sialocoele recurred 3 months later, associated with a polypoid structure in the auditory tube region. This was surgically extirpated. Histology was consistent with a tubulopapillar adenocarcinoma. Relevance and novel information To our knowledge, this is the first case report of a nasopharyngeal sialocoele with confirmed underlying neoplasia in a cat, and the first description of CT imaging features of a nasopharyngeal sialocoele in a cat.
Introduction
A sialocoele is defined as a localised accumulation of saliva resulting from its extravasation through a tear in a salivary duct. Clinical signs are related to the anatomical location of the sialocoele and its dimensions. The position of the cavity depends on where the tear occurs in the salivary duct. While this condition is common in dogs, 1,2 sialocoeles have only been reported in 19 cats in the literature. [3][4][5][6][7][8][9][10][11][12] Cats have five major salivary glands (parotid, mandibular, sublingual, molar and zygomatic), and minor salivary glands which cannot be seen on direct examination of the oral cavity (labial, lingual and palatal mucosal salivary glands). 3,13 Sialocoeles may be associated with any salivary glands in cats, [4][5][6][7][8][9][10][11] except for the molar gland, which opens directly into the buccal cavity (without a duct).
The cause of sialocoeles is unclear in most cases in cats, as in dogs. 2 Some authors have suggested that they may be induced by pre-existing conditions, including trauma (as a complication of oral or neck surgery or secondary to penetrating or blunt trauma), 1,3 salivary obstruction by sialoliths 5 or glandular duct stenosis. 10 Although considered possible, an underlying neoplasm has never been reported in cats. This article presents an original case report of a nasopharyngeal sialocoele in a cat, caused by an underlying malignant neoplasm and diagnosed using CT. To our knowledge, neither the CT features nor the neoplastic origin of a nasopharyngeal sialocoele have been previously reported in cats.
Case description
An 8-year-old domestic shorthair neutered male cat was presented with a recent history of dysphagia and upper airway dysfunction, including stridor and orthopnoea. Oral examination revealed ventral protrusion of the rostral portion of the soft palate (Figure 1), causing decreased pharyngeal lumen.
After premedication using midazolam (Hypnovel; Roche), anaesthesia was induced using intravenous (IV) injection of propofol (Propovet; Zoetis) and continued using inhalation of isoflurane (Vetfluran; Virbac) in oxygen via an endotracheal tube. CT of the head was performed using a 64-slice helical CT scanner (Aquilion 64 system; Toshiba Medical Systems), prior to and 2 mins after IV injection of 2 ml/kg iodinated contrast agent (iohexol [Omnipaque; GE Healthcare]). Helical acquisitions were obtained with exposure parameters of 120 kV and 100 mA, 1 mm slice thickness and a reconstruction interval of 0.3 mm. Images were reconstructed using a 512 × 512 matrix, a slice thickness of 1 mm, and both bone and soft tissue kernels.
A round-shaped, well-delineated 15 mm diameter lesion was observed at the right dorsal aspect of the nasopharynx ( Figure 2). The nodular lesion was hypoattenuating (20 Hounsfield units [HU]) with thin rim enhancement (<1 mm). It was closely associated with, or infiltrated, both the right dorsolateral part of the nasopharynx and the nasal mucosa of the soft palate. The latter was ventrally deviated, the mass filling almost half of the nasopharyngeal lumen. It was also continuous, or in close contact, with another lesion, localised in the vicinity of the right auditory tube. This latter lesion was heterogeneous, moderately enhancing (65-82 HU), with a thick and irregular wall ( Figure 3). The right auditory tube was enlarged.
There was bilateral filling of the tympanic bullae with hypoattenuating, fluid density material, although there was mild focal enhancement in the rostrolateral compartment of the right tympanic bulla. The osseous wall of the right tympanic bulla was mildly thickened and irregular. There was no other bony deformity in the vicinity of the lesion. Mandibular, lingual and sublingual salivary glands, and sublingual connective soft tissue spaces, were unremarkable. Regional (parotid, mandibular and medial retropharyngeal) lymph nodes had a normal appearance.
Chronic osteitis of the right tympanic bulla was observed and bilateral otitis media was present, possibly secondary to local obstruction of the auditory tubes. Fine-needle aspiration of the cavitary lesion was performed via the oral cavity while the cat was under anaesthesia. Pale-orange viscous fluid (4 ml) was aspirated from the lesion. This was consistent with saliva on cytological examination. A diagnosis of pharyngeal sialocoele was thus established.
The pharyngeal sialocoele was freely aspirated, and the mandibular, sublingual monostomatic and sublingual polystomatic glands were surgically removed 1 week after CT. An amoxicillin and clavulanic acid combination was administered orally at a dosage of 15 mg/kg q12h (Kesium; CEVA) and meloxicam was administered orally at a dosage of 1 mg/kg q24h (Metacam; Boehringer, Germany) for 5 days postoperatively.
There was spontaneous resolution of the sialocoele in the immediate postoperative period. Fifteen days postoperatively, a fluctuant swelling appeared at the original location of the sialocoele, followed by progressive recurrence of the clinical signs over a duration of 4 months. At a 4.5-month follow-up consultation, examination of the oral cavity revealed bulging of the soft palate, in the region of the right auditory tube. Visual oral examination and radiological evaluation using cone beam CT (New Tom 5G; QR) confirmed that the previously described heterogeneous enlargement of the right auditory tube had grown as a tubular mass at the right dorsal aspect of the nasopharynx. This lesion extended towards the right tympanic auditory tube. A neoplastic process of the right auditory tube was suspected.
A week later, partial surgical excision of the nasopharyngeal tubular mass and bulla curettage were performed via a combination of right ventral bulla osteotomy and a transpalatine approach. Postoperative care was unremarkable.
The histopathological findings were consistent with a tubulopapillary adenocarcinoma of the auditory tube, in keeping with the initial visual examination. The owner declined the proposed adjuvant treatments, including chemotherapy and radiation therapy.
Follow-up CT examination was performed 6 months later, owing to recurrence of the clinical signs. A voluminous nasopharyngeal sialocoele, causing partial obstruction of the nasopharynx, was observed. The sialocoele was continuous with a heterogeneous, strongly enhancing tissue-attenuating structure at the right dorsal aspect of the nasopharynx, protruding into the right auditory tube. This was consistent with recurrence/local spread of the tubulopapillary adenocarcinoma. Needle aspiration of the pharyngeal sialocoele was performed to improve the cat's comfort.
Discussion
The differential diagnosis of pharyngeal nodular lesions in cats includes nasopharyngeal polyps, granulomas, cryptococcosis, lymphoma, cysts, abscesses (associated with bacterial inoculation through oral trauma or foreign body penetration) and sialocoeles. 14,15 In our case, the CT appearance of the lesion, featuring hypoattenuating and unenhanced content with a thin, well-defined rim, was highly suggestive of a sialocoele. This was confirmed cytologically.
Salivary mucocoeles or sialocoeles are a common cause of cervical or intraoral swelling in dogs and are less commonly reported in cats. [3][4][5][6][7][8][9][10][11][12][16][17][18] In one study, the authors reported that the occurrence of mucocoeles in dogs was three times greater than in cats. 2 Nasopharyngeal sialocoeles appear to be more commonly encountered in brachycephalic dogs, especially in Pugs. 18 There has been one reported case of a cat with a nasopharyngeal sialocoele causing acute respiratory distress. 4 No diagnostic imaging technique was documented in that report and no underlying cause could be identified. Unlike in previous reports, the clinical signs of dysphagia, stridor and orthopnoea developed gradually in our case. This chronic evolution is, however, described in brachycephalic dogs with nasopharyngeal sialocoeles, 18 with presenting signs of chronic upper airway obstruction such as snoring, discomfort while sleeping and exercise intolerance.
The tomodensitometric features of this condition have been described in dogs, [17][18][19] with the lesion presenting as a well-defined, hypoattenuating and non-enhancing mass, surrounded by a thin, contrast-enhancing wall, located at the caudal aspect of the soft palate. Similar features were noted in our case, although the lesion further extended into the enlarged right auditory tube. This was presumably related to the underlying neoplastic process within the auditory tube, which could have been suspected from the initial CT examination. In another study, 17 mineralisation was observed in 54% of sialocoeles, although never in the nasopharyngeal region; no mineralisation was observed in our case either.
In previously reported feline sialocoeles, the origins of the lesions were unknown, [3][4][5][6][7]11,12 except in one animal in which the lesion was secondary to duct stenosis. 10 In our case, another soft tissue lesion was visualised in the enlarged auditory tube on the first CT examination and was later diagnosed as a tubulopapillary adenocarcinoma. We suspected that the neoplastic process may have obstructed the salivary flow, either through infiltration of the duct or glandular tissue or through mechanical compression of the duct. In either case, salivary outflow impairment would have led to tearing of the duct or gland, with subsequent leakage and focal accumulation of saliva. The underlying neoplastic process probably also accounted for the recurrence despite surgical treatment. Considering that the minor salivary glands, including the labial, lingual and palatal mucosal glands, cannot be seen on direct examination of the oral cavity, 3 a sialocoele related to the minor palatal mucosal salivary gland was considered most likely, although a relationship with a major salivary gland could not be ruled out given the relatively large volume of the lesion. The elected surgical strategy was therefore believed to be the most likely to succeed in this case: aspiration of the sialocoele and surgical excision of the ipsilateral mandibular and sublingual salivary glands were performed in order to prevent further accumulation of saliva. Recurrence implied that these glands were not implicated in the formation of the sialocoele; considering the location, it is therefore likely that the lesion originated from a minor palatal mucosal salivary gland.
It is likely that focal compression or infiltration of the soft palate by the neoplastic process induced the development of the sialocoele in our patient. The owners declined further surgical treatment and radiotherapy.
Conclusions
This article presents the first description of the tomodensitometric (CT) features of a nasopharyngeal sialocoele in a cat; in our case, it was associated with a neoplastic process in the ipsilateral auditory tube. The authors suggest considering a sialocoele in the differential diagnosis of hypoattenuating, well-defined lesions in the nasopharyngeal region in cats, and that a potential local neoplastic process should be scrutinised as an underlying condition.
Carbon Footprint Analysis at the Operation Phase in a Student Residential Hall in Hong Kong
According to the climate action plan published by the Environment Bureau in Hong Kong, absolute carbon emissions are expected to be reduced by 36% per capita by 2030 (base year 2005). Using a walkthrough combined with energy simulation, we investigated the energy consumption pattern of a student residential hall. Owing to the fluctuating number of residents in a student residential hall, we simulated several scenarios to estimate the range of consumption: 1) typical occupancy, 2) typical occupancy with energy optimization, 3) peak occupancy, and 4) peak occupancy with energy optimization. In scenario 4, an effective energy use pattern for the air-conditioning system was implemented, and approximately 480 tonnes (6.5%) of carbon emissions can be avoided over a 50-year building life span. Various energy conservation measures should be taken into consideration in order to further reduce carbon emissions in the future.
Background
To deal with the worldwide issue of global warming and climate change, as discussed at the 2015 United Nations Climate Change Conference, there is an urgent need to reduce greenhouse gas emissions. In Hong Kong, the Government has, since 2010, proposed a target of reducing carbon emissions by 50-60% by 2020. [1] In the Hong Kong Climate Change Report 2015, the Government advised that it would follow China's pledge to reduce carbon intensity by 60-65% between 2005 and 2030 and formulate various measures to mitigate climate change. [2] To accede to China's climate plan and the Paris Agreement, the Government recently outlined long-term measures to combat climate change and attain the updated carbon intensity target for 2030 in Hong Kong's Climate Action Plan 2030+. [3] The new target is to reduce Hong Kong's carbon intensity by 65-70% by 2030 from the 2005 level, which is equivalent to a 26-36% absolute reduction and a reduction to 3.3-3.8 tonnes in per capita emissions. To achieve the updated carbon emissions target, various strategies have been proposed, such as reducing coal-fired electricity generation, widening the use of renewable energy, and implementing energy-saving measures for buildings and transportation. As a start to contributing to this carbon reduction mission and to devise a suitable environmental management system for our institute, we initiated a life cycle assessment study of one of the institute's halls of residence and used data simulated by eQUEST for comparison. The results obtained will help to extend the life cycle assessment study to the whole institute in the future. A few studies have addressed life cycle carbon emission assessment for residential buildings or functionally similar buildings such as hotels. However, these studies mainly focus on material comparison or on the construction phase of the building. [4][5] Construction materials affect the energy performance of the whole building; however, from the viewpoint of the whole life cycle of the building, the operation stage (or use phase) constitutes the largest proportion of energy consumption, i.e. carbon emissions from energy generation. In terms of the characteristics of a building's operation phase, the use of heating and cooling energy and electrical facilities can account for up to 85% of total carbon emissions, depending on building type [5]. In this study the operation stage is the focus, and the electricity consumption of the HVAC system and other electrical facilities such as the lighting system is evaluated.
Methodology
Information and parameters needed to predict the carbon footprint through eQUEST were collected and summarized. Most of the simulation inputs are referenced from the design report of the institute's residential hall, including building geometry (floor plans in CAD files), building area and number of floors, floor height (floor to floor and floor to ceiling), types and dimensions of doors and windows, the HVAC system, design temperature and air flow, etc. Figure 1 shows the typical floor plan used in this eQUEST modelling study. Other parameters, such as the number of existing residents and electricity consumption, were gathered from the hall manager. The DD (Design Development) Wizard was used instead of the SD (Schematic Development) Wizard, since it can create buildings with multiple shells and provides more flexibility in assigning them to building areas. Carbon emission varies with energy use and the fuel mix of energy generation. The following equation is used:

Carbon emission (kg CO2e) = Net electricity consumption (kWh) × EF (kg CO2e/kWh)    (1)

The net electricity consumption should be used for the calculation; net electricity consumption refers to total energy consumption minus any renewable energy generation. The GHG emission factor (EF) for electricity use in Hong Kong in 2015 is 0.540 kg CO2e per kilowatt-hour (kWh). [6] The system boundary is defined as the energy consumption of the MVAC, lighting and water heating systems during the operation phase.
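As a concrete illustration of equation (1), the following minimal sketch computes the operational carbon emission from an annual consumption figure, net of on-site renewable generation. The emission factor is the 0.540 kg CO2e/kWh value quoted above; the example consumption figures are taken from the scenario results reported later.

```python
# Minimal sketch of equation (1): carbon emission = net consumption x EF.
EMISSION_FACTOR = 0.540  # kg CO2e per kWh (CLP, Hong Kong, 2015)

def carbon_emission_kg(total_kwh: float, renewable_kwh: float = 0.0) -> float:
    """Net electricity consumption times the grid emission factor."""
    net_kwh = total_kwh - renewable_kwh  # subtract on-site renewable output
    return net_kwh * EMISSION_FACTOR

# Example: scenario 1 annual consumption, offset by PV and wind generation
annual_total = 303_510             # kWh, typical occupancy
annual_renewable = 20_790 + 2_800  # kWh, PV panel + wind turbine
print(f"{carbon_emission_kg(annual_total, annual_renewable) / 1000:.1f} t CO2e/yr")
```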
Measurement and Validation
The residential building consists of 12 floors; the 12th floor had been unoccupied for 12 consecutive months, so it is not considered. The G/F remained in working condition throughout the year, so it can be regarded as a fixed-consumption floor and is also not considered. This scenario simulates the energy consumption of 1/F to 11/F from 1 January to 31 December 2016. 24-hour operation is set from 1 January to 31 May and from 1 September to 31 December, as these periods fall within the teaching terms.
[7] Low occupancy is set for school days because of the existing low-density situation. [8] During the summer holiday, from 1 June to 31 August, the building operation schedule is set to closed for business. It is assumed that there is no user in the building except for regular inspections, which only trigger the light tubes in the corridor. Energy-saving light tubes are installed in the corridor and are only triggered when movement is detected, so their influence on this validation should be low.

Each floor has a height of 3,150 mm from floor to floor, including 150 mm of concrete between two floors. All floors have similar components, settings and structure. A floor consists of bedrooms, a mechanical/electrical room, a corridor, an activity room and a common area. More than 40% of the space is occupied by bedrooms for students or guests, around 10% each by the mechanical/electrical room and the corridor, and the remaining areas make up the rest. The complete common areas, each containing a television, refrigerator, cooking ranges and a resident-controlled air-conditioning system, are located on 1/F, 4/F and 7/F. There are no exterior doors or exterior window shades, only exterior windows 1,500 mm wide and 1,550 mm high; the percentage of window area against net wall area (floor to ceiling) differs slightly on each floor.

Two types of air-conditioning system are installed for different areas. A Variable Refrigerant Volume (VRV) system, HVAC system 1 in the simulation, is used for the podium areas such as the lobby, corridor, common areas and activity rooms. Split-type air-conditioning units, HVAC system 2 in the simulation, are used for the bedrooms and wardens' flats. The VRV system operates 24 hours per day in the podium areas to maintain a comfortable environment, while the split-type units are set to operate for 8 hours during the students' sleeping hours. The design temperature for the indoor condition is 23°C dry bulb ±1°C. For domestic water heating, an instantaneous-type electric hot water heater is installed in each bedroom for bathing. Ten gallons of hot water use per person per day is assumed, a consumption of approximately 50 L.

As shown in Figure 2, the largest proportion of usage is space cooling. To assist space cooling, ventilation is also an important component, so it occupies the second largest proportion of energy used. Ideally, the trend of energy consumption over the year would rise from January, peak in August, and fall again towards December. Since Hong Kong is located in the subtropics, the weather is hot and humid from May to August; the relatively low usage in June and August is due to the low occupancy during the summer holiday. No space heating load is considered, as no heating equipment is provided: the design outdoor temperature in winter is 7°C, so no heating is required to warm the indoor space. Water heating is mainly used to operate the instantaneous-type electric hot water heaters for bathing. There is a refrigerator in the common area, but refrigeration is not considered separately in the simulation because all single electrical device consumption is categorized as miscellaneous equipment, which also includes televisions, cooking ranges, chargers, etc., either in the common areas or in the bedrooms. The remaining consumption is area lighting. All the activity rooms and common areas use glass instead of concrete as the external wall to introduce daylight for illumination.
An automatic sensor is installed in the corridor to minimize energy consumption: the light is only triggered when the sensor detects movement, and if no movement is detected within a minute, the light returns to standby mode. The simulated data were then compared with the real situation during the measured occupancy period. As seen in Figure 3, although the occupancy condition is set to zero for the summer holiday period, the simulated usage for July and August is still high and quite different from the real situation. In the simulation, the occupancy rate affects all end uses in the building, but resident-orientated usage to a much greater extent: the electricity usage of resident-orientated end uses such as lighting, water heating and miscellaneous equipment drops significantly from June to August, yet this does not change the total consumption dramatically. The saving in bedroom air-conditioning is offset by the higher consumption of the central air-conditioning system in the public areas due to the hot weather, so the space cooling consumption is similar to that of May and September. The simulated result is higher than the real situation during July and August; a possible reason is that the real operation schedule of the air-conditioning system during the summer holiday was adjusted to a more environmentally friendly mode, for example switching the system off at night when no residents needed to be served. Turning the system off from 11 p.m. to 7 a.m. is a massive energy saving, and this can explain the difference between the real consumption and the simulated result. The coefficient of determination between the simulated and real data was calculated to be 0.9712, which demonstrates that the simulated model agrees quite well with the actual situation.
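For reference, a short sketch of the goodness-of-fit calculation quoted above: the coefficient of determination (R²) between simulated and measured monthly consumption. The monthly kWh values below are illustrative placeholders, not the study's actual readings.

```python
# Hedged sketch: R^2 between simulated and measured monthly consumption.
import numpy as np

measured  = np.array([18.0, 17.2, 19.5, 22.4, 27.1, 21.0,
                      20.2, 21.5, 28.3, 26.0, 22.1, 19.0]) * 1000  # kWh, Jan-Dec
simulated = np.array([18.4, 17.5, 19.8, 22.9, 26.6, 21.8,
                      24.0, 25.1, 27.8, 25.7, 22.5, 19.2]) * 1000  # kWh

ss_res = np.sum((measured - simulated) ** 2)        # residual sum of squares
ss_tot = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")  # the study reports 0.9712
```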
Typical Occupancy
Owing to the unforeseeable occupancy of a student residential hall, several scenarios were attempted in order to obtain a reasonable range of energy usage. Scenario 1 simulates the typical occupancy of the VTC Student Dormitory based on the recent number of residents. All other settings remain the same as in the validation case, such as the 24-hour-operated MVAC system and 50 L of domestic hot water per person. The total energy consumption under typical usage is 303,510 kWh in 2017. Most of the energy is used to maintain a comfortable living environment by air-conditioning and ventilation: more than half of the energy, 177,510 kWh, is consumed by space cooling and 84,420 kWh is used to operate ventilation fans. A further 22,480 kWh is used for water heating, 9,190 kWh for miscellaneous equipment and 9,880 kWh for area lighting. Regarding the trend of energy consumption, space cooling and ventilation fluctuate with the weather conditions, whereas miscellaneous equipment and area lighting are stable, with nearly constant consumption every month.

Scenario 2 simulates typical occupancy with an energy-saving approach. The MVAC system can be optimized in a more energy-efficient manner once the behaviour of the residents is understood. A student dormitory is not equivalent to a hotel: it does not need to be maintained in as perfect a condition as a hotel, whose clients have the option of spending their time enjoying the facilities rather than going to bed. Students are expected to go to bed or stay in their bedrooms after 11 p.m., and as the earliest lesson at the institute is at 9 a.m., they can rest until 6 a.m. To reduce unnecessary energy usage at night, this scenario proposes that only regular air exchange be operated during normal sleeping hours from 11 p.m. to 6 a.m., with air-conditioning switched off because of the low demand in that period. All other settings remain the same as in scenario 1. The total energy consumption under typical use is lowered to 278,810 kWh. Most of the energy is still used to maintain a comfortable living environment by air-conditioning and ventilation: 160,630 kWh is consumed by space cooling and 76,610 kWh by ventilation fans. The other parameters remain unchanged with the same number of residents: 22,480 kWh for water heating, 9,190 kWh for miscellaneous equipment and 9,880 kWh for area lighting. Compared with scenario 1, the reduction in electricity consumption is 24,700 kWh, equivalent to a reduction rate of around 8%.
Peak Occupancy
Scenario 3 simulates peak occupancy of the residential hall to find the maximum energy that could be consumed. All settings remain the same as in scenario 1 except the number of residents. The total energy consumption under peak use is 410,360 kWh in 2017. More energy is used to maintain a comfortable living environment, especially in May and September: 222,270 kWh is consumed by space cooling and 108,700 kWh by ventilation fans, while 43,350 kWh is used for water heating, 23,410 kWh for miscellaneous equipment and 12,610 kWh for area lighting. The consumption for space cooling and ventilation increases from 261,930 kWh under typical use to 330,970 kWh under peak use.
Although the number of residents is theoretically doubled, the energy used is not. Because air-conditioning and ventilation for the public areas are generally fixed, the increase mainly comes from the use of the air-conditioners in the bedrooms during hot and humid weather. Water heating consumption, on the other hand, depends directly on the occupancy rate (unless students shower elsewhere), so it nearly doubles, from 22,480 kWh to 43,350 kWh. The last two parameters, miscellaneous equipment and area lighting, show very different rates of increase even though they consume similar amounts of electricity under typical occupancy. The former rises sharply, from 9,190 kWh to 23,410 kWh, because the energy use factor is manually set larger to account for several possibilities, including a higher probability of public device usage, much greater personal electrical device usage and a larger volume of clothes washing. In contrast, area lighting consumption increases by only 2,730 kWh, to 12,610 kWh. This shows that daylight benefits both the public areas and the bedrooms, as over 30% of the surface area can introduce daylight for illumination; again, the consumption in the public areas is close to a fixed value, so the increase mainly comes from the bedrooms and activity rooms.

With energy saving, i.e. scenario 4, the total energy consumption is lowered to 385,030 kWh. Most of the energy is consumed to maintain a comfortable living environment by air-conditioning and ventilation: 204,850 kWh by space cooling and 100,790 kWh by ventilation fans. The other parameters remain unchanged with the same number of residents: 43,350 kWh for water heating, 23,410 kWh for miscellaneous equipment and 12,610 kWh for area lighting. Compared with scenario 3, the reduction in total electricity consumption is 25,330 kWh, equivalent to a reduction rate of around 6%. The value is very close to the decline between scenarios 1 and 2; since the total energy consumption is higher, the percentage decrease is lower for a similar absolute reduction.
Prediction of Life Cycle Carbon Emission
The PV panels and wind turbine generate 20,790 kWh and 2,800 kWh per year, respectively, in the residential hall. [9] Before determining the carbon emission, these energy savings are subtracted from the total electricity consumption in the simulation results. No other energy source, such as liquefied petroleum gas, is used in the student dormitory; electricity is therefore the only source of carbon footprint in the operation phase. The latest GHG emission factor (2015) of 0.540 kg CO2e per kWh provided by CLP for electricity generation is adopted for the following calculation. The results are presented in CO2e per m2 and per capita for convenient comparison with other buildings' performance.
The surface area of the residential hall from 1/F to 11/F is 15,075 m2. The density is 440 residents at typical occupancy (50%) and 880 residents at peak occupancy (100%). The carbon emissions of each scenario are shown in Table 1. For a typical building life span of 50 years, if the energy-saving mode is adopted, a total of 684 tonnes of CO2e can be avoided, equivalent to about 6.5% of carbon reduction over 50 years of operation.
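The per-area, per-capita and life-span figures above can be reproduced from the scenario totals; the sketch below does so under the stated assumptions (0.540 kg CO2e/kWh factor, renewable output of 23,590 kWh/yr, 15,075 m2 floor area, and the quoted occupancies).

```python
# Hedged sketch of the life cycle carbon estimate, using figures from the text.
EF = 0.540                  # kg CO2e per kWh (CLP, 2015)
RENEWABLE = 20_790 + 2_800  # kWh/yr from PV panels and wind turbine
AREA_M2 = 15_075            # floor area, 1/F to 11/F
LIFESPAN = 50               # years

scenarios = {  # annual consumption (kWh) and occupancy from the eQUEST runs
    "1 typical":            (303_510, 440),
    "2 typical, optimized": (278_810, 440),
    "3 peak":               (410_360, 880),
    "4 peak, optimized":    (385_030, 880),
}

for name, (kwh, occupants) in scenarios.items():
    annual_kg = (kwh - RENEWABLE) * EF
    print(f"{name}: {annual_kg / 1000:.1f} t CO2e/yr, "
          f"{annual_kg / AREA_M2:.1f} kg/m2/yr, "
          f"{annual_kg / occupants:.1f} kg per capita/yr")

# 50-year saving from the optimized schedule at peak occupancy (~684 t)
saving_t = (scenarios["3 peak"][0] - scenarios["4 peak, optimized"][0]) \
           * EF * LIFESPAN / 1000
print(f"50-year saving: {saving_t:.0f} t CO2e")
```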
Energy Conservation Measures
Currently, the following energy conservation measures have been implemented or recommended at the residential hall:
1. Use occupancy sensors and photo sensors for lighting control.
2. Understand the behaviour of residents in the dormitory, such as when they leave and return and how often they stay in the common areas, so that operation schedules can match actual use.
3. Use the air-conditioners according to good practice. Reminders should be posted near the switches of the air-conditioners to remind users of best practices, such as setting the air-conditioner temperature at 25.5°C.
4. Install tap aerators. These water-saving devices control the amount of water that flows through the tap without affecting the water pressure, as they mix the water with air. The aerator acts as a sieve, separating a single flow of water into many tiny streams and introducing air into the water flow; as there is less space for the water to flow through, the water flow is reduced, resulting in water savings.
Conclusion
The present study involved a carbon audit of the operation phase of the student residential hall at the Tsing Yi campus, from 1/F to 11/F, using a walkthrough combined with modelling software. The scope of this study includes the energy use of the MVAC system, lighting system, water heating and other electrical equipment. An existing condition was first measured and used as a baseline to validate the accuracy of the simulation. Four scenarios were then simulated for the following year, two for typical occupancy and two for peak occupancy. The average carbon emission is 13.9 kg CO2e per m2 per annum and 237.3 kg CO2e per capita per annum under peak occupancy, and the carbon emission is 7,310 tonnes CO2e over 50 years of operation. Scenario 4, also under peak occupancy, introduces an effective energy use pattern in which the air-conditioning system operates only between 6 a.m. and 11 p.m. If this approach is adopted, the carbon emission will be 6,831 tonnes CO2e, and 684 tonnes of carbon emissions can be avoided over the entire 50-year life span. Finally, some energy-saving opportunities were suggested for this residential building. The materials used at the construction stage, the selection of electrical equipment, and user behaviour are all crucial to reducing carbon emissions.
Semantic Pattern Detection in COVID-19 Using Contextual Clustering and Intelligent Topic Modeling
The COVID-19 pandemic is the deadliest outbreak in living memory, so there is an urgent need to prepare the world with strategies to prevent and control the impact of the pandemic. In this paper, a novel semantic pattern detection approach for the COVID-19 literature using contextual clustering and intelligent topic modeling is presented. For contextual clustering, three-level weights at the term level, document level and corpus level are used with latent semantic analysis. For intelligent topic modeling, semantic collocations are selected using pointwise mutual information (PMI) and log frequency biased mutual dependency (LBMD), and latent Dirichlet allocation is applied. Contextual clustering with latent semantic analysis presents semantic spaces with high correlation between terms at the corpus level. Through intelligent topic modeling, the topics are improved, showing lower perplexity and higher coherence. This research helps to identify knowledge gaps in the area of COVID-19 research and offers directions for future research.
INTRODUCTION
The coronavirus family comprises a wide range of animal and human viruses. Coronaviruses are positive-sense RNA viruses and are classified into four genera: alpha-, beta-, gamma- and delta-coronaviruses (Weiss & Leibowitz, 2011; Burrell et al., 2016). Alpha-coronaviruses and beta-coronaviruses are found exclusively in mammals, whereas gamma-coronaviruses and delta-coronaviruses primarily infect birds. Prior to 2003, members of this family were believed to cause only mild respiratory illness in humans.
The 2003 epidemic of SARS-CoV prompted an intensive search for novel coronaviruses, resulting in the detection of a number of novel coronaviruses in humans, domestic animals and wildlife. This research led to the important discovery that bat and avian species are the natural reservoirs of these viruses (Guo, 2020). Recent studies have also found that these coronaviruses are the result of recent cross-species transmission events. The emergence of the novel coronavirus (2019-nCoV) has awakened the echoes of SARS-CoV from nearly two decades ago (Gralinski & Menachery, 2020). This zoonotic human coronavirus of the century emerged in December 2019, with a cluster of patients connected to the Huanan south China seafood market in Wuhan, Hubei Province, China. Similar to severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) infections, patients exhibited symptoms of viral pneumonia, including fever, difficulty in breathing and bilateral lung infiltration in the most severe cases (Wuhan Municipal Health Commission, 2020).
Since its emergence in China in December 2019, the coronavirus has spread very quickly across the entire world. As of 8 June 2020, there had been 6,881,352 confirmed cases of COVID-19 globally, including 399,895 deaths, reported to the WHO. According to country-wise data from the WHO dashboard on 8 June 2020, the United States of America had the highest number of confirmed cases at 1,915,712, Brazil was second with 672,846, Russia was third with 467,673, and the United Kingdom had 284,872 confirmed cases. By 8 June, India had a total of 265,740 confirmed cases, of which 129,358 were active, 128,894 had recovered successfully, and unfortunately 7,473 had died [https://www.covid19india.org/]. In India, the second most populous country in the world after China, the first COVID-19 case emerged in Kerala on 30 January 2020, imported from China. By 20 March, India had observed around 223 confirmed cases, of which 4 patients lost their lives to this pandemic. The Indian Government has taken all the necessary steps to tackle the pandemic in the country.
To date, the COVID-19 pandemic shows no sign of abating, as a vaccine is yet to be found, although all countries are trying to control it with lockdowns and local and global social distancing. In some countries where the situation is under control, governments have started unlocking in phases with the necessary precautions.
Researchers in different parts of the world, in research labs and individually, are working in fields such as medicine, bioinformatics, virology, technology, data analytics and artificial intelligence to help humanity tackle this terrible epidemic with minimum loss.

Data scientists and analysts, using advanced machine learning and deep learning algorithms, try to predict the number of people who will be infected in the future, as well as the size of the susceptible population, so that governments can take necessary actions such as implementing lockdowns and building the required healthcare infrastructure.
In this paper, our approach to the COVID-19 pandemic uses distributional semantics: the emphasis is on presenting semantic patterns in the available COVID-19 literature through contextual hierarchical clustering and intelligent topic modeling. For the contextual hierarchical clustering implementation, latent semantic analysis with novel three-level weights at the term level, document level and corpus level is used. We choose two three-level semantic spaces: ATC (augmented weighting at the term level, log term frequency at the document level and cosine normalization at the corpus level) and NPC (neutral at the term level, probabilistic weighting at the document level and cosine normalization at the corpus level).
Intelligent topic modeling is implemented by selecting semantic collocations using pointwise mutual information (PMI) and log frequency biased mutual dependency (LBMD), and then applying latent Dirichlet allocation. To show the effectiveness of the proposed methodology, both approaches are compared with neutral weights at the three levels in contextual hierarchical clustering and with the traditional topic modeling algorithm, latent Dirichlet allocation.
The paper begins with data collection and understanding, followed by four stages of analysis:
1. Keyword trend analysis.
2. Contextual hierarchical clustering in three semantic spaces.
3. Cosine similarity score analysis of term pairs in three semantic spaces.
4. Topic modeling of the dataset using intelligent latent Dirichlet allocation.
Related work
One study analyzed trips from Wuhan to other parts of China, covering different modes of transport (air, road, train) between 370 cities in China and the special administrative regions of Hong Kong and Macau, using data from 3 December 2019 to 24 January 2020. A non-homogeneous Poisson process model was constructed to predict the risk of infection in travellers coming to Wuhan city and in residents of Wuhan city (Lim et al., 2020).

Another study reported the clinical findings of the first patient to become a carrier of tertiary transmission outside China. This medical study analyzed the use of lopinavir/ritonavir at different stages of the patient's treatment and its effects (Kim et al., 2020).

A model developed at Johns Hopkins University uses stochastic simulation and aims to mitigate the pandemic at the onset of the outbreak. The metapopulation model connects the airport network at a global scale; at each airport, a discrete-time susceptible-exposed-infected-recovered model is implemented to model the spread of COVID-19 (Biswas & Sen, 2020; Li et al., 2020).

In other studies, many preliminary mathematical models were formulated by various research groups to predict the impact of the disease at several levels; these insights serve as input for designing strategies to control the epidemic. In one study, a susceptible-exposed-infected framework was formulated to prevent epidemics during large events, e.g. parties or concerts with huge crowds (Du et al., 2020).

A study conducted in South Korea attempted to isolate the pathogen from COVID-19 patients. Upper and lower respiratory tract secretion samples from putative COVID-19 patients were inoculated onto cells to isolate the virus, and full genome sequencing and electron microscopy were used to identify it (Yunlu, 2020).

In another approach, three variants of the genome sequence of the COVID-19 coronavirus, distinguished by amino acid changes and named A, B and C, with A being the ancestral type, were analyzed using phylogenetic network analysis (Forster et al., 2020).

Non-pharmaceutical interventions for preventing and controlling this deadly COVID-19 infectious disease are desirable, and IoT (Internet of Things) and machine learning methods have sufficient potential to contribute at this time (Chakraborty, 2019). A machine learning model-based study has been carried out to predict infections in Mexico City (Muhammad et al., 2020).

A secure, privacy-aware IoT-inspired model for monitoring epileptic patients has been proposed (Gupta et al., 2019); it could also be utilized in this critical SARS-CoV-2 pandemic. Major factors in containing the disease are social distancing and movement control, for which automated digital contact tracing is an effective and efficient technology; a hardware-based model that captures movement information and contacts between objects has been developed using IoT techniques (Garg et al., 2020).
Methodology

Data Collection
In this paper, the authors used a collection of 21,323 articles on COVID-19 published up to 21 May 2020. The collection comes from an open research dataset that grows through contributions of scientific papers published by researchers across the world on COVID-19 and related historical coronavirus resources. The collection contains articles on coronaviruses from 2000 to 2020, and the publication trend shown in Figure 1 was observed. Before the outbreak of the coronavirus in December 2019 in the city of Wuhan, China, very few publications existed in the dataset.
Preprocessing of Dataset
The dataset is preprocessed before topic modeling to reveal the semantic themes in the large collection. The dataset is cleaned using basic text mining tools. The collection contains some COVID-19 articles in languages other than English, such as Chinese and German; for better interpretation of the results, only articles written in English are considered, and the others are removed from the dataset (568 articles in other languages were removed). The abstracts of all 20,755 remaining documents are extracted as a single corpus object; then all stop words are removed, all punctuation symbols are removed, all capitals are converted into lowercase, all numbers are removed, and finally the corpus object is converted into a document-term matrix for topic modeling. To account for the importance of each keyword in the dataset, the term frequency-inverse document frequency (tf-idf) weighting mechanism is used during document-term matrix construction.
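A minimal sketch of such a preprocessing pipeline is shown below, using scikit-learn rather than the authors' exact tooling; the `abstracts` list stands in for the extracted English abstracts.

```python
# Hedged sketch of the preprocessing: lowercase, drop stop words, numbers and
# punctuation, then build a tf-idf weighted document-term matrix.
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "Coronaviruses are positive-sense RNA viruses found in mammals and birds.",
    "Lockdown and social distancing were used to control the 2020 outbreak.",
]

vectorizer = TfidfVectorizer(
    lowercase=True,           # convert capitals to lowercase
    stop_words="english",     # remove stop words
    token_pattern=r"[a-z]+",  # keep alphabetic tokens only (drops numbers)
)
dtm = vectorizer.fit_transform(abstracts)  # documents x terms, tf-idf weighted
print(dtm.shape, vectorizer.get_feature_names_out()[:5])
```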
Parameter Setting
In this semantic theme detection research on the COVID-19 dataset, we use two techniques: latent semantic analysis and latent Dirichlet allocation. For significant semantic pattern detection through latent semantic analysis, three-level weights at the term, document and corpus levels are used. After the preprocessing step, from the collection of 20,755 documents we obtain a document-term matrix of 786 x 20,755 with term frequency weights, to which our novel three-level weights are then applied (Deng et al., 2004; Debole & Sebastiani, 2003). In the second phase of the analysis, we apply the topic modeling algorithm latent Dirichlet allocation, with novel intelligent phrase detection using pointwise mutual information (PMI) and log frequency biased mutual dependency (LBMD), for enhanced semantic themes in the dataset.
Number of topics: in topic modeling techniques, the most important factor is choosing the number of topics. Topics should be chosen so that they truly explore the dataset and find the existing semantic themes as accurately as a human would. For probabilistic methods many techniques exist (Cao Juan et al., 2009; Deveaud et al., 2014; Griffiths and Steyvers, 2004), but which one to choose for the dataset at hand remains a difficult question. Most techniques use a likelihood method, and when executed over a range of topic numbers they converge either on the lowest number of topics, i.e. at the starting point, or in some cases on the highest number. In both cases it becomes very confusing to choose the right value: too few topics will not explore the dataset, while too many result in overlapping topics. In this work we therefore use an efficient technique given by Arun (Arun et al., 2010), in which the normalized forms of the matrices generated from the latent Dirichlet output, namely the topic-word matrix and the document-topic matrix, are used, the KL divergence between them is calculated, and the best number of topics is chosen at the point where this divergence is minimal.
So, we have chosen six as the number of topics, which broadly corresponds to the semantic themes existing in the dataset.
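A hedged sketch of the Arun et al. (2010) criterion follows: the symmetric KL divergence between the singular-value distribution of the topic-word matrix and the length-weighted document-topic distribution, evaluated for each candidate number of topics. The matrices here are random placeholders standing in for actual LDA outputs.

```python
# Sketch of the Arun et al. (2010) measure for choosing the number of topics.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) gives the KL divergence KL(p||q)

def arun_metric(topic_word, doc_topic, doc_lengths):
    cm1 = np.linalg.svd(topic_word, compute_uv=False)  # singular values
    cm1 = cm1 / cm1.sum()
    cm2 = doc_lengths @ doc_topic                      # aggregated topic mass
    cm2 = np.sort(cm2 / cm2.sum())[::-1]
    return entropy(cm1, cm2) + entropy(cm2, cm1)       # symmetric KL divergence

rng = np.random.default_rng(0)
K, D, V = 6, 200, 500
topic_word = rng.dirichlet(np.ones(V), size=K)   # K x V topic-word matrix
doc_topic = rng.dirichlet(np.ones(K), size=D)    # D x K document-topic matrix
lengths = rng.integers(50, 300, size=D).astype(float)
print(f"Arun divergence at K={K}: {arun_metric(topic_word, doc_topic, lengths):.3f}")
```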
Latent Semantic Analysis
LSA is a two-step process. The first step is to create a term-document matrix from the document collection, where each row represents a term in the collection and each column represents an individual document (Deerwester et al., 1990); each cell in this matrix contains the frequency with which the term of its row appears in the document denoted by its column. So the first step of latent semantic analysis is creating a term-document matrix with term frequency as the basic weighting for each term.
In the second step, singular value decomposition (SVD) is applied to the term-document matrix. SVD is essentially a dimension reduction technique (Papadimitriou et al., 2000) that decomposes our m x n matrix (where m is the number of terms and n the number of documents) into a product of three matrices.
In the decomposition A = U W V^T, the component U describes the original row entities of A, i.e. the terms, and V describes the original column entities of A, i.e. the documents (Gefen et al., 2017). The third component, W, is a diagonal matrix of singular values. The key property of this factorization is that if we retain only the k greatest singular values in W, together with the corresponding columns of U and V, the product of the resulting matrices is the best rank-k approximation of A.
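The following minimal sketch performs this truncated SVD step with scikit-learn; the random matrix is a placeholder for the weighted document-term matrix built earlier.

```python
# Minimal LSA sketch: truncated SVD of a document-term matrix.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
dtm = rng.random((200, 50))  # 200 documents x 50 terms (placeholder counts)

svd = TruncatedSVD(n_components=10, random_state=0)  # keep k = 10 dimensions
doc_vectors = svd.fit_transform(dtm)   # documents mapped into the latent space
term_vectors = svd.components_.T       # terms mapped into the latent space
print(doc_vectors.shape, term_vectors.shape)  # (200, 10) (50, 10)
```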
Three Level weight
The three-level weight is a concept inspired by Salton (Salton et al., 1988; Buckley et al., 2004), in which three factors are considered when assigning an appropriate weight to each term:
1. The total term count in the corpus or document collection, represented by the term frequency.
2. A collection frequency factor that separates relevant documents from irrelevant ones; for example, the inverse document frequency is used to increase a term's discriminating power in the document collection.
3. A way of accounting for document length, where a cosine normalization factor is incorporated to equalize the lengths of the documents.

In this approach, we choose two three-level semantic spaces: ATC (augmented weighting at the term level, log term frequency at the document level and cosine normalization at the corpus level) and NPC (neutral at the term level, probabilistic weighting at the document level and cosine normalization at the corpus level). In the standard SMART notation, the components augmented (a), term frequency (t), probabilistic (P) and cosine normalization (C) take the forms

a = 0.5 + 0.5 * tf / tf_max
t = tf
P = log((N - df) / df)
C = 1 / sqrt(sum_i w_i^2)

where tf is the term frequency, tf_max the largest term frequency in the document, N the number of documents, df the document frequency of the term, and w_i the weighted components of the document vector.

Algorithm 1: Contextual Hierarchical Clustering
1. The dataset is prepared in .csv format from a collection of text files.
2. The data is preprocessed using all necessary steps (stemming, punctuation removal, stop-word removal, whitespace removal, etc.) and a corpus object is made.
3. The corpus object is converted into a term-document matrix for further text processing.
4. Three-level weights are applied to the term-document matrix as: a) ATC, augmented weighting at the term level, log term frequency at the document level and cosine normalization at the corpus level; b) NPC, neutral at the term level, probabilistic weighting at the document level and cosine normalization at the corpus level.
5. The latent semantic analysis function is applied to the matrices generated in step 4, generating two latent semantic spaces: the ATC latent semantic space and the NPC latent semantic space.
6. Using the cosine similarity score for a specific term, contextual hierarchical clusters are generated in both semantic spaces.
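A sketch of such three-level weighting applied to a raw count matrix is shown below; the component formulas are the standard SMART definitions given above, and the authors' exact variants may differ.

```python
# Hedged sketch: ATC-style three-level weighting of a documents x terms matrix.
import numpy as np

def three_level_weight(X):
    n_docs = X.shape[0]
    # term level: augmented term frequency, 0.5 + 0.5 * tf / max tf in document
    tf = 0.5 + 0.5 * X / np.maximum(X.max(axis=1, keepdims=True), 1)
    # document (collection) level: probabilistic idf, log((N - df) / df);
    # note this can go negative for terms present in most documents
    df = np.maximum((X > 0).sum(axis=0), 1)
    idf = np.log(np.maximum(n_docs - df, 1) / df)
    w = tf * idf
    # corpus level: cosine normalization of each document vector
    norms = np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-12)
    return w / norms

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(6, 10)).astype(float)  # toy count matrix
print(three_level_weight(X).round(3))
```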
Latent Dirichlet Allocation
Latent Dirichlet allocation is inspired by the very popular vector space assumption of text mining known as the 'bag of words' assumption, under which the order of words in a document can be ignored. The theory of probabilistic language models such as latent Dirichlet allocation is founded on the assumption of exchangeability (Blei et al., 2003), which states that documents are exchangeable and that the order of documents can also be neglected. The idea of latent Dirichlet allocation (LDA) is that one document exhibits multiple topics in different proportions, and each topic is defined as a distribution over a fixed set of words. For example, a sports document may draw on the vocabularies of sports, health and education: the document contains words related to all three topics, and each topic has a fixed vocabulary that defines it. Determining what proportion of each of these topics a document contains is the central challenge.
LDA formally casts the detection of semantic themes or topics as a hidden variable model of documents. In these models the semantic themes in the document collection are the hidden variables and the words in the collection are the observed variables; the process of learning the topic distributions of the documents and the word distributions of the topics is described through the plate notation in Figure 4.

The distribution of the latent variables given a document is the posterior

p(θ, z | w, α, β) = p(θ, z, w | α, β) / p(w | α, β)

where θ is the per-document topic distribution, z the topic assignments, w the observed words, and α and β the Dirichlet priors.
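For orientation, a minimal sketch of fitting LDA with the six topics chosen earlier, using scikit-learn on a small placeholder corpus; the library and corpus are illustrative, not the authors' exact setup.

```python
# Minimal LDA sketch with K = 6 topics on a toy count matrix.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "coronavirus spike protein receptor binding",
    "lockdown social distancing policy response",
    "vaccine antibody immune response trial",
]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=6, random_state=0)
doc_topic = lda.fit_transform(dtm)       # documents x topics proportions
print(doc_topic.round(2))
print("perplexity:", round(lda.perplexity(dtm), 1))
```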
Proposed Intelligent Latent Dirichlet Allocation
It is an extension of traditional topic modeling algorithms in which the traditional assumptions of text mining are challenged. Traditional topic modeling works on the principle of the 'bag of words' approach and of 'exchangeability', which state, respectively, that the order of documents in a corpus does not matter and that the order of words in documents carries little weight in text mining. Very few studies consider the importance of the semantic order of words in text mining (Wallach, 2006). In this study, a novel intelligent phrase refinement using two statistical measures, pointwise mutual information (PMI) (Gerlof Bouma, 2009) and log frequency biased mutual dependency (LBMD) (Church and Hanks, 1990), is applied to select only meaningful semantic phrases for topic modeling of the COVID-19 dataset. At the preprocessing level, the semantic order between words is captured using these two statistical measures, and only those terms or phrases whose scores cross a basic threshold are considered in topic modeling.
Point wise Mutual Information (PMI)
PMI is a metric based on how much the actual probability of a particular co-occurrence of events, P(w1, w2), differs from what we would expect on the basis of the probabilities of the individual events:

PMI(w1, w2) = log2( P(w1, w2) / (P(w1) P(w2)) )

In PMI it is assumed that rare events contain more information than frequent events; this means that the PMI of perfectly correlated words is higher when the combination occurs rarely. PMI can thus be interpreted as a measure of independence rather than as a measure of correlation (Griffiths et al., 2007).

Log Frequency Biased Mutual Dependency (LBMD)

Mutual dependency can be calculated for phrases by subtracting from the PMI the information that the whole event bears, which is the self-information of the event.
So the mutual dependency (MD) between two co-occurring words w1 and w2 can be defined as

MD(w1, w2) = log2( P(w1, w2)^2 / (P(w1) P(w2)) )

Mutual dependency is maximized for perfectly dependent phrases. For statistical confidence, it has been suggested that a slight bias towards frequency can be beneficial, so a new measure, the log frequency biased mutual dependency, is defined as

LBMD(w1, w2) = MD(w1, w2) + log2 P(w1, w2)

In other words, it is a combination of mutual dependency with the t-score.

Algorithm 2: Intelligent Topic Modeling
1. The dataset is prepared in .csv format from a collection of text files.
2. The data is preprocessed using all necessary steps (stemming, punctuation removal, stop-word removal, whitespace removal, etc.) and a corpus object is made.
3. The corpus object is converted into a term-document matrix for further text processing.
4. A collocation function is applied to the term-document matrix to construct semantic phrases up to n-grams.
5. Semantic collocations (phrases) are selected using a) the pointwise mutual information (PMI) score and b) the log frequency biased mutual dependency (LBMD) score.
6. A semantic collocation matrix is constructed and latent Dirichlet allocation is applied for intelligent topic modeling.
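Under the formulas above, a hedged sketch of bigram scoring and thresholding might look as follows; the corpus and the threshold value are illustrative assumptions.

```python
# Sketch: scoring bigrams with PMI and LBMD, keeping those above a threshold.
import math
from collections import Counter

tokens = ("spike protein binds the receptor spike protein structure "
          "social distancing policy social distancing measures").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1, w2):
    p12 = bigrams[(w1, w2)] / n_bi
    return math.log2(p12 / ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))

def lbmd(w1, w2):
    # MD = PMI + log2 P(w1,w2); LBMD adds one more log2 P(w1,w2) as the bias
    p12 = bigrams[(w1, w2)] / n_bi
    return pmi(w1, w2) + 2 * math.log2(p12)

THRESHOLD = -3.0  # assumption: tuned on the real corpus
phrases = sorted(bg for bg in bigrams if lbmd(*bg) > THRESHOLD)
print(phrases)  # frequent, correlated pairs such as ('spike', 'protein')
```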
Results and Analysis
In this paper, to understand the COVID-19 literature, two techniques, latent Dirichlet allocation and latent semantic analysis, are used, and various aspects of the dataset are analyzed.
Word Level Analysis

Before detailed data analysis through topic modeling techniques, the authors explored the dataset by examining the frequency of terms. We consider the top 20 most frequent terms used in the research papers of the last two decades; Figure 5 shows these terms with their frequencies.
Context Aware Hierarchical Clustering in three semantic spaces.
In this latent semantic analysis (LSA), using the three proposed weightings, three semantic spaces are constructed, known as latent semantic space NNN, latent semantic space NTC and latent semantic space ATC. In these semantic spaces, context-aware clustering for a specific term is generated using a heat-map, in which all terms correlated according to the cosine similarity measure are clustered together. The heat-maps are constructed from the 20 terms closest to the specific term 'antibody'. The pattern in a heat-map shows the association between rows and columns, and the hierarchical clusters are formed based on the distance and similarity between them. This context-aware hierarchical clustering is shown in Figure 6(a-c). In these heat-maps, the colour scheme represents the semantic relatedness score between terms related to 'antibody': red shows similarity scores between 0 and 0.4, light red 0.4-0.6, orange 0.6-0.7, yellow (the second highest relatedness) 0.8-0.9, and white indicates 1, meaning the terms are exactly the same.

In the next phase of the analysis, modeling was done using intelligent LDA, with phrase refinement based on log frequency biased mutual dependency and pointwise mutual information. This analysis produced crisper topics in the COVID-19 dataset, and the perplexity of the topic model with the same number of parameters improved considerably. The intelligent topic modeling also provides more cohesive topics, because in the initial phase the model itself computes the critical statistical measures, pointwise mutual information (PMI) and log frequency biased mutual dependency (LBMD); after the selection of quality phrases, the document-term matrix is constructed for the subsequent topic modeling.

In Figure 7, the document-topic proportions are shown for both models. In intelligent LDA, 50% of documents fit into topic 6, 40% into topic 1, 5% into topic 5, and the remaining 5% comprise topics 2 and 4. In the latent Dirichlet allocation model, topic 3 contains 60% of the documents, topic 2 contains 15%, topic 6 contains 10%, topic 4 contains 5%, topic 5 contains 3%, and topic 1 contains only 2%. When the perplexity of both models was calculated at six topics, latent Dirichlet allocation scored 822.32 while the intelligent phrase refinement topic model scored 445.2167, a great improvement for the intelligent approach, since a lower perplexity value indicates a better topic model.
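For completeness, a small sketch of the cosine similarity ranking used to pick the 20 terms closest to 'antibody'; the vocabulary and vectors are random placeholders standing in for the latent-space term vectors from the LSA step.

```python
# Sketch: rank terms by cosine similarity to a query term in a latent space.
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"term{i}" for i in range(100)] + ["antibody"]
term_vectors = rng.normal(size=(len(vocab), 10))  # placeholder latent vectors

def top_k_similar(query, k=20):
    q = term_vectors[vocab.index(query)]
    sims = term_vectors @ q / (
        np.linalg.norm(term_vectors, axis=1) * np.linalg.norm(q))
    order = np.argsort(sims)[::-1]
    return [(vocab[i], round(float(sims[i]), 3)) for i in order[1:k + 1]]

print(top_k_similar("antibody", k=5))  # nearest terms, excluding the query
```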
CONCLUSION
While the entire world is suffering from the COVID-19 pandemic, it is very important to understand the pandemic from multiple perspectives, such as virology, medicine, bioinformatics, economics, artificial intelligence and epidemiological modeling.
In this paper, the authors proposed a novel semantic pattern detection approach based on contextual hierarchical clustering and intelligent topic modeling to explore the COVID-19 literature up to 23 May 2020. The dataset contains around 21,323 documents. It is analyzed using latent semantic analysis with three-level weights at the term, document and corpus levels. These weightings are known as NTC (neutral, inverse document frequency, cosine normalization) and APC (augmented, probabilistic, cosine normalization). To evaluate the results, we compare these two semantic spaces with NNN (neutral at all three levels), i.e. no weights except term frequency in the document-term matrix. In all the semantic spaces we compare the results using co-term similarity based on the cosine similarity score, which shows how contextually close terms appear in each of the three semantic spaces. The proposed three-level weighted NTC and APC latent semantic spaces show significant improvement compared with the NNN latent semantic space, as shown in Table 1, and there is also significant improvement in the contextual hierarchical clustering revealed by the exploratory data analysis shown in Figure 6(a-c). At the next level of the COVID-19 corpus analysis, the authors used latent Dirichlet allocation and intelligent latent Dirichlet allocation topic modeling techniques to find the topics in the dataset; intelligent latent Dirichlet allocation showed lower perplexity values and more cohesive topics.
In the future, it is advisable that novel topic modeling techniques based on contextual semantics be used in bioinformatics. For example, the genome sequence of SARS-CoV-2 could be explored using non-negative matrix factorization and its variants for efficient pattern mining, to understand the structure of SARS-CoV viruses in more detail.
Figure 2. Methodology for the proposed approach to cognitive semantic theme detection in the COVID-19 dataset.
Figure 3. Optimal number of topics selection.
Figure 5. Top 20 terms with frequency in the dataset.
Figure 6(a). NNN latent semantic space.
Figure 7(a). Document-topic proportion in LDA topic modeling; 7(b). Document-topic proportion in intelligent LDA topic modeling.
Figure 8. Six topics with top 15 terms as word clouds.
Activation of the intrinsic and extrinsic pathways in high pressure-induced apoptosis of murine erythroleukemia cells
We previously demonstrated that caspase-3, an executioner of apoptosis, is activated in the pressure-induced apoptosis of murine erythroleukemia (MEL) cells (at 100 MPa). Here, we examined the pathway of caspase-3 activation using peptide substrates and caspase inhibitors. Using the substrates of caspases-8 and -9, it was found that both are activated in cells under high pressure. The production of nuclei with sub-G1 DNA content in 100 MPa-treated MEL cells was suppressed by inhibitors of caspases-8 and -9, and pan-caspase. In 100 MPa-treated cells, pan-caspase inhibitor partially prevented the cytochrome c release from the mitochondria and the breakdown of mitochondrial membrane potential. These results suggest that the intrinsic and extrinsic pathways are activated in apoptotic signaling during the high pressure-induced death of MEL cells.
INTRODUCTION
To maintain tissue homeostasis in multicellular organisms, unwanted cells undergo cell death and are eliminated from the tissue. Recent research has revealed the pathways of apoptotic and non-apoptotic cell death [1][2][3]. Apoptosis is programmed and caspase-dependent cell death, and it is a physiological process that removes virus-infected or surplus cells [4]. All forms of apoptosis are characterized by structural properties such as cell shrinkage, condensation of nuclei, and loss of microvilli [1][2][3][4]. The non-apoptotic pathway is caspase-independent, and cell deaths such as necrosis and autophagy are involved in this pathway [1]. The properties of necrosis are cellular swelling and organelle degradation [1][2][3]. In its final stages, the cell membrane is disrupted, so inflammation occurs due to the release of the cellular contents. It was recently shown that autophagy also participates in programmed cell death in a caspase-independent manner [1]. The morphology of autophagic cells is distinct from those of apoptotic and necrotic cells. In apoptotic signaling, there are two caspase-dependent pathways, i.e. extrinsic and intrinsic [5]. The extrinsic pathway is activated in Fas-induced apoptosis. In this pathway, the Fas ligand, a member of the tumor-necrosis factor family, binds to the cell-surface death receptor Fas. This ligand-receptor interaction recruits the adaptor protein FADD (Fas-associating protein with death domain), which in turn recruits procaspase-8 [2]. Oligomerization of procaspase-8 leads to its autocatalytic activation. The activated caspase-8 directly activates effector caspases such as caspase-3 [6,7]. Furthermore, caspase-8 cleaves the N-terminal domain of Bid, an apoptosis-promoting member of the Bcl-2 family [7]. Truncated Bid translocates to the mitochondria and induces the breakdown of mitochondrial membrane potential and the release of cytochrome c [7]. Thus, caspase-8 is the apical caspase in the extrinsic pathway. The intrinsic pathway is activated by various cellular stresses, including serum deprivation. The release of cytochrome c from the mitochondria into the cytosol is another pathway to activate caspase-3 via caspase-9 [8,9]. The release of cytochrome c from the mitochondria is mediated by Bcl-2 family members such as Bax and Bcl-2 [10]. Pro-apoptotic members like Bax and Bid facilitate the cytochrome c release, whereas the anti-apoptotic members such as Bcl-2 and Bcl-XL prevent its release [10]. Thus, the mitochondria play a central role in the intrinsic pathway. Apoptosis is induced upon the exposure of cells to high pressures. For instance, mammalian cells such as MEL cells [11] and human lymphoblasts [12] undergo apoptosis when exposed to a pressure of 100 MPa. Interestingly, living organisms have been found in deep-sea environments where the pressure reaches 110 MPa. Thus, it is interesting to examine how apoptosis in mammalian cells is induced by high pressure. Previously, we demonstrated that caspase-3 is activated in high pressure-induced apoptosis [11]. However, the pathway of caspase-3 activation remains unclear. In this paper, we report that the intrinsic and extrinsic pathways are activated in high pressure-induced apoptosis of MEL cells.
Detection of apoptosis by flow cytometry
MEL cells were preincubated in the presence of caspase inhibitors for 60 min at 37ºC, exposed to a pressure of 100 MPa, and then incubated in the presence of caspase inhibitors for 90 min at atmospheric pressure (0.1 MPa) and 37ºC. The concentration of all the caspase inhibitors used was 50 μM. For z-VAD-fmk only, a concentration of 100 μM was also used. After culture, the suspensions were centrifuged for 5 min at 250 x g and 4ºC. The cells were washed twice with phosphate-buffered saline (PBS: 136 mM NaCl, 2.7 mM KCl, 8.1 mM Na2HPO4, 1.5 mM KH2PO4, pH 7.4) and then fixed with 70% ethanol overnight at -20ºC. The samples were washed with PBS and treated with RNase A (100 μg/ml) in PBS for 20 min at 37ºC. After treatment, the cells were washed once in PBS and stained with PI (50 μg/ml) for 10 min at room temperature. Flow cytometric analysis was performed using an EPICS XL System II (Coulter).
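Downstream of acquisition, the apoptotic fraction is read from the PI histogram as the proportion of events below the G1 peak. The following is a hypothetical illustration of that sub-G1 gating step with simulated intensities and an invented gate value; it is not the authors' analysis pipeline.

```python
# Hypothetical illustration: estimating the fraction of sub-G1 (apoptotic)
# nuclei from PI fluorescence intensities. Gate value and data are invented.
import numpy as np

rng = np.random.default_rng(1)
# Simulated PI intensities: a G1 peak, a G2/M peak, and a sub-G1 tail
pi = np.concatenate([
    rng.normal(200, 15, 6000),   # G1 population
    rng.normal(400, 25, 2500),   # G2/M population
    rng.normal(100, 20, 1500),   # sub-G1 (fragmented DNA)
])

sub_g1_gate = 150.0              # intensity threshold below the G1 peak
frac_sub_g1 = np.mean(pi < sub_g1_gate)
print(f"sub-G1 fraction: {frac_sub_g1:.1%}")
```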
Measurements of caspase activity
For the measurement of the caspase-8 and -9 activities, 100 MPa-treated cells were cultured at atmospheric pressure for 90 min at 37ºC. After the culture, the cells were washed twice with chilled PBS, and then treated with aqueous solution containing 1% Triton X-100 and 1% NP-40. The lysate was incubated for 30 min at 4ºC, and centrifuged for 5 min at 17,000 x g and 4ºC. The supernatant and a substrate (Ac-IETD-MCA for caspase-8 and Ac-LEHD-MCA for caspase-9) in 100 mM HEPES-KOH, 5 mM DTT, 10% NP-40 and 10% sucrose, pH 7.4 were mixed, and the mixture (0.5 ml) was incubated for 60 min at 37ºC. After incubation, 2 ml of sodium acetate (1 M, pH 4.2) was added into the reaction mixture. The released 7-amino-4-methylcoumarin (AMC) was measured at 380 nm excitation and 460 nm emission.
Measurements of the mitochondrial membrane potential and cytochrome c

MEL cells were preincubated in the presence of z-VAD-fmk (100 μM) for 30 min at 37ºC, exposed to a pressure of 100 MPa, and then cultured in the presence of z-VAD-fmk (100 μM) for 90 min at atmospheric pressure and 37ºC. After the culture, the cells were washed three times with PBS. For the measurement of mitochondrial membrane potential, the cells were suspended in PBS containing Rhodamine 123 (10 μM) and incubated for 15 min at 37ºC. The membrane potential was measured using a flow cytometer (EPICS XL System II, Coulter). To isolate the mitochondria, the cells were suspended in mitochondrial isolation buffer (250 mM sucrose, 20 mM HEPES-KOH, 10 mM KCl, 1 mM EGTA, 1 mM EDTA, 1.5 mM MgCl2, 1 mM DTT, 0.1 mM PMSF, 10 μg/ml leupeptin) and stood for 30 min at 0ºC. The samples were homogenized by douncing forty times in a Dounce homogenizer, and centrifuged at 800 x g for 10 min at 4ºC. The supernatants were centrifuged at 20,000 x g for 15 min at 4ºC. The pellets were used to measure cytochrome c within the mitochondria. The cytochrome c was measured using an ELISA kit (Quantikine M, R&D Systems, Inc.).
The suppression of high pressure-induced apoptosis by caspase inhibitors
To examine the pathway of caspase-3 activation, MEL cells exposed to a pressure of 100 MPa were cultured in the presence of caspase inhibitors, and analyzed via flow cytometry (Fig. 1A). The apoptotic cells appeared to have dominantly sub-G1 DNA content (Fig. 1A-b). The high pressure-induced apoptosis was significantly suppressed by the inhibitors of the pan-caspases (Fig. 1A-c and 1B), caspase-8, and caspase-9 (Fig. 1C). Of these inhibitors, the pan-caspase inhibitor z-VAD-fmk was most effective (Fig. 1C). The inhibition effect of z-VAD-fmk at 100 μM was almost the same as that at 50 μM (data not shown). Additionally, when the activities of caspases-8 and -9 were examined using fluorescence substrates, the caspase-8 activity in extracts prepared from 100 MPa-treated MEL cells increased 5.6-fold compared with that from cells not subjected to high pressure, whereas the caspase-9 activity increased 3.8-fold. These results suggest the possible contribution of caspases-8 and -9 to the activation of caspase-3.
Mitochondrial membrane potential in high pressure-treated MEL cells
Positively charged lipophilic molecules such as Rhodamine 123 are partitioned between the cell and the surrounding medium depending on the mitochondrial membrane potential. Therefore, these charged dyes are plentifully incorporated into polarized mitochondrial membranes, but poorly into depolarized ones. Thus, the normal cells with polarized mitochondrial membranes are expected to show strong fluorescence intensity, whereas apoptotic cells that have a breakdown of mitochondrial membrane potential show weak fluorescence intensity. Therefore, changes in the mitochondrial membrane potential in MEL cells were examined using Rhodamine 123 ( Fig. 2A). In the 100 MPa-treated MEL cells, the breakdown of mitochondrial membrane potential was observed in parts of the cell population. Such a reduction in membrane potential was considerably recovered by z-VAD-fmk.
Release of cytochrome c from the mitochondria in high pressure-treated MEL cells
The release of cytochrome c from the mitochondria in high pressure-treated cells was examined using the ELISA method (Fig. 2B). In 100 MPa-treated MEL cells, about 70% of cytochrome c was released from the mitochondria. However, this release of cytochrome c was suppressed by about 40% by z-VAD-fmk.
DISCUSSION
We previously showed that there is caspase-3 activation in the high pressure-induced apoptosis of MEL cells [11]. In this study, the pathway of caspase-3 activation was analyzed using peptide substrates and caspase inhibitors. Several lines of evidence show the contribution of caspases-8 and -9 to the signaling pathway of 100 MPa-induced apoptosis. The breakdown of mitochondrial membrane potential and the release of cytochrome c from the mitochondria are interesting events in apoptosis. The response of the cells to high pressure is dependent on the cell cycle. MEL cells in the G1- or G2-phase are stable to a pressure of 80 MPa, whereas cells in the S-phase are sensitive to that pressure [13]. That explains why the breakdown in the membrane potential is observed only in a part of the MEL cell population. In high pressure-treated cells, the breakdown in the membrane potential is prevented by z-VAD-fmk. Here, it is useful to compare our results with those for apoptosis induced by other methods. In apoptosis induced by ionizing radiation in Jurkat cells, a loss of mitochondrial membrane potential is observed [6]. However, this loss is unaffected by z-VAD-fmk, indicating the contribution of a caspase-independent pathway to this event [6]. Thus, the apoptotic signaling to the mitochondria induced by high pressure is different from that induced by ionizing radiation. One pathway of caspase-3 activation is associated with the release of cytochrome c from the mitochondria [4,7]. The released cytochrome c binds to Apaf-1 (apoptotic protease activating factor-1). This oligomeric cytochrome c-Apaf-1 complex recruits and activates caspase-9. Then, caspase-9 activates the executioner caspase-3 [4,7]. In 100 MPa-treated MEL cells, the release of cytochrome c occurs in parts of the cell population. Provided that the mitochondrial outer membranes were disrupted by a pressure of 100 MPa, cytochrome c would be released from the mitochondria of all the cells. However, no such release of cytochrome c is observed. This suggests that the cytochrome c release from the mitochondria in 100 MPa-treated MEL cells is not due to the disruption of the outer mitochondrial membranes by high pressure, but is a response to apoptotic signals. Such a release of cytochrome c is partially prevented by z-VAD-fmk. This suggests that Bid, a pro-apoptotic factor, remains inactive due to the inhibition of caspase-8 by z-VAD-fmk [14]. Furthermore, the portion of the cytochrome c release that is insensitive to z-VAD-fmk suggests the existence of a caspase-independent pathway such as the translocation of Bax from the cytosol to the mitochondrial membranes [4,7,9]. In 100 MPa-treated MEL cells, we demonstrated that caspase-9 is activated, and that the apoptosis is suppressed by the caspase-9 inhibitor. These results suggest that the intrinsic pathway is activated in high pressure-induced apoptosis of MEL cells. Another pathway of caspase-3 activation is associated with caspase-8 [6,15]. In 100 MPa-treated MEL cells, we demonstrated caspase-8 activation and reduction of the nuclei with sub-G1 DNA content by the caspase-8 inhibitor. These results suggest that the extrinsic pathway is also activated in high pressure-treated MEL cells. Active caspase-3, an executioner of apoptosis, cleaves the inhibitor of caspase-activated deoxyribonuclease (ICAD), which forms a complex with caspase-activated deoxyribonuclease (CAD) to inhibit its DNase activity [16]. The released CAD enters the nucleus and degrades chromosomal DNA.
In this study, the fragmentation of DNA was monitored by flow cytometry. Cells with degraded DNA have sub-G1 DNA contents. The production of nuclei with sub-G1 DNA contents is inhibited, but not completely, by z-VAD-fmk or the caspase-3 inhibitor Ac-DEVD-CHO (200 μM) [11]. Thus, it seems likely that caspase-independent DNA degradation, as seen with necrosis, also occurs in 100 MPa-treated MEL cells. Similar results for necrotic cell death are reported for 100 MPa-treated lymphoblasts [12]. In terms of analysis of the signaling pathway, UV-induced apoptosis is better understood than pressure-induced apoptosis. It is well known that UV irradiation induces DNA damage such as thymine dimers and nucleotide deletion [17,18]. Such DNA damage induces the activation of ATR (ATM- and Rad3-related) protein kinase and in turn Chk1, a protein kinase needed for the DNA damage G2 checkpoint control [19]. If DNA damage is severe, the apoptotic pathway via the mitochondria is activated. On the other hand, it is unclear what the apoptotic stimulus for high pressure is. By contrast to the UV situation, DNA is stable against high pressure. For instance, 80 MPa-treated Xenopus nuclei are able to replicate DNA in Xenopus extracts [20]. It is known that oligomeric proteins are dissociated under high pressures [21]. So, it is of interest to examine the influence of high pressure on multiprotein complexes participating in DNA replication. Further, reactive oxygen species (ROS) also induce apoptosis, and the production of ROS is enhanced by pressure treatment [22]. Thus, data need to be accumulated to understand how apoptosis is induced by high pressure. Further work will investigate the pathway of caspase-8 activation and the caspase-independent pathway of cytochrome c release in 100 MPa-induced apoptosis.
C-terminal Amino Acid Residues Are Required for the Folding and Cholesterol Binding Property of Perfringolysin O, a Pore-forming Cytolysin*
Perfringolysin O (θ-toxin) is a pore-forming cytolysin whose activity is triggered by binding to cholesterol in the plasma membrane. The cholesterol binding activity is predominantly localized in the β-sheet-rich C-terminal half. In order to determine the roles of the C-terminal amino acids in θ-toxin conformation and activity, mutants were constructed by truncation of the C terminus. While the mutant with a two-amino acid C-terminal truncation retains full activity and has similar structural features to native θ-toxin, truncation of three amino acids causes a 40% decrease in hemolytic activity due to the reduction in cholesterol binding activity with a slight change in its higher order structure. Furthermore, both mutants were found to be poor at in vitro refolding after denaturation in 6 M guanidine hydrochloride, resulting in a dramatic reduction in cholesterol binding and hemolytic activities. These activity losses were accompanied by a slight decrease in β-sheet content. A mutant toxin with a five-amino acid truncation expressed in Escherichia coli is recovered as a further truncated form lacking the C-terminal 21 amino residues. The product retains neither cholesterol binding nor hemolytic activities and shows a highly disordered structure as detected by alterations in the circular dichroism and tryptophan fluorescence spectra. These results show that the C-terminal region of θ-toxin has two distinct roles; the last 21 amino acids are involved in maintaining an ordered overall structure, and in addition, the last two amino acids at the C-terminal end are needed for protein folding in vitro, in order to produce the necessary conformation for optimal cholesterol binding and hemolytic activities.
Thiol-activated cytolysins (1) comprise a family of bacterial protein toxins that are produced by Gram-positive bacteria. They share a high degree of homology in their amino acid sequences (40-70%) (2-7) and have common biological characteristics, cholesterol binding and the formation of oligomeric pores on plasma membranes. Perfringolysin O (472 amino acids), known as θ-toxin, is such a toxin produced by Clostridium perfringens type A. Its cytolytic mechanism is thought to comprise at least four steps: binding to cholesterol in membranes, insertion into the membrane, oligomerization, and pore formation. θ-Toxin binds specifically to cholesterol on plasma membranes with high affinity (Kd ≈ 10⁻⁹ M) (8). By forming oligomeric pores on plasma membranes (9), θ-toxin causes cell disruption.
After several attempts to crystallize θ-toxin (10,11), its three-dimensional structure was recently revealed by x-ray diffraction (12). This analysis showed θ-toxin to be an elongated rod-shaped molecule rich in β-sheets and to consist of four discontinuous domains. Domain 4 (Fig. 1b) (residues 363-472), the C-terminal domain, is an autonomous structure comprising a continuous amino acid chain. Six of the seven total tryptophan residues reside in domain 4, and three are located in the sequence ECTGLAWEWWR (residues 430-440), the longest conserved sequence among thiol-activated cytolysins (2,3). From many efforts to achieve mutagenesis of this toxin family (13-17), it was shown that all mutations that inhibit cell binding activity reside in domain 4, suggesting that some region in domain 4 binds to membrane cholesterol upon binding to cells. This is consistent with our previous findings that a C-terminal tryptic fragment that contains predominantly domain 4 binds to cholesterol and to cholesterol-containing membrane (18). Our findings that the toxin binding to cholesterol in liposomal membrane triggers a conformational change around tryptophan residues in domain 4 also support this view (19,20). Recently, possible roles of the C-terminal region in cell binding were suggested by a report that a monoclonal antibody thought to bind near the C terminus specifically blocks cell binding, although the exact epitope was not identified (21). Despite this finding, it is not known whether the C-terminal region plays a role in cholesterol binding or membrane insertion activity, inasmuch as either one could affect toxin binding to cells. Recent x-ray crystallographic analysis showed that there are two β-strands in antiparallel orientation in the C-terminal end and that one of them is composed of 7 amino acids at the C-terminal end (12).
Here, we constructed and analyzed toxin mutants truncated in the C terminus to define the role of the C-terminal region in cholesterol binding activity. Using an ELISA assay for quantitative analysis of cholesterol binding activity, we show that the C-terminal end is essential for folding of θ-toxin into the native conformation, thus ensuring activities of cholesterol binding and hemolysis.
Site-directed Mutagenesis-Plasmid pNSP10 containing the perfringolysin O gene (pfoA) (13) was used to construct six pfoA derivatives encoding truncated θ-toxins. Stop codons were introduced at appropriate sites in the pfoA gene by a site-directed mutagenesis kit (CLONTECH) based on the unique site elimination method (22). The 5′-deoxyoligonucleotide dGTGACTGGTGAGGCCTCAACCAAGTC was used to make a unique restriction site for the selection of all mutations. Stop codon insertion was performed using the following mutagenic primers: for Δ471, the 5′-deoxynucleotide dCAGTTTTTACTTTAGTaTAcTatTAAGTAATACTAG; for Δ470, dCTTTAGTTTAATTtTAtcaAATACTgGATCCAGGGT; for Δ468, dGTTTAATTGTAAGTttatCagGATCCAGGGT. Lowercase letters represent bases changed for mutagenesis. The DNA sequences in the resulting plasmids were confirmed by means of the dideoxynucleotide chain-termination method (23). Predicted amino acid sequences at the C-terminal ends of the mutant toxins are shown in Fig. 1.
Protein Production and Purification-Protein production and purification were performed as described previously (13) with slight modifications. Escherichia coli strains BL21(DE3) and BL21(DE3) harboring pLysS (24) (Novagen, Madison, WI) were used for the overexpression of wild type θ-toxin and mutant toxins. Wild type θ-toxin and mutant toxins were purified from the periplasmic fraction by a series of DEAE-Sephacel chromatographies. In the case of mutant toxins having no hemolytic activity, the fractions eluted from the first DEAE-Sephacel column were analyzed by immunostaining with anti-θ-toxin antibody after SDS-PAGE. Then, the toxin fractions were loaded onto a second DEAE-Sephacel column equilibrated with 20 mM BisTris, pH 6.5, and eluted with the same buffer containing 40 mM NaCl. For further purification, the toxin fractions were applied to a hydroxylapatite column equilibrated with 20 mM sodium phosphate buffer, pH 7.5, and the toxins were eluted with 100 mM sodium phosphate. Then, the toxins were loaded onto a butyl-agarose column equilibrated with 20 mM Tris-HCl, pH 7.5, containing 1.7 M (NH4)2SO4 and eluted with 0.5 M (NH4)2SO4. The purity of the toxins was checked by SDS-PAGE (25).
Determination of the Hemolytic Activity of Toxins-Hemolytic activity was determined as described previously (26). The amount of toxin required for 50% hemolysis of 1 ml of 0.5% sheep erythrocytes in 30 min at 37°C (HD50) was determined using the von Krogh equation (27). The hemolytic activity of each toxin was obtained as a 1/HD50 value and expressed relative to the wild type toxin.
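Assuming the usual form of the von Krogh equation, x = K(y/(1-y))^(1/n), where x is the toxin amount and y the lysed fraction, HD50 equals K, the value of x at y = 0.5. The sketch below fits this relation on invented data points to recover HD50 and the 1/HD50 activity; it only illustrates the calculation, not the authors' exact procedure.

```python
# Hedged sketch of estimating HD50 via the von Krogh equation,
# x = K * (y/(1-y))**(1/n); taking logs gives a straight line,
# log x = log K + (1/n) * log(y/(1-y)), whose intercept is log HD50.
import numpy as np

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # toxin amount (ng), hypothetical
y = np.array([0.10, 0.28, 0.55, 0.80, 0.93])   # fraction hemolysis, hypothetical

# Linear fit of log x against log(y/(1-y)): slope = 1/n, intercept = log HD50
slope, intercept = np.polyfit(np.log(y / (1 - y)), np.log(x), 1)
hd50 = np.exp(intercept)
activity = 1.0 / hd50                           # activity reported as 1/HD50
print(f"HD50 = {hd50:.2f} ng, relative activity = {activity:.3f} per ng")
```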
Binding of Wild Type and Mutant Toxins to Sheep Erythrocytes-After activation with 10 mM dithiothreitol for 30 min at 10°C, each toxin (0.3 μg) was incubated with 0.5% hematocrit sheep erythrocytes in phosphate-buffered saline, pH 7.0, containing 1 mg/ml bovine serum albumin for 20 min at 20°C. The mixture was centrifuged at 250,000 × g for 20 min at 4°C, and both the pellet and supernatant fractions were analyzed by Western blotting.
Binding to Cholesterol on TLC Plates-The cholesterol binding activity of each toxin was examined by using TLC plates as described previously (13,18).
ELISA Assay-The cholesterol binding activity of each toxin was determined by ELISA using a microtiter plate (Immulon 1, Dynatech Laboratories, Alexandria, VA). The wells were coated with various concentrations of cholesterol (12-10,000 pmol) and treated with 10 mg/ml fatty-acid-free bovine serum albumin in Tris-buffered saline for 1 h for blocking. Toxins (1 ng each) were then added to the wells, and the mixtures were incubated for 1 h. After washing with Tris-buffered saline, the mixtures in the wells were incubated with anti-(θ-toxin) antibody for 1 h, followed by incubation with peroxidase-conjugated anti-rabbit IgG for 1 h. Toxins bound to the cholesterol on the microtiter plates were detected by measuring the intensity at 410 nm of the color development with 2,2′-azino-di[3-ethyl-benzthiazoline sulfonate(6)] (Kirkegaard & Perry Laboratories Inc., Gaithersburg, MD) as a peroxidase substrate.
Susceptibility of Toxins to a Protease-Purified toxins (600 ng each) were treated at 22°C with subtilisin Carlsberg at an enzyme to substrate ratio of 1:60 (28). At appropriate times, the digestion was stopped by the addition of 1 mM phenylmethanesulfonyl fluoride. The resultant fractions were analyzed by SDS-PAGE and Western blotting using anti-(θ-toxin) antibody.
Measurement of Circular Dichroism Spectra-Circular dichroism (CD) spectra were recorded using a JASCO J-720 spectropolarimeter at room temperature with 1- or 5-mm pathlength cells. Purified proteins were diluted in 10 mM phosphate buffer with or without 150 mM NaCl. Scans from 250 to 190 nm were recorded with 1-mm cells in the absence of NaCl to minimize buffer noise. Molecular ellipticity ([θ]) was calculated based on the mean residue weight and extinction coefficient (E280, 0.1%), estimated as 110.8 and 1.6, respectively. The CONTIN program for secondary structure estimation was kindly provided by Dr. S. W. Provencher.
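For reference, mean residue ellipticity is conventionally obtained from the observed signal as [θ] = θobs(mdeg) × MRW / (10 × l(cm) × c(mg/ml)). The snippet below applies this standard conversion; the observed ellipticity value is invented, while MRW = 110.8 comes from the text and the cell/concentration defaults echo the later figure legend.

```python
# Sketch of the standard mean-residue-ellipticity conversion for CD data:
# [theta] = theta_obs_mdeg * MRW / (10 * pathlength_cm * conc_mg_ml)
# The observed value below is invented; MRW = 110.8 is from the text.
def mean_residue_ellipticity(theta_mdeg, mrw=110.8, path_cm=0.5, conc_mg_ml=0.03):
    """Return [theta] in deg*cm^2/dmol from observed ellipticity in millidegrees."""
    return theta_mdeg * mrw / (10.0 * path_cm * conc_mg_ml)

# Example: -12 mdeg observed in a 5-mm (0.5 cm) cell at 30 ug/ml (0.03 mg/ml)
print(f"[theta] = {mean_residue_ellipticity(-12.0):.0f} deg cm^2 dmol^-1")
```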
Measurement of Trp Fluorescence-Fluorescence studies were performed with a Shimadzu spectrofluorophotometer RF-5000. Emission spectra were measured in the range of 300-400 nm with an excitation wavelength of 280 or 295 nm. Purified toxins were diluted with Hepes-buffered saline, pH 7.0, to a protein concentration of 10 μg/ml.
Unfolding and Refolding of Wild Type Toxin and Mutant Toxins-Wild type and mutant toxins were unfolded by treatment with 6 M guanidine hydrochloride (GdnHCl) in 20 mM Tris-HCl, pH 8.0, 150 mM NaCl at room temperature for 10-30 min. Denaturation was confirmed by a red shift in the fluorescence emission wavelength to 350 nm at an excitation wavelength of 280 nm. Refolding was carried out by dialyzing the samples against 20 mM Tris-HCl, pH 8.0, 150 mM NaCl at 4°C for 20 h.
N-terminal Sequence-N-Terminal sequences of wild type and mutant toxins were analyzed with a Biosystems protein sequencer 476A.
Mass Spectrometry-The molecular masses of the toxins were measured by electrospray ionization mass spectrometry (ESI-MS) using a Fourier transform ion cyclotron resonance mass spectrometer BioApex47E (Bruker Instruments) equipped with an external ESI source (Analytica of Branford). Before being injected into the source by a syringe pump operated at 30 μl/h, the samples were desalted on a reverse-phase high pressure liquid chromatography column (Senshu Pak C8-1251-N) eluted with a 10-60% gradient of acetonitrile, 0.1% trifluoroacetic acid. In some cases, the samples were prepared from the SDS-PAGE gel by the method of Nakayama et al. (29). After SDS-PAGE, the gel was washed with distilled water and stained with 0.3 M CuCl2 for 3 min. The toxin spot was excised from the SDS-PAGE gel and destained successively in 25 mM Tris-HCl, pH 8.3, and 12.5 mM Tris-HCl, pH 8.3. Then, the toxin was extracted from the gel in 50 mM Tris-HCl, pH 8.8, containing 50 mM EDTA and 0.1% SDS. To remove SDS and other impurities, the extracted toxin was applied to a Phenyl-5PW RP column (Tosoh, Tokyo, Japan) and recovered with 80% acetonitrile in 0.1% trifluoroacetic acid. When this method was used, the molecular mass of θ-toxin was determined by subtracting the mass of the copper adduct, 63.4, from the observed mass.
RESULTS
Characterization of Truncated θ-Toxin-θ-Toxin mutants truncated at the C terminus were constructed and expressed in E. coli as described under "Experimental Procedures" (Figs. 1 and 2). θ-Toxin has an intrinsic signal sequence at its N terminus and is secreted into the periplasm when expressed in E. coli (Fig. 2 and Ref. 13). A similar expression profile was observed for Δ471, and amounts comparable to wild type θ-toxin were recovered from the periplasmic fraction (Fig. 2). A slightly smaller amount was recovered in the case of Δ470. Upon expression of the DNA construct for Δ468 and mutants with larger truncations, the amounts of proteins with molecular sizes close to that of intact θ-toxin decreased with concomitant increases in the amounts of degradation products with sizes around 38 kDa (Fig. 2 and data not shown). The results suggest that truncations at the C terminus affect the biosynthesis of θ-toxin and/or its secretion into the periplasm.
To further characterize the mutant toxins, the expressed proteins were purified from the periplasmic fraction by DEAE-Sephacel column chromatographies, and their molecular masses were determined by ESI-MS (Table I). The observed molecular masses of wild type, Δ471, and Δ470 are within the range of the predicted molecular masses. In contrast, the observed molecular mass of the protein recovered from the cells harboring the constructed plasmid for Δ468 is smaller than the predicted mass for Δ468 (Table I). This indicates that the production of the protein with a five-amino acid C-terminal truncation brings about a further truncated form. N-terminal sequence analysis revealed that the product has the same N-terminal sequence as the wild type toxin. From the results of the N-terminal and molecular mass analyses, we conclude that the product comprises residues 1-451 (predicted Mr, 50,375.6), with a 21-amino acid truncation at the C terminus. We designate the product as Δ452 hereafter. The elution profile of Δ452 is different from those of wild type θ-toxin and the two mutants Δ471 and Δ470; the former eluted from the DEAE-Sephacel column at 110 mM NaCl, whereas the latter eluted at 60 mM.
The relative hemolytic activities of the purified wild type and three mutant toxins were determined (Fig. 3, upper part). No differences in hemolytic activity were detected between wild type θ-toxin and Δ471, indicating that the deletion of two amino acids from the C terminus of θ-toxin does not affect hemolytic activity. In contrast, Δ470 showed a lower hemolytic activity, 40% that of wild type, while Δ452 showed no hemolytic activity. These results indicate that truncation of the C terminus by 21 amino acids causes a loss of hemolytic activity.
Hemolysis by θ-toxin involves two important steps, binding and insertion into membranes, prior to pore formation. The binding activity to cells was measured and compared among the wild type and mutant toxins (Fig. 3, lower part). Δ471 showed high-affinity binding to sheep erythrocytes similar to the wild type, but Δ470 showed only very weak binding. Δ452, which has no hemolytic activity, never bound to the cells. These results show a good correlation between cell binding activity and relative hemolytic activity.
Cholesterol on plasma membranes serves as a receptor for θ-toxin. Fig. 4a shows the cholesterol binding activity of mutant toxins on TLC plates as detected by immunostaining with anti-θ-toxin antibody. Δ471 and Δ470 were found to bind to cholesterol on TLC plates and to specifically recognize free cholesterol but not phosphatidylcholine or esterified cholesterol. Their manner of binding was the same as that of the wild type toxin, although Δ470 showed weaker spots. On the other hand, Δ452 did not bind to cholesterol at all. Fig. 4b shows the quantitative analysis of the cholesterol binding activity of the toxins by ELISA. Δ471 shows an activity comparable to the wild type toxin. The activity of Δ470 is about 40% of the wild type, while no activity could be detected for Δ452. The results show that the cholesterol binding activity of the toxins correlates well with their cell binding and hemolytic activities. We previously reported that mutants with Trp to Phe substitutions within the tryptophan-rich consensus sequence show decreased binding affinity for erythrocytes (13). We examined the cholesterol binding activity of two such mutants, W438F and W439F, by ELISA and compared the results with the cholesterol binding activity of the C-terminal truncation mutants (Fig. 4b). Mutants with Trp to Phe substitutions show cholesterol binding activity similar to that of the wild type toxin (Fig. 4b), showing that mutation of Trp in the consensus sequence has little effect on the cholesterol binding activity. The decrease in cell binding activity of these mutants should be attributable to step(s) other than cholesterol binding. This makes a distinct difference from the results for the mutants with C-terminal truncations.
Effect of C-terminal Truncation on the Structure of θ-Toxin-The defects in the cholesterol binding activities of Δ470 and Δ452 can be attributed to either the deletion of cholesterol-binding sites or conformational changes around the binding sites. To assess these possibilities, we first examined the susceptibility of the mutant toxins to a protease (Fig. 5). Digestion of wild type θ-toxin and Δ471 by subtilisin Carlsberg produced a distinctive 39-kDa fragment assigned as the C-terminal fragment (28); a smaller amount of this fragment was detected when Δ470 was digested. In contrast, Δ452 was digested over time into undetectable pieces showing no distinctive bands. Trypsin digestion also produced proteolytic fragments of 28 and 25 kDa (18) from Δ471 and Δ470, but not from Δ452 (data not shown). The results indicate that the secondary or tertiary structure of Δ452 has been changed by C-terminal truncation of 21 amino acid residues.
Because six out of the seven tryptophan residues in θ-toxin are located in the C-terminal region (see Fig. 1), it is reasonable to measure tryptophan fluorescence in order to monitor the conformational alterations of θ-toxin induced by truncation. When the toxins were excited at 295 nm, no differences in the peak emission wavelength at 338 nm were detected between the wild type and the two mutants Δ471 and Δ470 (Table II), showing that the environmental changes around the Trp residues are not significant in those two mutants. However, environmental changes around some fluorophores other than tryptophan appear to have occurred, since the mutants exhibited a red shift in the maximal emission wavelength when excited at 280 nm (Table II). On the other hand, a distinctive red shift of the maximal emission wavelength was observed for Δ452 as compared with the wild type toxin (Table II), indicating that the environment of the tryptophan residues in this mutant is more hydrophilic than in the wild type. Simultaneously, the intensity of the tryptophan fluorescence in Δ452 excited at 295 nm was enhanced to 3.2 times that of the wild type θ-toxin. The results suggest that the inactive mutant Δ452 has a significant alteration in its tertiary structure around tryptophan residues, and that this leads to the loss in hemolytic activity.
When θ-toxin interacts with cholesterol on dioleoylphosphatidylcholine/cholesterol liposomes, there is an increase in the intensity of the tryptophan fluorescence (19). The two truncated toxins, Δ471 and Δ470, also showed increases in the intensity when incubated with dioleoylphosphatidylcholine/cholesterol liposomes. In contrast, no enhancement of fluorescence intensity was detected for Δ452 (data not shown). Therefore, this mutant lacks an appropriate structure for interaction with cholesterol in membranes.
In order to determine whether the deletion of C-terminal amino acids affects the secondary structure of the toxin, far-ultraviolet CD spectra were measured. As shown in Fig. 6a, wild type θ-toxin has a β-sheet-rich structure, and the spectra of Δ471 and Δ470 closely resemble that of the wild type toxin. On the other hand, a drastic difference was detected in the spectrum of Δ452 as compared with the wild type toxin. A significant increase in negative ellipticity was observed at 208 nm and shorter wavelengths. The CD difference spectrum obtained by subtracting the wild type spectrum from the Δ452 spectrum exhibited a deep minimum at 200 nm or a shorter wavelength and a shoulder at around 225 nm. This difference spectrum resembles that usually taken to indicate an unfolded conformation (30). This observation suggests a large disorder in the secondary structures of Δ452.
Effect of C-terminal Truncation on in Vitro Refolding-The structural analysis suggests that several amino acids at the C terminus play essential roles in in vivo protein folding and/or the maintenance of protein conformation. We carried out in vitro refolding experiments on the truncated mutants to define the function of the C-terminal amino acids during folding. Wild type θ-toxin and mutant Δ471 (truncated by two amino acids) were denatured in 6 M GdnHCl, renatured by dialysis, and their hemolytic activities were measured. As shown in Fig. 7a (upper part), wild type θ-toxin recovered 81% of full hemolytic activity while Δ471 displayed only 13% recovery, even though Δ471 has an activity comparable to the wild type before denaturation. The refolded Δ471 hardly bound to sheep erythrocytes as shown in Fig. 7a (lower part), showing a good correlation with relative hemolytic activity. To investigate whether the refolded Δ471 recognizes cholesterol, the cholesterol binding activity was measured by ELISA. The refolded Δ471 shows much less cholesterol binding activity than native Δ471, while the activity of the wild type toxin is not changed by the denaturation-refolding treatment (Fig. 7b). Although we could not judge whether all the refolded Δ471 molecules have lower binding affinities than in the native state or whether a small population of Δ471 refolds to the native form with full activity, it is clear that Δ471 easily loses its ability to bind cholesterol during the denaturation-renaturation process. A decrease in the cholesterol binding activity was also observed after denaturation-refolding of Δ470 (data not shown).
To rule out the possibility that there might be a minor contaminating protease that cleaves the Δ471 protein during the refolding treatment and causes it to lose activity, the relative molecular masses of native and refolded Δ471 were determined by ESI-MS (Table I). The relative molecular masses determined for the refolded Δ471 and native Δ471 are the same and within the range of the predicted one (Table I), indicating that no proteolytic cleavage occurs during the refolding process. Wild type toxin also maintains its intact size during the refolding treatment, as shown by the relative molecular masses before and after treatment (Table I). These results indicate that the loss of Δ471 activity after refolding is not caused by the action of a protease. The above results show that even just two amino acid residues at the C terminus are involved in the correct folding of θ-toxin. To assess whether the inactivation of Δ471 by denaturation-refolding is accompanied by a conformational alteration, the structural properties of wild type and Δ471 after denaturation-refolding treatment were studied by CD and fluorescence analyses. Compared with native Δ471, the far-ultraviolet CD spectrum of refolded Δ471 shows a slight alteration in the secondary structure (Fig. 6b), a 3% decrease in β-sheet content and a concomitant increase in random coil in the refolded Δ471 as estimated by the CONTIN algorithm (31). In the case of the wild type θ-toxin, no differences were observed in the CD spectra before and after refolding (data not shown). In the maximal emission wavelength of the fluorescence spectrum, no significant changes were observed in both wild type and Δ471. These results clearly show that the secondary and tertiary structural changes in the refolded Δ471 are small compared with those observed for Δ452 (Fig. 6a and Table II), suggesting that the changes in the refolded Δ471 occur in a limited region of the toxin molecule.

FIG. 6. Far-ultraviolet CD spectra of wild type and truncated mutants. a, spectra for the wild type and three truncated toxins were measured in a 5-mm pathlength cuvette at room temperature. Samples were prepared at a toxin concentration of 30 μg/ml in 20 mM phosphate buffer, pH 7.0, containing 150 mM NaCl. 1, wild type (solid line); 2, Δ471 (dotted line); 3, Δ470 (long dashed line); 4, Δ452 (dot-dashed line). b, comparison of the far-ultraviolet CD spectra of Δ471 before and after denaturation-refolding, performed in a 1-mm pathlength cuvette at room temperature. Native Δ471 (1, dot-dashed line) and Δ471 after denaturation-refolding (2, long dashed line) were prepared at a toxin concentration of 150 μg/ml in 10 mM phosphate buffer, pH 7.0.

TABLE II. Maximal emission wavelengths and the relative intensity of wild type and truncated toxins. The fluorescence measurements of wild type and truncated toxins were carried out at excitation wavelengths (λex) of 280 and 295 nm. The data represent mean ± S.E. for three independent experiments. Maximal emission wavelengths are displayed in nanometers (nm), and the maximal intensity of each mutant is shown relative to the intensity of the wild type toxin.
DISCUSSION
The crystallographic study of θ-toxin showed that domain 4, the C-terminal domain supposed to contain the cholesterol-binding region, has nine β-strands folded into a compact β-sheet sandwich (12). Two β-strands in antiparallel orientation are located within the C-terminal 20 amino residues and form a part of one β-sheet (Fig. 1). In this study, focusing on the two C-terminal β-strands, we constructed several C-terminally truncated θ-toxin mutants to investigate how the C-terminal amino acids contribute to the folding of the protein and its toxic action.
We first demonstrated that amino acids in the C-terminal β-strand play an important role in the correct folding of the toxin. When Δ471 was refolded after denaturation in 6 M GdnHCl, it lost its membrane binding and hemolytic activities with the reduction in cholesterol binding activity (Fig. 7), indicating the importance of the two C-terminal amino acids for correct folding in vitro into the conformation required for cholesterol binding. However, Δ471 showed essentially the same hemolytic activity and secondary structure as wild type θ-toxin. This indicates that the mutant folds into the native conformation in vivo. Taking the difference between in vivo and in vitro folding into consideration, chaperone-like molecules probably help to achieve correct folding in vivo (32,33). As shown for Δ470, a three-amino acid truncation affects folding both in vivo and in vitro.
The truncation of five amino acids from the C terminus leads to a further truncation of the protein in host E. coli, indicating that the C-terminal β-strand protects the protein against proteolytic cleavage in host cells. For toxin production, we used the E. coli B strain BL21(DE3) as a host, because it lacks both the lon and ompT proteases. Some other minor protease(s) in E. coli may contribute to cleaving the product during synthesis or secretion into the periplasm (34). The product, Δ452, lacks the two β-strands at the C terminus and completely loses its cell binding and hemolytic activities due to its inability to recognize cholesterol (Fig. 4). It is likely that the molecular structure required for the specific binding of cholesterol molecules is absent or not correctly organized in Δ452. Spectroscopic data indicate its partially unfolded secondary structure and an environmental alteration around tryptophan residues to a more hydrophilic and unrestricted state (Fig. 6a and Table II). Since the elimination of the two β-strands from the C-terminal end causes this remarkable disorder in structure, the two C-terminal β-strands are suggested to play key roles in constructing the overall structure of the toxin.
There are two distinct steps in cell binding by θ-toxin, cholesterol recognition and membrane insertion. In this study we demonstrated that truncation of the C terminus abolishes cholesterol-recognition ability, resulting in the loss of cell binding activity. This is in contrast to mutants with Trp to Phe substitutions within the tryptophan-rich consensus sequence (residues 430-440), which show a loss in cell binding activity despite their ability to bind cholesterol (Fig. 4b and Ref. 13). We previously suggested that the tryptophan-substituted mutants have some deficiency in membrane insertion activity (13,20) that could cause them to lose cell binding activity. The tryptophan-rich consensus sequence is located in close proximity to one of the two β-sheets in domain 4, and distant from the other β-sheet to which the C-terminal β-strand belongs (Fig. 1b). The crystallographic data indicate that there are some amino residues in the proximal β-sheet that are possible quenchers of the fluorescence of Trp-436 and Trp-439, Trps within the consensus sequence. Thus, a change in tryptophan fluorescence intensity could be a sensitive marker for a change in the three-dimensional arrangement among these Trp residues and quenchers. Since neither Δ470 nor refolded Δ471 shows a significant change in tryptophan fluorescence compared with the wild type toxin, the microenvironment around the Trp residues remains intact in these mutants. This strongly suggests that the C-terminal truncation does not affect the structural features around the tryptophan-rich consensus sequence despite its significant effect on cholesterol binding activity. Probably the site of cholesterol binding is in a different region of domain 4 from that of membrane insertion.

FIG. 7. The effect of denaturation-refolding treatment on wild type θ-toxin and Δ471. a, hemolytic and binding activities to sheep erythrocytes determined before and after denaturation-refolding. Hemolytic activities (shown as bars) of the refolded wild type and mutant toxins are expressed relative to the corresponding native forms. Binding of native and refolded toxins to sheep erythrocytes was examined as described under "Experimental Procedures." After centrifugation, the resultant supernatant (S), pellet (P), and total (T) fractions were analyzed by SDS-PAGE and immunoblotting. SRBC, sheep red blood cells. b, cholesterol binding activities of native wild type (open circles), refolded wild type (closed circles), native Δ471 (open triangles), and refolded Δ471 (closed triangles). Each activity was determined by ELISA and is expressed relative to the activity of the native wild type toxin at 10,000 pmol of cholesterol.
Since the molecular mass of Δ471 remains unchanged by the unfolding-refolding treatment (Table I), the activity loss in cholesterol binding should be ascribed to a conformational change. It is likely that the molecular structure required for the specific binding of cholesterol molecules is not correctly organized after the treatment. An approximately 3% decrease in β-sheet content in Δ471 was detected after treatment. No change occurs in tryptophan fluorescence as discussed above. This implies that the change occurs within a limited region upon refolding of Δ471 and that this local change in structure directly affects the cholesterol-binding site. Since the C-terminal β-strand interacts directly with the next β-strand (residues 453-460) by hydrogen bonding to form part of an antiparallel β-sheet (Fig. 1 and Ref. 12), it is likely that the decrease in β-sheet content produced by the treatment occurs near these strands. It has been reported that C-terminal truncation of pneumolysin, another cholesterol-binding cytolysin, causes a loss of cell binding activity (17). It would be interesting to know whether this loss in the cell binding activity of pneumolysin is due to the loss of cholesterol binding activity, although conformational studies and molecular mass determination of the truncated species of pneumolysin would be required to draw a conclusion.
There have been several reports showing that C-terminal residues are important for the correct folding and maintenance of native protein conformations (35,36). Among them, θ-toxin is a distinct example since the deletion of only two amino acids from the C terminus out of a total of 472 residues seriously affects its folding and the maintenance of the functional conformation. We have reported chemically modified θ-toxin as a new probe for cholesterol (37). If the relationships between binding activity and the conformation of the C-terminal region are clarified and the minimum binding unit is identified, further design of useful probes can be realized.
Knowledge production in Iranian cardiovascular research centers: A way to reduce the burden of disease
BACKGROUND According to the World Health Organization (WHO), non-communicable diseases (NCDs) including cardiovascular diseases (CVDs) will be responsible for almost 70% of all deaths in 2020. Therefore, knowledge production in research centers to find suitable ways to prevent, diagnose, and effectively treat these diseases is mandatory. The present study was carried out with the aim of examining the results of studies performed over three years in Iranian cardiovascular research centers. METHODS Iranian cardiovascular research centers with more than three years of activity from 2015 to 2017 were evaluated. Research output, international collaboration, high quality publication, total citation, and average h-index (H) were evaluated and scored. RESULTS 23 cardiovascular diseases research centers (CVDRCs) related to 15 universities of medical sciences (UMSs) were evaluated. The mean and standard deviation (SD) of the age of research activities in the CVDRCs was 11.47 ± 8.60 years. Based on the research ranking, the first three centers were Isfahan Cardiovascular Research Center, Tehran Heart Center, and Shaheed Rajaei Cardiovascular Medical and Research Center, respectively, all of which have an independent budget line. However, there is no CVD research center in some provinces such as Zanjan, Kurdistan, Lorestan, and Arak, Iran. CONCLUSION Mission-oriented research activities in Iranian cardiovascular research centers may be effective in reducing the burden of CVDs. Moreover, establishment of CVD research centers in high risk areas may be useful.
Introduction
According to the World Health Organization (WHO), non-communicable diseases (NCDs) including cardiovascular diseases (CVDs) will be responsible for almost 70% of all deaths in 2020. 1,2 Based on the third Sustainable Development Goal, among NCDs, CVDs are responsible for one-third of mortalities. 3 Additionally, it has been estimated that 30.5% of deaths in the world will be caused by CVDs by 2030. 4 In Iran, based on a cohort study in Isfahan in 2013, the CVD mortality rate was estimated to be 331 and 203 per 100000 person-years in men and women, respectively. 5 Moreover, in 2016, years of potential life lost (YPLL) [95% confidence interval (CI)] for CVDs was 22.7 (19.8-25.6). 6 Despite the numerous efforts made to control CVDs worldwide, these diseases continue to rise. 7 This is partly due to the changing pattern of disease from communicable to non-communicable diseases and injuries, especially in developing countries; besides, it may be due to the fact that secondary and tertiary prevention of CVDs is very expensive. 8 So, it is necessary to find low-cost and effective research-based methods to control this problem, and research centers are important for conducting the related projects. Now, in Iran, there are more than 800 research centers in different fields, only 27 of which are active in the field of CVDs. 9 The first Iranian CVD research center was established in 1974 in Tehran. The distribution of CVDs in Iranian provinces shows that in many provinces, CVDs have increased by 13%, and heart diseases due to hypertension have increased by 38%; these have been considered the two major causes of early deaths. 10 In 2007, hypertension was the third most common cause of deaths and disabilities in Iran; it has risen by 24% over the past 10 years, and in 2017, it was ranked first. 11 The key question is "what is the role of cardiovascular research centers in CVD prevention or hypertension control in Iran"? Based on a Global Burden of Disease (GBD) study in 2011, the prevalence of hypertension was more than 28% in 15 provinces in Iran. 12 Meanwhile, numerous research projects have been designed and implemented to prevent CVDs, especially hypertension. Accordingly, the present study was conducted to evaluate knowledge production based on these studies in Iranian CVD research centers.
Materials and Methods
The current study was implemented in Iran, where there are 56 universities of medical sciences (UMSs) and more than 800 medical research centers (MRCs) in clinical and biomedical fields. In the clinical field, there are two main subgroups, communicable diseases (CDs) and NCDs. Based on another categorization, all of the approved MRCs are divided into two groups according to the budget line assigned (independent, dependent). All MRCs are evaluated after at least one year of establishment by the Iranian Ministry of Health and Medical Education (MOHME). This was a cross-sectional study in which all cardiovascular disease research centers in Iran were evaluated over the period 2015 to 2017.
The study inclusion criteria were: a) having an agreement in principle from legal and competent authorities; and b) having more than three years of research activity. The research indicators were designed based on peers' opinions in an expert panel. Representatives from research centers and UMSs, as well as three scientometric experts and the research team, were the members of this panel. It is worth noting that the indicators are revised and developed annually based on health policies, considering the opinions of the stakeholders. Research indicators were classified into five main groups and their subgroups as follows: a) research output. The steps of the evaluation process were: i) extracting the scientific documents of each research center based on its affiliation in the ISI, PubMed, and Scopus databases; ii) designing an EndNote database for each MRC; iii) eliminating data overlapping via Access software; and iv) scoring all research indicators.
The scoring system for data weighting was designed based on peer opinions through the expert panel. The scores for each published article indexed in ISI, PubMed, and Scopus were 2, 1.5, and 1, respectively. Each book had 2 points, and the score of each conference paper was 0.5.
The weight of each main group was determined based on its importance; the weights of research output, collaboration, qualification, citation, and h-index were 250, 150, 200, 400, and 100, respectively, so the maximum score was 1050 (Table 1). The data obtained were analyzed using SPSS software (version 19.0, SPSS Inc., Chicago, IL, USA), and P values < 0.050 were considered statistically significant. Descriptive analysis and some tests such as the independent t-test were used for data reporting.
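For illustration, the sketch below encodes the per-item scores and group weights given above; since the exact aggregation rule is not specified in the text, the normalization of each group's performance to a ratio in [0, 1] is an assumption made for the example.

```python
# Illustrative sketch of the scoring scheme described above. Item scores
# (ISI 2, PubMed 1.5, Scopus 1, book 2, conference paper 0.5) and group
# weights (output 250, collaboration 150, qualification 200, citation 400,
# h-index 100; maximum 1050) are from the text; the per-group ratio
# normalization is an assumption.
ITEM_SCORES = {"isi": 2.0, "pubmed": 1.5, "scopus": 1.0, "book": 2.0, "conference": 0.5}
GROUP_WEIGHTS = {"output": 250, "collaboration": 150, "qualification": 200,
                 "citation": 400, "h_index": 100}

def output_score(counts):
    """Raw research-output score from item counts, e.g. {'isi': 30, 'book': 2}."""
    return sum(ITEM_SCORES[k] * n for k, n in counts.items())

def total_score(group_ratios):
    """Weighted total, assuming each group's performance is a ratio in [0, 1]."""
    return sum(GROUP_WEIGHTS[g] * r for g, r in group_ratios.items())

example = {"output": 0.8, "collaboration": 0.4, "qualification": 0.5,
           "citation": 0.6, "h_index": 0.7}
print(output_score({"isi": 30, "pubmed": 10, "scopus": 15, "conference": 8}))
print(f"total = {total_score(example):.0f} / 1050")
```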
In this study, all ethical considerations were met.
Results
In this study, out of the total of 27 cardiovascular diseases research centers (CVDRCs) in Iran, 23 centers with more than three years of activity, related to 15 UMSs, were included. Table 2 demonstrates the name, number, and budget line of the research centers related to UMSs. Based on the statistical analysis, there was a significant correlation between an independent budget line and the score obtained (P < 0.050).
The mean and standard deviation (SD) of the age of research activities in the CVDRCs were 11.47 ± 8.60 years, with minimum and maximum ages of 5 and 45 years, respectively.
Based on the results, there was a significant association between the years of activity and the total research score (P < 0.050).
The number of published articles indexed in ISI, PubMed, and Scopus by CVDRCs during 2014 to 2017 was estimated to be 1851, about 50% of which were published in research centers affiliated to UMSs with domestic cooperation. Almost 12% of the articles were published in the best quartile journals in each subject, and in more than 16% of cases, the articles had at least one foreign counterpart. The highest international cooperation was related to the cardiovascular research centers in Tabriz and Isfahan. Tehran Heart Center, with 46 high quality publications, was the first center among the CVDRCs. Moreover, the cardiovascular research center in Isfahan had the highest number of citations and h-index. After scoring and weighing, the first three centers were Isfahan Cardiovascular Research Center, Tehran Heart Center, and Shaheed Rajaei Cardiovascular Medical and Research Center, respectively (Table 3).
Discussion
Based on the review results, there were 23 CVD research centers with more than three years of activity affiliated to 15 UMSs in 10 provinces consisting of Tehran, Golestan, Kerman, Isfahan, Mazandaran, Shiraz, Tabriz, Ahvaz, Birjand, and Hormozgan.
The total number of published articles indexed in ISI, PubMed, and Scopus was estimated 1851. Almost 12% of articles were published in the best quartile journals. 50% and 16% of cases were performed with domestic and foreign cooperation, respectively.
Comparing the results of the evaluation of research activities in CVDRCs shows that there are more knowledge production and higher research scores in research centers with an independent budget line, which may be due to attracting more scholars, other resources, equipment, and so on. 13 The maximum research score in this evaluation was related to Isfahan Cardiovascular Research Center. This center not only has more qualified published papers and citations, but many valuable projects such as the Isfahan Healthy Heart Programme (IHHP) were also designed and implemented by its researchers. 14 The geographical distribution of cardiovascular research centers indicates that the establishment of these centers has not been based entirely on the burden of CVDs. For example, based on Iranian surveys on NCD risk factors in 2004 and 2011, the prevalence of hypertension in 17 provinces was more than 28%, while there was a CVDRC in only five of them, including Golestan, Mazandaran, Tabriz, Ahvaz, and Hormozgan (Figure 1). 9,15

Figure 1. The prevalence of hypertension in different provinces in Iran. 15

Additionally, in some provinces such as Zanjan, Kurdistan, Lorestan, Arak, and Hormozgan, Iran, despite many efforts, treatment coverage has not been effective. It seems that, in the research field, it is necessary for cardiovascular researchers to identify barriers to effective control and therapies through scientific methods. 15 Based on a study by Adedapo, in areas with a high prevalence of CVDs, the number of related studies and publications is lower compared to other areas. 16 Considering that the CVDRC in Hormozgan Province had the lowest score in the research evaluation, it is necessary to make much more effort in community heart health promotion there. Moreover, establishing mission-oriented cardiovascular research centers in Zanjan, Kurdistan, Lorestan, and Arak, Iran, and implementing applied research in heart health promotion can be useful. 17 In Tehran, the capital of Iran, there are nine CVDRCs with more than three years of activity. These RCs are affiliated to Tehran, Iran, Baqiyatallah, and Shahid Beheshti UMSs. Tehran is one of the largest metropolises in Iran with a high air pollution level, which is an alarm for an increased CVD incidence rate and, consequently, a high mortality rate. 18 Based on the Tehran lipid and glucose study, the prevalence of coronary heart diseases (CHDs) and their associated risk factors in adult residents of Tehran is high, and the age-adjusted prevalence of CHD is 21.8% (22.3% and 18.8% in women and men, respectively). 19 Due to the higher prevalence of hypertension in the north, west, and south of Iran, it is mandatory to provide the licenses necessary for the establishment of cardiovascular research centers in these areas. It is obvious that designing strategic plans and determining main missions for the prevention, diagnosis, effective treatment, and rehabilitation of patients with CVDs through research projects by the CVDRCs can decrease the burden of the disease.
This study has two strengths. First, it evaluates knowledge production in the field of CVDs in order to make appropriate policies to reduce one of the most important disease burden in Iran. Second, based on geographic distribution of CVDRCs, it specified the provinces needing the establishment of such centers in them.
Failure to address the outcomes and impacts in this evaluation process is one of the limitations of the present study. A peer-based evaluation of research activities in these centers seems to lead us to more equitable judgments.
Conclusion
Mission oriented research activities in Iranian cardiovascular research centers may be effective to reduce the burden of CVDs. Furthermore, it is necessary to carry out a quantitative and qualitative evaluation for an accurate and comprehensive assessment.
|
2020-09-10T23:39:08.341Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "b9fd2cf9157ae2c35fd367eba0d20fb90874c484",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bab276be911caca75d26006ee60a67cb0850b46c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268653759
|
pes2o/s2orc
|
v3-fos-license
|
Leah’s ‘soft’ eyes: Unveiling envy and the evil eye in Genesis 29:17
the article is to demonstrate that these ‘soft’ eyes are not mere adornments of a character but essential threads woven into the fabric of ancient belief systems that interlace concepts of beauty, fertility, and the malevolent gaze. In doing so, light is shed on the narrative, its characters
Introduction
The Hebrew Bible, a repository of religious, literary, and historical depth, brims with passages that beckon scholars and readers to explore their intricacies.Amid this vast tapestry of narratives, one passage that has intrigued interpreters for centuries is the seemingly unassuming description of Leah as having 'soft' eyes רכות( )עינים in Genesis 29:17.While this brief characterisation may, at first glance, appear peripheral in the grand mosaic of biblical tales, closer investigation reveals that it has a deceptively rich node of significance.
Leah's 'soft' eyes, like a riddle, hint at hidden layers of meaning.Rather than symbolising fragility, they seem to conceal paradoxical qualities of envy and malevolence that may have implications extending far beyond the individual.Within the broader context of the malevolent gaze belief complex, these 'soft' eyes evoke questions about their influence on the fertility and destiny of Leah's sister, Rachel.
This article embarks on a journey to unlock the latent treasures within this enigma.The goal of the article is to demonstrate that these 'soft' eyes are not mere adornments of a character but essential threads woven into the fabric of ancient belief systems that interlace concepts of beauty, fertility, and the malevolent gaze.In doing so, light is shed on the narrative, its characters, and the enigmatic interplay of ancient beliefs that continue to shape our understanding of this text.
A social-scientific approach
Together with Mendenhall's (1974) The Tenth Generation and Van Seters' Abraham in History andTradition (1975), Thompson's (1974) The Historicity of the Patriarchal Narratives put a decisive end to earlier attempts to locate the ancestral figures in Genesis in a historical context.Before these works, scholars commonly assumed that the patriarchal narratives originated in the second millennium BCE and that they reflected social customs of that period rather than those of later Israel.Although none of the events as recounted in the traditions can be assumed to be historical, the narratives still make for interesting reading and lend themselves to interpretation as literature and folklore (Frazer 1919;Van Dyk 1994).
The social scientific approach to the study of the Hebrew Bible has proven invaluable for an understanding of biblical material and for avoiding anachronistic and ethnocentric misinterpretations (Sneed 2008).The aim of this interdisciplinary approach is to study biblical materials as a reflection of their cultural setting.Utilising methods and theories from sociology, anthropology and psychology, the meaning and socio-cultural background of the text are more fully illuminated.
The seemingly innocuous description of Leah as having 'soft' eyes in Genesis 29:17 has captivated scholars and readers for centuries.This article advances an ironic interpretation, suggesting that Leah's 'soft' eyes were not a sign of weakness but, rather, an indication of envy and malevolence, potentially contributing to fertility issues faced by her sister Rachel in terms of the ancient Near Eastern evil eye belief complex.In this context, the article delves into ancient belief systems that entwined beauty, fertility, and the malevolent gaze.
One significant aspect of the social scientific study of the Hebrew Bible involves examining the historical and cultural background of the text (Matthews & Benjamin 1993;Smith 2002).Scholars in this field often explore archaeological evidence, ancient Near Eastern texts, and comparative studies to reconstruct the social structures, religious practices, and political dynamics that shaped the world of the biblical writers.Another key focus of social scientific analysis is the examination of power dynamics, social inequalities, gender roles, and the representation of men and women in biblical narratives (Exum 1996).
Following in this tradition, this article seeks to contribute to an understanding of the complex sisterly relationship between Rachel and Leah by reading the text within its historical, cultural, and social context.Examining the gender dynamics, broader societal issues, and cultural belief systems at play contributes to a more comprehensive and nuanced understanding of not only the relationship between the sisters but also more specifically, the enigmatic description of Leah's eyes as 'soft'.
Leah and Rachel: A complex sisterly relationship
The narrative surrounding Leah and Rachel is a fascinating depiction of complex sibling relationships and rivalry (Dresner 1989).The text presents Leah as the older sister, while Rachel is described as more beautiful (Gn 29:17).The rivalry between the two sisters goes beyond mere competition for Jacob's love; it extends to their longing for children, especially in a culture where fertility and offspring were highly valued.Leah's fertility is portrayed as a blessing from the Lord, and she conceives multiple times, while Rachel initially struggles to have children.In an ironic twist, the one described as 'soft-eyed' becomes the mother of several sons, while the 'beautiful of form and appearance' Rachel faces difficulties.
It is important to notice that within this narrative, Jacob, their husband, plays a central role, and his feelings towards each of his wives contribute to the complex dynamics.While Jacob loves Rachel more, Leah's fertility intensifies the rivalry, and Rachel's desire for mandrakes, believed to enhance fertility, exemplifies her desperation to overcome her barrenness.According to Olszewska (2018:353), the mention of mandrakes (Mandragora officinarum) in the narrative occurs in the context of fertility.In Genesis 30:14, Reuben, Leah's son, finds mandrakes in the field and brings them to his mother.Rachel, upon seeing them, asks Leah for some.Mandrakes were considered to be an aphrodisiac and were believed to enhance fertility and conception in ancient cultures.In this context, Rachel's interest in mandrakes reflects her desire to overcome her infertility.
As the biblical narrative progresses, it becomes evident that Leah and Rachel's relationship undergoes transformation.After Leah's initial fertility, Rachel also conceives.This leads to a more equitable distribution of sons between the two sisters.The transformation in their relationship suggests that the tension and rivalry between Leah and Rachel may have been fuelled by the circumstances and cultural beliefs of the time.As their motherhood journeys unfold, the importance of bearing children becomes a common bond, gradually diminishing the initial rivalry.The role of Jacob as the husband and father of their children also influences their relationship.Jacob's love for Rachel initially intensifies the rivalry, but as they both bear his children, the focus shifts from the competition between sisters to the broader context of family and motherhood.The initial rivalry between Leah and Rachel provides the ideal background for an interpretation of Leah's 'soft' eyes as juxtaposed with Rachel's beauty in Genesis 29:17.
Leah's 'soft' eyes: A history of interpretation
Leah's description as having 'soft' eyes in Genesis 29:17 has led to various interpretations, some of which approach it from a medical perspective.Kotelmann (1910) discusses the symptoms of conjunctivitis simplex and suggests that Leah's eyes may have been affected by a chronic conjunctival inflammation, including symptoms such as swelling, mucus secretion, and reduced tolerance to light.Gordon (1941) presents a similar perspective, indicating that Leah's eyes might have been red, swollen, and possibly lacking eyelashes because of a condition such as blepharitis ciliaris.These interpretations, considering the physical state of Leah's eyes from a medical viewpoint, fail to shed light on the juxtaposition of Leah's eyes with the beauty, rather than the health, of Rachel.
The notion that Leah's 'soft' eyes could represent a positive trait is discussed in several interpretations.Some scholars suggest that 'soft' might imply tenderness and gentleness in Leah's nature, highlighting a positive aspect of her character (Einzig 2013).Jensen (2018) argues that 'soft eyes' should be understood as a reflection of Leah's overall appearance, emphasising her delicate and appealing nature, especially when compared to her sister Rachel (cf.Arnold 2009).
Leah's eyes have also been interpreted negatively.Some authors suggest that 'soft' may imply weakness, dullness, or unattractiveness (Fruchtenbaum 2009).According to Skinner (1910), Leah's eyes lacked the lustrous brilliancy associated with female beauty in the ancient Near East, which may have contributed to her being considered less desirable in comparison to her sister.In this view, Leah's eyes had a negative impact on her overall appearance and desirability as a wife.However, it remains unclear why her unattractive eyes, specifically, would be contrasted with Rachel's beauty in form and appearance.
An alternative perspective on Leah's 'soft' eyes is that they may serve a proleptic function, foreshadowing her later deception in marriage.In this approach, the 'soft' eyes description functions as a narrative device, anticipating her role in a later episode where she is switched with her sister Rachel to avoid remaining unmarried because of her lack of beauty (Marcus 2021).While the suggestion of a connection between the initial description of Leah's eyes and the subsequent events in the biblical narrative may be valid, its possible connection with Rachel's fertility problems remains unexplored.
Leah's 'soft' eyes are often discussed in comparison to her sister Rachel's beauty.The literature underscores the importance of this contrast in highlighting the differences between the two sisters (Longman & Garland 2008).This interpretation, while not assigning a clear positive or negative value to Leah's eyes, emphasises the narrative role of the eyes in distinguishing the two sisters.It remains unclear, however, why the author would contrast Leah's eyes, specifically, with Rachel's beauty in form and appearance.
Role and character interpretations focus on how Leah's eye condition may have influenced her character, responsibilities, and emotional state.Some authors suggest that Leah's eyes were connected to her emotional state, possibly because of crying (Seelenfreund & Schneider 1997).Her eyes are seen as possibly related to her emotional distress, leading to a significant change in her life events, such as her marriage to Jacob.This imaginary approach, too, fails to elucidate the apposition of Leah's eyes with the beauty of Rachel.It may be that this enigmatic juxtaposition is best understood against the background of the evil eye belief system in the ancient Near East.
The beauty-fertility-evil eye nexus in ancient belief systems
The belief in the evil eye, one of the oldest and most widespread belief systems in human history, is shrouded in antiquity.Its origins remain a subject of debate, but references to the evil eye abound in various ancient cultures, including Sumerian and Akkadian texts.This belief revolves around the idea that individuals, often women, possess the power to inflict harm through malevolent glances (Seligmann 1910).
One particularly intriguing aspect of the evil eye belief is its connection to beauty.While beauty is universally celebrated, it paradoxically invites the attention of envious or malevolent gazes.To counteract this risk, protective measures such as amulets and gestures have been employed.The blue eye amulet, a well-known protective talisman in Mediterranean regions, is worn to ward off the evil eye.Exceptional beauty is believed to be particularly susceptible to malevolent gazes (Elworthy 1895).The consequences of the evil eye's influence on beauty can be significant, potentially affecting an individual's appearance and overall well-being.Vulnerability to the evil eye is often linked to the idea that exceptional beauty can provoke jealousy, thereby attracting malevolence.
Fertility, particularly concerning infants and mothers, emerges as a primary target of the malevolent influence of the evil eye.In ancient cultures, a strong belief persisted in the power of the evil eye to harm newborns, exacerbated by the high infant mortality rates prevalent in antiquity (Elliott 2015).As a response to this perceived threat, protective rituals and practices were enacted in societies where belief in the evil eye held sway.Newborns were frequently shielded from the gaze of others for a specific period, and protective talismans were employed to avert harm.Mothers, too, were safeguarded from the potentially detrimental influence of the evil eye through a variety of protective gestures and items, with these measures sometimes extending for weeks or even months after childbirth.
The beauty-fertility-evil eye nexus, deeply embedded in human culture, presents an intricate and enduring aspect of ancient belief systems.The vulnerability of beauty and fertility to malevolent gazes consistently recurs across a spectrum of ancient cultures, necessitating protective measures to fend off the malevolent influence (Lykiardopoulos 1976).This historical backdrop sets the stage for an interpretation of the author's ironic description of Leah's eyes as 'soft'.
Leah's 'soft' eyes: An ironic interpretation
The Hebrew Bible is a treasure trove of literary complexity and depth, with irony serving as a powerful rhetorical tool (Häner & Miller 2023).Irony, characterised by a gap between appearance and reality, often reveals profound insights in biblical narratives (Good 1965).For example, it has been suggested that the author of Qohelet maintains an ironic tone throughout the book, reflecting on the human condition, the impact of God, and death.Qohelet's ironic stance towards traditional wisdom is evident, with the author engaging in Socratic-like reasoning (Spangenberg 1996).Sharp (2008) also emphasises the pervasive use of ambiguity and irony in sacred texts, stressing the importance of authorial intention in understanding irony.She underscores the influence of reader assumptions and interpretive communities, acknowledging that irony blurs the lines between what is said and unsaid, challenging the reader's perception.The significance of context in identifying irony is also highlighted.
The irony implied by the 'soft' eyes of Leah, is best understood against the background of the extramission theory of vision in the ancient Near Eastern and circum-Mediterranean world.Contrary to the current scientific intromission theory, in which the eye is a passive organ and recipient of light and sensation, the extramission theory regarded the eye as an active organ.It was thought to project particles of energy or light (Elliott 2015).In ancient texts, the eyes of humans and gods are often described as 'fiery', 'gleaming', and 'flashing', projecting particles of energy similar to the rays of the sun or a source of light.Weakness and old age were associated with a dim light of the eyes, whereas health and strength were associated with a strong light of the eyes (cf.Gn 27:1; Pr 15:30).
http://www.hts.org.zaOpen Access In view of this cultural conceptualisation of vision, the irony of Leah's 'soft' eyes mentioned in the context of Rachel's beauty becomes clear.The adjective used to describe Leah's eyes, ,רך related to the verb ,רכך 'to be tender, weak, soft', is used to describe tenderness and imply weakness (Gn 18:7; 33:13; 2 Sm 3:39; Brown, Driver & Briggs 1996).Significantly, in Proverbs 15:1 it is contrasted with a harsh and painful )עצב( word: 'A soft )רך( answer turns away anger, but a harsh )עצב( word stirs up ire'.If the reference to Leah's eyes was indeed intended to be ironic, her eyes could therefore be interpreted as 'hard' and inflicting harm in the context of the ancient Near Eastern evil eye belief complex. As illustrated here, beauty was believed to attract the evil eye.Therefore, rather than being 'soft' and 'weak', they were 'strong' and 'harsh', contributing to Rachel's fertility problems as described in the ensuing narrative.Marcus (2021) may have been right in surmising that the description of Leah's eyes as soft was proleptic, but in an ironic way, foreshadowing Rachel's initial inability to conceive as a consequence of Leah's envy.This interpretation extends beyond a mere physical description.Leah's 'soft' eyes, symbolising envious and malevolent intentions, align with the ironic nature of the Hebrew Bible's narratives.The use of irony within biblical texts underscores the capacity of these narratives to contain multilayered meanings and subtlety that provoke thought.
Leah's fertility stands in stark contrast to Rachel's initial barrenness.Rachel's desire for mandrakes also supports the interpretation that she may have believed her fertility to have been caused by the evil eye, as in some cultures it is regarded as an apotropaic against the evil eye (Seligmann 1910).Leah's envy, although merely implied, fits this narrative by contributing to the belief that Rachel's beauty attracted misfortune.Moreover, as illustrated earlier, the notion of an evil eye being connected to infertility was not exclusive to this biblical narrative.In various ancient cultures, from the Mediterranean to the Near East, the belief in the evil eye's harmful effects on fertility was widespread.This shared cultural context reinforces the interpretive perspective that Leah's envious eyes had a perceived impact on Rachel's initial fertility struggles in the context of the ancient Near Eastern evil eye belief complex.
Conclusion
The narrative of Leah and Rachel serves as a compelling illustration of complex sibling relationships, marital dynamics, and the significance of fertility in the ancient world.The desire for children, the use of mandrakes as a fertility drug and apotropaic against the evil eye, and the evolving relationship between Leah and Rachel provide a rich context for understanding the subtleties of this narrative and the description of Leah's eyes, in particular.
Leah's 'soft' eyes in Genesis 29:17, described in a seemingly casual manner, carry deeper layers of meaning that reflect the multifaceted nature of the Hebrew Bible.By applying an ironic interpretation within the context of ancient belief systems surrounding beauty, fertility, and the evil eye, this article pointed to the perceived influence of Leah's envious eyes on Rachel's fertility.
|
2024-03-24T15:18:22.653Z
|
2024-03-22T00:00:00.000
|
{
"year": 2024,
"sha1": "e7af102f6f4b4337e8790894b335fc4bf31a79e6",
"oa_license": "CCBY",
"oa_url": "https://hts.org.za/index.php/hts/article/download/9536/26659",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d0f7d67ea0bd9d9a933e19dd6dd714b7b1dff8dd",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
}
|
123308239
|
pes2o/s2orc
|
v3-fos-license
|
Electron recombination in dense photonic, electronic and atomic environments
Free electrons can recombine with ions by either radiative, dielectronic or three-body recombination. In this contribution we discuss variants of these fundamental processes which can occur in dense photonic, electronic and atomic environments. First, dielectronic recombination is generalized to the case where two atomic centers participate in the process. In this situation, the incident electron is captured at one center with simultaneous excitation of a neighboring ion, atom or molecule which subsequently decays via photo-emission. Modifications of radiative recombination in the presence of a strong laser field are discussed afterward. Various relativistic effects, arising from a high energy of the incoming electron and its strong coupling to the intense laser field, are found to clearly manifest themselves in the photo-emission spectra. Finally, we consider three-body "recombination" (i.e. annihilation) of an electron and a positron in the presence of a spectator electron. The process leads to emission of just a single photon and can compete with the usual annihilation into two photons at very high electron densities.
Introduction
Recombination of free electrons with atomic or molecular ions is a fundamental quantum process of general interest to various fields of science, comprising atomic and molecular physics, plasma physics, and astrophysics [1,2,3]. Recombination into single atomic centers is known to proceed in three different ways: (i) The electron can be captured into a bound atomic state upon photoemission. This process, which represents the time-inverse of photo-ionization, is referred to as radiative recombination. (ii) For certain energies of the incident electron, the resonant process of dielectronic recombination may occur. Here the electron capture leads to the formation of an autoionizing state (time-reversed Auger decay), which afterwards stabilizes radiatively. (iii) At very high particle densities, three-body recombination dominates where the electron capture is rendered possible by transferring the excess energy to another free electron.
In the present paper we discuss variations of these well-studied recombination processes which may occur in dense environments. As we will show, when an electron recombines with an ion which is not isolated in space but in close vicinity to another atomic center, resonant channels exist which rely on interatomic electron-electron correlations. Due to its resonant nature, this two-center dielectronic recombination can be remarkably efficient at internuclear distances up to the nanometer range [4,5]. Characteristic effects also arise when electron-ion recombination takes place in the presence of an intense laser field, forming a dense background of low-frequency photons. While the total recombination cross section will remain essentially unchanged, the presence of the laser field can substantially modify the photo-emission spectra [6]. Besides, interparticle correlation effects may occur in very dense electron-positron plasmas, where they can enhance the relevance of some higher-order quantum electrodynamic reactions which are usually suppressed. As an example, we shall discuss the impact of a nearby spectator electron on the annihilation of an electron-positron pair [7]. A connection of this QED process with three-body recombination of electrons with ions will be drawn.
In order to put our studies into perspective it should be noted that the influence of the environment on microscopic processes has been under very active scrutiny in recent years. In particular, detailed investigations have been performed on electron transitions in systems of two (or more) atoms which are mediated by so-called interatomic Coulombic decay (see, e.g., [8,9]), where the electronic excitation energy of one atom is transferred to a nearby partner atom, leading to ionization of the latter. This two-center autoionization mechanism can strongly accelerate molecular deexcitation and relaxation processes, as was demonstrated experimentally in helium dimers and water molecules, for instance [10,11]. Similar interatomic correlations are also responsible for radiationless energy transfer processes in slow atomic collisions [12], cold quantum gases [13], and biomolecules [14]. The influence of background laser fields on scattering reactions and atomic processes has been studied in detail as well. Results have been obtained for laser-assisted scattering of electrons and x-rays on electrons, atoms and nuclei [15]. Also laser-assisted electron-ion recombination was studied in the regime of nonrelativistic parameters (see [16,17,18] and references therein). Superintense laser-matter interactions even allow for efficient generation of electron-positron samples [19], which offers future perspectives for laboratory studies on dense, relativistic e + e − plasmas. Therefore, the dynamics and evolution of these plasmas is currently being explored, including many-body correlation effects [20,21,22].
The paper is organized as follows. In the following Sec. 2 we discuss dielectronic recombination with participation of two atoms. Section 3 is devoted to radiative recombination of a relativistic electron with a highly-charged ion assisted by a strong laser field. A QED process analogous to three-body recombination is described in Sec. 4. The conclusions are presented in Sec. 5.
Atomic units (a.u.) are used throughout unless explicitly stated otherwise.
Dielectronic recombination involving two atomic centers
Dielectronic recombination can be generalized to the case where two atomic centers participate in the process [4,5]. In this situation, the incident electron is captured at one center with simultaneous excitation of a neighboring ion, atom or molecule which subsequently decays via photo-emission. The process relies on resonant electron-electron correlations between the two neighboring atomic centers and may be called two-center dielectronic recombination (2CDR). An illustration is given in Fig. 1. We note that this interatomic channel represents an additional pathway for recombination which interferes with the channel of radiative recombination. Let R denote the internuclear distance between the two atoms in Fig. 1. We shall assume that R is not too small such that the atoms keep their individuality, but not too large either so that the interaction between the electrons may be treated as instantaneous Coulombic, disregarding retardation effects. That is, a 0 ≪ R ≪ c/ω f i , where a 0 is the atomic size, ω f i the electron transition frequency, and c the speed of light. Under such conditions, restricting our attention to dipole-allowed transitions, the interaction between the two atoms can be described as Figure 1. Schematic illustration of twocenter dielectronic recombination (2CDR). Shown is the first step, where an electron is captured from the continuum at center A (left) with simultaneous excitation of a neighbouring atom at center B (right). Afterwards, the system stabilizes by radiative deexcitation of atom B.
where r A,B are the coordinates of each participating electron with respect to its own nucleus. The dipole-dipole interaction (1) leads to a coupling of the initial two-electron state ψ p iwhere the incident electron with momentum p i is in the continuum, while the atomic electron at center B is in the ground state -to the resonant intermediate state ψ a -where the incident electron is in a bound state at center A and the atomic electron is in an excited state. This intermediate state is unstable and can either decay backwards via two-center Auger decay (resp. interatomic Coulombic decay) or it can stabilize via radiative deexcitation at center B, this way completing the recombination.
The cross section for the two-center recombination pathway is where Γ a = 2πp i | ψ a |V AB |ψ p i | 2 dΩ p i and Γ (B) rad are the widths due to two-center Auger decay and spontaneous radiative decay of the excited state at center B, respectively, and Γ = Γ a +Γ (B) rad is the total width. Besides, E p i and E a denote the energies of the two-electron states ψ p i and ψ a , respectively.
It is instructive to consider the ratio between the cross sections for two-center dielectronic recombination and the usual single-center radiative recombination at center A (without participation of atom B). Exactly on the resonance, where E p i = E a , one obtains the very simple relation ( rad is satisfied which will always hold true at sufficiently large values of R. The 1/R 6 behaviour reflects the resonant dipole-dipole nature of the two-center process. Since the typical transition frequency ω in atoms and ions is related to their spatial size a by ω ∼ αc/a, an alternative representation of Eq. 6 , where α ≈ 1/137 denotes the fine-structure constant. Hence, it is the ratio of atomic size to interatomic distance which mainly determines the relative importance of 2CDR. Since typical atomic transition energies lie in the range of ω f i ∼ 1-10 eV, relation (3) demonstrates that the resonant two-center channel may dominate over the nonresonant singlecenter channel at interatomic distances of R ∼ 1-10 nm, while it can still be competitive at R ∼ 10-100 nm.
As an example, let us consider recombination in a system initially consisting of a free electron with energy close to 27.2 eV, a proton at center A and a singly-charged helium ion at center B. In this situation, the electron can recombine with the proton either radiatively or via the Fig. 2. While the cross section for radiative recombination with the proton amounts to about 30 barn, the resonant two-center channel can strongly enhance the probability for recombination by two orders of magnitude at an internuclear separation of 3 nm. We note that in our calculation the positions of the nuclei are assumed to be fixed. To account for nuclear motion, an average over the internuclear distance in Eq. (2) could be carried out. It is important to note that the electron capture, which represents the first step of the 2CDR process, occurs on a very short time scale of about 1 a.u. during which the positions of the nuclei practically do not change. The second step of 2CDR (i.e., the radiative stabilization) does not depend on whether the nuclei move or not. Hence, taking an average in Eq. (2) would essentially mean to average the two-center Auger rate in the numerator, provided that Γ a < Γ Two-center dielectronic recombination can also be of relevance in more complex systems ranging from dense plasmas to (bio)chemical environments. As an example from daily life, let us consider an alkali salt such as NaCl dissolved in water. In this case, a hydrated Na + (H 2 O) n complex forms where on average n = 6 water molecules shield the cation at a mean distance of about R ≈ 3Å (and similarly for the Cl − anion). The ionization potential of neutral Na is 5.14 eV. The first photo-absorption band in water is relatively broad; it lies around 7.5 eV and has a width of a few eV. Hence, electrons with resonant energies in the range of about 1.5-4 eV can recombine with the Na + ion through 2CDR involving the assistance of one of the surrounding water molecules. Assuming that Eq. (3) also applies approximately to this more complex system, we may roughly estimate that the electron recombination is largely enhanced by ten orders of magnitude due to 2CDR. The corresponding cross section amounts to σ 2CDR ∼ 10 −10 cm 2 in the resonant region. Note that, even after integration over energies in an incident electron beam, 2CDR may have an enormous effect on the total number of recombination events since the absorption band in water is quasi-continuous and rather broad.
Concluding this section we point out that the electron capture at center A may also be accompanied by the ionization (rather than excitation) of a neighbouring atom B. This process, which has been studied in [23], effectively represents an interatomic electron exchange, letting the total charge at the two centers remain unchanged. Moreover, we note that the time-reversed process of 2CDR is two-center resonant photo-ionization in the limit of low field intensities [5,24,25]. Here, an incident photon resonantly excites an atom which subsequently deexcites via interatomic Coulombic decay. This process shows particularly interesting properties for larger photon field strengths due to the Rabi flopping dynamics induced in the asissting atom. Very special effects also arise in the resonant two-photon ionization of a system consisting of two identical (e.g. hydrogen) atoms [26]. The energy transfer between two spatially separated hydrogen atoms exposed to a nonresonant laser field has been investigated in [27].
Radiative electron-ion recombination assisted by a strong laser field
The availability of high-intensity lasers has led to sustained interest in radiative electron-ion recombination in the presence of an external laser field [16,17,18]. The latter can strongly modify the field-free properties of the process. The corresponding theoretical description, which so far was always restricted to the nonrelativistic interaction regime (with the laser field being treated in dipole approximation), has recently been extended to the fully relativistic domain [6]. Relativistic effects may arise from a high incident electron energy, a strong coupling to the applied laser field, and a deeply bound final electron state. Within the relativistic theory, the transition amplitude for laser-assisted radiative recombination may be written as whereŴ = α ·Â γ denotes the interaction with the quantized radiation field γ which is responsible for the spontaneous photo-emission during the process. The initial and final states are of product form, denoting the respective states of the electron, whereas |0 and |k ′ λ represent states of the radiation field containing, respectively, no photons and one non-laser photon of momentum k ′ and polarization λ.
The laser field is taken as a classical plane wave of frequency ω, wave vector k and field strength F 0 . For mathematical simplicity, it is assumed to be circularly polarized. The laser four-potential may be written as A µ = A 0 (1, ck/ω), with A 0 = F 0 (e 2 · r cos ϕ − e 1 · r sin ϕ) and the laser phase ϕ = ωt−k·r. This gauge offers the advantage that the hydrogen-like final bound state of the electron, ψ f (t), can be taken to a good approximation as a pure Coulomb-Dirac wave function which is undistorted by the laser field. The initial state of the electron may be approximated by ψ i (t) = exp[i(F 0 /ω)(e 1 · r cos ϕ + e 2 · r sin ϕ)]ψ p i , where ψ p i denotes a Dirac-Volkov state in the usual representation for an electron with asymptotic momentum p i . The latter is adapted to the chosen gauge by the additional phase factor. These approximations are justified as long as the nuclear charge satisfies the condition Z ≪ v i (with the incident electron velocity v i ) and the laser field strength is much lower than the nuclear Coulomb field experienced by the bound electron.
Within this framework, the transition amplitude can be evaluated analytically. It adopts the form where E p i is the initial electron energy outside the laser field, U p is the relativistic ponderomotive energy describing the "dressing" of the electron by the laser field, and ε 0 is the final electron energy in the bound state. The summation index n counts the number of laser photons absorbed (if n < 0) or emitted (if n > 0) during the recombination step. The structure of the transition amplitude (5) implies that the recombination cross section also decomposes into a sum of partial cross sections σ n which give the probability for emission of a photon of energy ω ′ = E p i + U p − ε 0 − nω. The energy spectrum of emitted photons will thus consist of a comb of lines, with a spacing of ω between neighbouring lines. The height of each line is determined by the squared magnitude of the matrix elements M n . One can show that the relevant range of photon numbers is bounded by ±n max with where p ⊥ i is the component of the initial electron momentum perpendicular to k. As an example we consider the recombination of a relativistic electron of momentum p i = 3mc into the ground state of a bare Zn ion (nuclear charge number Z = 30). In the absence of any laser field, the spectrum of emitted photons consists of a single line at the free-bound transition energy given by ω ′ = 1.117 MeV. In contrast, in the presence of an 800-nm laser beam of 10 16 W/cm 2 intensity which is irradiated under 90 degrees with respect to the incident electron velocity, the emission spectrum will comprise a multitude of lines forming a quasi-continuous distribution with a total width of about 2n max ω ≈ 50 keV and very pronounced side wings (see Fig. 3). The total cross section amounts to ∼ 0.1 barn.
The distinct features of the photo-emission spectrum in Fig. 3 find a semiclassical explanation in terms of the instantaneous electron energy inside the field. For the considered beam geometry, the classical electron energy reads Since the interaction with the laser field renders the electron energy time dependent, the emitted photon energy depends on the moment of recombination. Maximum photon energies will result when the recombination occurs at laser phases around ϕ = 0, whereas ϕ = π leads to minimum photon energies. The spectral width, accordingly, amounts to 2F 0 v i /ω which agrees with the prediction from quantum mechanics. Besides, the center of the spectrum is slightly shifted upwards by the ponderomotive energy U p ≈ 0.2 keV. Genuinely relativistic signatures arise in the angular distribution of the emitted photons. They are caused by the light pressure exerted by the laser field, leading to an electron momentum component along the laser beam axis [6].
4. Electron-assisted annihilation: three-body recombination with the QED vacuum Finally, we briefly discuss a process where an electron recombines not with an ion but rather with a hole in the vacuum state of quantum electrodynamics. This is, the electron recombines with a vacancy in the Dirac sea of negative-energy states or, in other words, it annihilates with a positron. However, contrary to the usual annihilation into two photons, e + e − → 2γ, in the process under consideration the annihilation proceeds in the presence of a nearby spectator electron which is capable of absorbing recoil momentum. In this situation the pair can annihilate by emitting just a single photon according to The triple interaction (9) resembles three-body electron-ion recombination because also there the recombining electron transfers its energy excess to a nearby partner electron. In both processes, the recoil absorbed by the assisting electron reduces the number of emitted photons by one, as compared to radiative recombination and two-photon annihilation, respectively. Reaction (9) has been studied in some detail with respect to the decay of positronium ions [28,29]. In this case the assisting electron is loosely bound to the neutral positronium core. For the ratio of single-photon to two-photon annihilation in Ps − , the small value R 1γ /R 2γ ∼ 10 −10 was found. The reason for this small ratio is that the annihilation channel (9) requires very high particle densities to become sizeable. Its relative contribution with respect to e + e − → 2γ may be estimated by order of magnitude as where ρ is the number density of electrons and λ C the Compton wavelength. Besides, a factor of the fine-structure constant α appears, because (9) is of higher order than e + e − → 2γ in the QED coupling parameter. For the case of positronium ions, Eq. (10) becomes R 1γ /R 2γ ∼ α 4 . Equation (10) implies that the single-photon annihilation channel will become prominent at extremely high densities of the order of ρ ∼ λ −3 C ≈ 10 31 cm −3 and above. In fact, we have shown [7] that reaction (9) starts to dominate over e + e − → 2γ in terms of total rate at e + e − plasma densities of 6.5×10 32 cm −3 , corresponding to a plasma temperature of about 3 MeV. Such enormous densities are generated in astrophysical processes such as the initial stage of gammaray bursts [30]. They also existed in the lepton era of the early universe between ∼ 10 −4 s and 4 s during which the density dropped from ∼ 10 38 cm −3 down to ∼ 10 28 cm −3 [31]. Note for comparison that also three-body recombination of electrons with ions dominates over the usual radiative recombination when the electron density is sufficiently high.
Besides, under certain conditions the photon produced in reaction (9) may exhibit very special properties [7]. The latter might allow for an identification of the single-photon annihilation process in a dedicated laboratory experiment. Suppose that a relativistic electron beam of few-MeV energy penetrates through a dense, cold e + e − plasma target. Then the photon from the three-body process (9) will be emitted predominantly into the backward direction with respect to the incident electron beam. Moreover, the photon is polarized to a high degree, with the polarization vector being orthogonal to the plane spanned by the momentum vectors of the photon and incident electron. These peculiar features of the photo-emission can be explained in intuitive terms based on a combination of classical electrodynamics and the familiar properties of two-photon annihilation [7].
Note that the inverse process of (9) is photo-production of an e + e − pair on an electron which has been studied for a long time. In recent years the interest in this kind of trident pair creation has been revived in view of its generalization to the multiphoton case in strong laser fields [32,33].
Conclusion
Three different electronic recombination processes have been considered which take place in dense environments consisting of atoms, electrons or photons. First, it was shown that dielectronic recombination can occur with participation of two spatially well-separated atomic centers. This interatomic recombination process can dominate over the competing single-center radiative recombination channel at atomic densities of the order or above ∼ 10 18 cm −3 . Second, we demonstrated that radiative recombination can be strongly modified by background laser fields of high intensity, corresponding to photon densities of ∼ 10 24 cm −3 . Finally, we discussed the process of electron-assisted annihilation of an electron-positron pair into a single photon, which may be viewed as a three-body recombination with a hole in the vacuum state of QED. It can compete with the usual annihilation into two photons at electronic densities around ∼ 10 32 cm −3 . While the latter process can be of relevance in astrophysical e + e − plasmas, the first two may be tested in the laboratory by modern experimental techniques available today.
|
2019-04-20T13:08:09.989Z
|
2012-11-05T00:00:00.000
|
{
"year": 2012,
"sha1": "b0bb0f1b8783a799d5b9f09aee75565587ba6941",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/388/1/012003",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3b487d8a4ed22fd5751e6b33fdcf3d7508c1de86",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
57189468
|
pes2o/s2orc
|
v3-fos-license
|
Successive Wyner-Ziv Coding for the Binary CEO Problem under Logarithmic Loss
The $L$-link binary Chief Executive Officer (CEO) problem under logarithmic loss is investigated in this paper. A quantization splitting technique is applied to convert the problem under consideration to a $(2L-1)$-step successive Wyner-Ziv (WZ) problem, for which a practical coding scheme is proposed. In the proposed scheme, low-density generator-matrix (LDGM) codes are used for binary quantization while low-density parity-check (LDPC) codes are used for syndrome generation; the decoder performs successive decoding based on the received syndromes and produces a soft reconstruction of the remote source. The simulation results indicate that the rate-distortion performance of the proposed scheme can approach the theoretical inner bound based on binary-symmetric test-channel models.
a soft reconstruction of the remote source. The simulation results indicate that the rate-distortion performance of the proposed scheme can approach the theoretical inner bound based on binarysymmetric test-channel models.
I. Introduction
Multiterminal source coding is an important subject of network information theory. Research on this subject has yielded insights and techniques that are useful for a wide range of applications, including, among other things, cooperative communications [2] distributed storage [3], and sensor networks [4]. A particular formulation of multiterminal source coding, known as the Chief Executive Officer (CEO) problem, has received significant attention [5].
In this problem, there are L encoders (also called agents), which observe independently corrupted versions of a source; these encoders compress their respective observations and forward the compressed data separately to a central decoder (also called CEO), which then produces a (lossy) reconstruction of the target source.
The quadratic Gaussian setting of the CEO problem has been studied extensively, for which the rate-distortion region is characterized completely in the scalar case [6]- [11] and partially in the vector case [12], [13]. Extending these results beyond the quadratic Gaussian setting turns out to be highly non-trivial; there are some results in [14]- [16]. Indeed, even for many seemingly simple sources and distortion measures, the understanding of the relevant information-theoretic limits is rather limited. A remarkable exception is a somewhat underappreciated distortion measure called logarithmic loss (log-loss). As shown by Courtade and Weissman [17], the rate-distortion region of the CEO problem under log-loss admits a single-letter characterization for arbitrary finite-alphabet sources and noisy observations. Different from the conventional distortion measures which are typically imposed on "hard" reconstructions defined over the given source alphabet, the reconstructions associated with log-loss are "soft". Specifically, in the context of the CEO problem, the most favorable "soft" reconstruction is essentially the a posteriori distribution of the source given the compressed data received from the encoders (which is a sufficient statistic); it is more informative than its "hard" counterparts and more suitable for many downstream statistical inference tasks.
Recent years have seen significant interests in a new paradigm of wireless communications called cloud-radio access network (C-RAN). It has been recognized that the informationtheoretic and coding-theoretic aspect of C-RAN is closely related to that of the CEO problem under log-loss [18]. This intriguing connection greatly enriches the implication of the latter problem and provides further motivations for the relevant research.
A main contribution of the present paper is a practical coding scheme for the CEO problem under log-loss. We adopt a hierarchical approach by decomposing the CEO problem into a set of simpler problems which the existing coding techniques can be directly brought to bear upon and then combining these small pieces to find the solution to the original problem.
Two most basic problems in information theory are point-to-point channel coding and (lossy) source coding (also known as quantization). It is well known that the fundamental limits of these two problems can be approached using graph-based codes (e.g., low-density paritycheck (LDPC) codes for channel coding [19] and low-density generator-matrix (LDGM) codes for (lossy) source coding [20]) in conjunction with iterative message-passing algorithms (e.g., the sum-product (SP) algorithm for channel decoding [19] and the bias-propagation (BiP) algorithm for (lossy) source encoding [21], [22]). These basic coding components can serve as the building blocks of more sophisticated schemes for the problems at the second level of the hierarchy. Notable examples include the Gelfand-Pinsker problem and the Wyner-Ziv problem, which are solved via proper combination of source codes and channel codes [23], [24]. With these solutions in hand, one can then tackle the problems at the third level or even higher. From this perspective, our proposed scheme for the CEO problem can be interpreted as successive implementation of Wyner-Ziv coding.
The conversion of the CEO problem to the Wyner-Ziv problem is realized using quantization splitting. The idea of quantization splitting is by no means new. Indeed, it has been May 28, 2019 DRAFT applied to the multiterminal source coding problem [25] and multiple description problem [26] among others [27], particularly in the quadratic Gaussian setting. However, to the best of our knowledge, the application of quantization splitting is mainly restricted in the theoretical domain as a conceptual apparatus, and its practical implementation has not been addressed in the literature, at least for the problem under consideration (namely, the CEO problem under log-loss). In this work we mainly focus on the setting where the source is binarysymmetric and is corrupted by independent Bernoulli noises. It is worth emphasizing that this simple setting captures the essential features of the CEO problem and the methodology underlying our proposed scheme is in fact broadly applicable.
The organization of this paper is as follows. The problem definition and the concept of quantization splitting are presented in Section II. The proposed scheme is described in Section III. Section IV contains some analytical and numerical results. We conclude the paper in Section V.
A. Notations
Throughout this paper, the logarithm is to the base 2. Random variables and their realizations are shown by capital letters and lowercase letters, respectively. Sets and alphabet set of random variables are depicted by calligraphic letters. Furthermore, matrices are shown by bold-faced letters. The binary entropy function is shows the binary convolution of p and d. The list of symbols used in the paper is represented in Table I.
B. System Model
Let X n = (X 1 , X 2 , · · · , X n ) an independent and identically distributed (i.i.d.) remote source. L noisy observations of X n are available in L links that are mutually independent without any communication among them. These noisy observations, Y n l for l ∈ I L = ∆ {1, 2, · · · , L}, are generated by X n through independent memoryless channels. The block diagram of an L-link CEO problem is depicted in Fig. 1. In each link, an encoder maps its noisy observation to a codeword C l by using a function f l , as follows: The codewords C l , for l ∈ I L , are sent to a joint CEO decoder via noiseless channels. The CEO decoder produces a soft reconstructionX n = (X 1 ,X 2 , · · · ,X n ) of the original remote May 28, 2019 DRAFT source X n by using a function g, as follows: Specifically,X t is decoder's approximation of the posterior distribution of X t given (C 1 , C 2 , · · · , C L ), Definition 1: The log-loss induced by a symbol x ∈ X and a probability distributionx on X is defined as More generally, for a sequence of symbols x n = (x 1 , x 2 , · · · , x n ) and a sequence of distribu- Definition 2: A rate-distortion vector (R 1 , R 2 , · · · , R L , D) is called strict-sense achievable under log-loss, if for all sufficiently large n, there exist functions f 1 ,f 2 ,...,f L , and g respectively according to (1) and (2) such that where E(·) denotes expectation function. The closure of the set of all strict-sense achievable vectors (R 1 , R 2 , · · · , R L , D) is called the rate-distortion region of the CEO problem under log-loss and is denoted by RD ⋆ CEO .
Definition 3 ( [17, Definition 7]):
for some joint distribution where in (6a), Y A = {Y l : l ∈ A} and A c = I L \A.
Definition 4 ( [17, Definition 8]):
and (6b), for some joint distribution (7), where [x] + = max{0, x} and It is shown in [17] that moreover, there is no loss of generality in imposing the cardinality bounds |U l | ≤ |Y l |, l ∈ I L and |Q| ≤ L + 2 on the alphabet sizes of auxiliary random variables U l and timesharing variable Q, respectively.
Given test channels p U l |Y l , l ∈ I L , we define RD CEO (p U l |Y l , l ∈ I L ) to be the set of all where X, Y I L , and U I L are jointly distributed according to p X (x) L l=1 p Y l |X (y l |x)p U l |Y l (u l |y l ).
Note that (10) and (11) correspond respectively to (6a) and (6b) with timesharing variable Q set to be a constant. Therefore, RD i CEO (as well as RD o CEO and RD ⋆ CEO in light of (9)) can be expressed as the convex hull of the union of RD CEO (p U l |Y l , l ∈ I L ) over all (p U l |Y l , l ∈ I L ). Moreover, we define R CEO (p U l |Y l , l ∈ I L ) to be the set of all (R 1 , R 2 , · · · , R L ) satisfying (10) and define its dominant face, denoted by F CEO (p U l |Y l , l ∈ I L ), to be the set of . Due to the contrapolymatroid structure of R CEO (p U l |Y l , l ∈ I L ) [25], [27], F CEO (p U l |Y l , l ∈ I L ) is non-empty and
To achieve non-corner points of F CEO (p U l |Y l , l ∈ I L ), we employ the quantization splitting technique introduced in [25], which is a generalization of the source splitting technique [29] and a counterpart of the rate splitting technique in channel coding [30], [31]. Roughly speaking, the basic idea underlying the quantization splitting technique is that each noncorner point in the L-dimensional space can be projected to a corner point in the (2L − 1)-dimensional space. Specifically, it is known [25, Theorem 2.1] that, for any rate tuple , there exist random variables W l , l ∈ I L , and a well-ordered permutation σ 1 on the set {W 1 , W 2 , · · · , W L , U 1 , U 2 , · · · , U L } such that where {W l } − σ and {U l } − σ represent the set of random variables that respectively appear before W l and U l in the well-ordered permutation σ; moreover, W l is a physically degraded version U l , l ∈ I L , and at least one W l is independent of U l (and thus can be eliminated).
It is instructive to view U l as a fine description of Y l and view W l as a coarse description split from U l , l ∈ I L . Eq. (12) suggests that the given rate tuple (R 1 , R 2 , · · · , R L ) can be achieved via successive Wyner-Ziv coding with decoding order specified by σ. It should be emphasized that the successive Wyner-Ziv coding scheme for non-corner points is in general more complicated than that for corner points. First of all, the scheme for noncorner points involves more encoding and decoding steps. Secondly and more importantly, to realize the splitting effect, one needs to generate a coarse-description codebook and then, for each of its codewords, generate a fine-description codebook; as a consequence, the number of finite-description codebooks grows exponentially with the codeword length, causing a serious problem in practice. In this work we circumvent this problem by using a codebook construction technique inspired by the functional representation lemma [32], [33]. Successive refinement coding scheme is also a multi-terminal encoding problem for, basically, downlink, where terminals are classified into several groups, each having different distortion requirements. The remote source is encoded such that the description for the groups having higher distortion requirement can help recover another groups having lower distortion requirement.
Alternatively, our proposed coding scheme successively decodes binary observations and then softly reconstructs the remote source with a single value of distortion under the logloss criterion.
III. Description of the Proposed Scheme
Consider an L-link binary CEO problem, where a remote binary-symmetric source (BSS) is corrupted by independent Bernoulli noises with parameters p 1 , p 2 , ... , and p L , i.e., We make the following two assumptions.
1) A binary-symmetric test channel model is adopted for each encoder. More specifically, it is assumed that p U l |Y l is a binary-symmetric channel (BSC) with crossover probability , l ∈ I L , are mutually independent and are independent of (X, Y I L ) as well. This assumption is justified by the numerical results in [28].
2) A BSC model is adopted for each splitter. More specifically, it is assumed that p W l |U l is a BSC with crossover probability δ l , l ∈ I L . Hence, we can write are mutually independent and are independent of (X, Y I L , U I L ) as well. According to [31,Definition 2], this assumption incurs no loss of generality.
Since the coding schemes associated with different well-ordered permutations are conceptually similar, for ease of exposition we focus on the specific permutation $\sigma = (W_1, W_2, \cdots, W_{L-1}, U_L, U_{L-1}, \cdots, U_1)$. Each conditional mutual information in (12) can be written as the difference of two terms, one associated with quantization and the other with binning. As an example, consider the second term of $R_1$, i.e.,
$$I(Y_1; U_1 \mid W_1, \cdots, W_{L-1}, U_2, \cdots, U_L) = I(Y_1; U_1 \mid W_1, U_2, \cdots, U_L) \qquad (14)$$
$$= I(Y_1; U_1 \mid W_1) - I(U_2, \cdots, U_L; U_1 \mid W_1), \qquad (15)$$
where (14) is due to the degradedness of $W_l$ with respect to $U_l$, $l = 2, \cdots, L-1$, and (15) is because of the fact that $(U_1, W_1)$ and $(U_2, \cdots, U_L)$ are conditionally independent given $Y_1$.
The term I(Y 1 ; U 1 |W 1 ) specifies the quantization rate needed to generate the fine description U 1 given the coarse description W 1 while the term I(U 2 , · · · , U L ; U 1 |W 1 ) specifies the amount of rate reduction achievable through binning.
We use a binary quantizer to map outputs of a BSS to the codeword of an LDGM code at the minimum Hamming distance. These quantizers are utilized in the encoders of our proposed coding scheme. In practice, binary quantization can be realized by iterative message-passing algorithms such as the BiP algorithm [21] or the survey-propagation algorithm [20]. The presence of side information can further reduce the compression rate required for a prescribed distortion constraint. This lossless source coding scenario can be practically realized by a binning operation based on channel coding schemes [4]. In our proposed coding scheme, binning is implemented using LDPC codes with the syndrome generation scheme; the same binning scheme is used for the asymmetric Slepian-Wolf coding problem. In practice, the SP algorithm can be used to iteratively decode the LDPC coset code specified by the given syndrome. A toy illustration of minimum-Hamming-distance quantization is given below.
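As a concrete, if toy, illustration of minimum-Hamming-distance quantization, the brute-force search below plays the role of the BiP/survey-propagation quantizers used with long LDGM codes; the generator matrix is an arbitrary small example, not one of the optimized codes of the scheme:

```python
# Brute-force minimum-Hamming-distance quantization to a small binary linear code.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1]], dtype=np.uint8)  # 3x7 generator matrix (arbitrary)
m, n = G.shape
codebook = np.array([(np.array(msg, np.uint8) @ G) % 2
                     for msg in itertools.product([0, 1], repeat=m)], dtype=np.uint8)

def quantize(y):
    """Return the codeword closest to y in Hamming distance."""
    dists = (codebook ^ y).sum(axis=1)
    return codebook[dists.argmin()]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, n, dtype=np.uint8)
w = quantize(y)
print("source  :", y)
print("codeword:", w, "distortion:", (w ^ y).mean())
```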
A. The Proposed Coding Scheme: an Information-Theoretic Description
To elucidate the overall structure of the proposed scheme, we first give a short description using information-theoretic terminology. First, let $W_L \triangleq U_L$. In the following description, all the $\epsilon$ quantities are small positive real numbers.
Codebook Generation:
1) For $l \in I_L$, construct a codebook $C_{W_l}$ of rate $I(Y_l; W_l) + \epsilon_{l,1}$ with each codeword generated independently according to $\prod_{t=1}^{n} p_{W_l}(w_{l,t})$. 2) For $i \in I_{L-1}$ and each codeword $w^n_i \in C_{W_i}$, construct a codebook $C_{U_i}(w^n_i)$ of rate $I(Y_i; U_i | W_i) + \epsilon_{i,2}$ with each codeword generated independently according to $\prod_{t=1}^{n} p_{U_i|W_i}(u_{i,t} | w_{i,t})$. 3) Assign the codewords of $C_{W_l}$ ($l = 2, \cdots, L$) and of $C_{U_l}(\cdot)$ ($l \in I_L$) uniformly at random to bins at the rates dictated by (12).
Encoding:
1) For $l \in I_L$ and a given $y^n_l$, the l-th encoder finds a codeword $w^n_l \in C_{W_l}$ that is jointly typical with $y^n_l$. Note that the Hamming distance between $w^n_l$ and $y^n_l$ is approximately $n(d_l * \delta_l)$, where $a * b \triangleq a(1-b) + b(1-a)$ denotes binary convolution.
2) For i ∈ I L−1 , the i-th encoder finds a codeword u n i ∈ C U i (w n i ) that is jointly typical with (y n i , w n i ). Note that the Hamming distance between u n i and y n i is approximately nd i while the Hamming distance between u n i and w n i is approximately nδ i . 3) For l ∈ I L , the l-th encoder sends the index b(w n l ) of the bin that contains w n l (for l = 1, it only sends the index i(w n 1 ) of w n 1 , and for l = L nothing is sent), and the index b(u n l ) of the bin that contains u n l to the decoder.
Decoding:
1) The decoder first decodes $w^n_1$ based on $i(w^n_1)$. 2) For $i = 2, \cdots, L$, it decodes $w^n_i$ by searching in the bin with index $b(w^n_i)$ for the unique codeword that is jointly typical with $(w^n_1, w^n_2, \cdots, w^n_{i-1})$. 3) For $j = L-1, \cdots, 1$, it decodes $u^n_j$ by searching in the bin with index $b(u^n_j)$ for the unique codeword that is jointly typical with $(w^n_1, \cdots, w^n_j, u^n_{j+1}, \cdots, u^n_L)$. 4) Finally, it uses $(\hat{u}^n_1, \cdots, \hat{u}^n_L)$ to produce a soft reconstruction of $x^n$ by the following rule (the posterior probability, which is optimal under log-loss): $\hat{x}_t = p_{X|U_{I_L}}(1 \mid \hat{u}_{1,t}, \cdots, \hat{u}_{L,t})$, $t = 1, \cdots, n$. One possible implementation of this rule is sketched below.
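Under the log-loss criterion, the soft reconstruction is the posterior probability of the source bit given the decoded descriptions. The sketch below assumes, consistently with the test-channel model above, that the end-to-end channels from X to U_l are BSCs with crossover probabilities P_l = p_l * d_l; it is an illustrative implementation, not the paper's exact formula:

```python
# Soft reconstruction of X under log-loss: output the posterior Pr(X = 1 | u_hat).
import numpy as np

def soft_reconstruct(u_hat, P):
    """u_hat: L x n array of decoded descriptions; P: end-to-end crossover probs.
    Returns x_hat[t] = Pr(X_t = 1 | u_hat[:, t]) assuming X ~ Bern(1/2)."""
    u_hat = np.asarray(u_hat, dtype=float)
    # Log-likelihood ratio log Pr(u | X=1) / Pr(u | X=0), summed over links.
    llr = np.zeros(u_hat.shape[1])
    for ul, Pl in zip(u_hat, P):
        llr += (2 * ul - 1) * np.log((1 - Pl) / Pl)
    return 1.0 / (1.0 + np.exp(-llr))  # sigmoid of the LLR (uniform prior on X)

# Example: three links observing a single symbol through BSC(P_l).
conv = lambda a, b: a * (1 - b) + b * (1 - a)  # binary convolution
P = [conv(0.1, 0.11), conv(0.2, 0.15), conv(0.25, 0.2)]
print(soft_reconstruct([[1], [1], [0]], P))
```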
B. The Proposed Coding Scheme: a Coding-Theoretic Description
Now we translate the above information-theoretic description of the proposed scheme into a coding-theoretic description. Along the way, we address certain practical issues encountered in codebook generation using a construction technique inspired by the functional representation lemma. For notational simplicity, the description is given for the case L = 3; the extension to the general case is straightforward.
Codebook Generation:
1) For $l \in I_3$, generate an LDGM codebook $C_{W_l}$ with rate $I(Y_l; W_l) + \epsilon_{l,1} = 1 - h_b(d_l * \delta_l) + \epsilon_{l,1}$ (with $\delta_3 = 0$ since $W_3 = U_3$). 2) For $i \in I_2$ and each codeword $w^n_i$, construct a codebook $C_{U_i}(w^n_i)$ as follows (this construction is inspired by the functional representation lemma). Generate an LDGM codebook $C'_i$ of length $nK_i$, and map each codeword $c^{nK_i} \triangleq (c_1, c_2, \cdots, c_{nK_i}) \in C'_i$ to a codeword of length $n$ by means of $\phi_i(\cdot)$; this is known as Gallager's mapping [34], which is widely used to construct source or channel codes with nonuniform empirical distributions [35], [36], and the approximation in (17) can be made arbitrarily precise when $K_i \to \infty$. By doing this for all codewords in $C'_i$, we obtain $\phi_i(C'_i)$, a set of codewords each of length $n$. Hence, the codebook $C_{U_i}(w^n_i)$ can be defined as $w^n_i \oplus \phi_i(C'_i)$, which is the codebook obtained by adding $w^n_i$ to each codeword in $\phi_i(C'_i)$. Now consider the backward channels from $U_i$ to $(W_i, V'_i)$, where $V'_i$ and $W_i$ are mutually independent; the representation of such backward channels can be viewed as a manifestation of the functional representation lemma, and it is instructive to view $\phi_i(C'_i)$ as a codebook generated by $V'_i$. 3) Partition $C'_1$, $C_{W_2}$, $C'_2$, and $C_{W_3}$ into bins by means of the LDPC parity-check matrices $\hat{H}_1$, $\hat{H}_2$, $\hat{H}'_2$, and $\hat{H}_3$, respectively, with each bin containing the codewords that share a common syndrome.

Encoding: Different from the information-theoretic description in Section III-A, we shall interpret joint typicality encoding as minimum Hamming distance encoding, which is then implemented using the BiP algorithm.
1) For l ∈ I 3 and a given y n l , the l-th encoder finds a codeword w n l ∈ C W l from an LDGM code that is closest (in the Hamming distance) to y n l .
2) For $i \in I_2$, find a codeword $c^{nK_i}_i \in C'_i$ such that $w^n_i \oplus \phi_i(c^{nK_i}_i)$ is the closest (in the Hamming distance) to $y^n_i$, i.e., $\phi_i(c^{nK_i}_i)$ is the closest to $y^n_i \oplus w^n_i$. 3) Send the index $i(w^n_1)$ of $w^n_1$ and the syndrome $c^{nK_1}_1 \hat{H}_1$ from the first link to the decoder; note that $w^n_1 = i(w^n_1) G_{W_1}$, where $G_{W_1}$ is the generator matrix of the LDGM code $C_{W_1}$. Also, send the syndromes $w^n_2 \hat{H}_2$ and $c^{nK_2}_2 \hat{H}'_2$ from the second link to the decoder. Finally, send the syndrome $u^n_3 \hat{H}_3$ from the third link to the decoder (a toy sketch of this syndrome generation is given below).
The block diagram of the proposed encoding scheme is depicted in Fig. 2.
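For concreteness, the toy sketch below shows the syndrome-generation step: the bin index of a sequence is its syndrome under a parity-check matrix over GF(2). The matrix is arbitrary, standing in for the optimized LDPC codes of the scheme:

```python
# Syndrome-based binning over GF(2): the bin index of a sequence is its
# syndrome, and the decoder searches the coset indexed by that syndrome.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0, 1],
              [0, 1, 1, 0, 1, 0, 1],
              [1, 0, 1, 0, 0, 1, 0]], dtype=np.uint8)  # toy parity-check matrix

def syndrome(u, H):
    """Bin index b(u) = u H^T over GF(2)."""
    return (u @ H.T) % 2

rng = np.random.default_rng(2)
u = rng.integers(0, 2, H.shape[1], dtype=np.uint8)
print("sequence:", u, "-> bin index (syndrome):", syndrome(u, H))
```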
Decoding: Different from the information-theoretic description in Section III-A, we shall interpret joint typicality decoding as maximum a posteriori decoding, which is then implemented using the SP algorithm.
1) The decoder first recovers $\hat{w}^n_1 = i(w^n_1) G_{W_1}$ directly from the received index. 2) It then finds the most likely choice of $w^n_2$, denoted by $\hat{w}^n_2$, based on $\hat{w}^n_1$ and $w^n_2 H_2$ (which can be deduced from $w^n_2 \hat{H}_2$ and the fact that $w^n_2 \tilde{H}_2$ is a zero vector). This can be realized via conventional Slepian-Wolf decoding with $H_2$ defining the factor graph and $\hat{w}^n_1$ serving as side information (see, e.g., [37]). 3) It then finds the most likely choice of $u^n_3$, denoted by $\hat{u}^n_3$, based on $\hat{w}^n_1$, $\hat{w}^n_2$, and $u^n_3 H_3$ (which can be deduced from $u^n_3 \hat{H}_3$ and the fact that $u^n_3 \tilde{H}_3$ is a zero vector). This can be realized via conventional Slepian-Wolf decoding with $H_3$ defining the factor graph and $(\hat{w}^n_1, \hat{w}^n_2)$ serving as side information. 4) It then finds the most likely choice of $c^{nK_2}_2$, denoted by $\hat{c}^{nK_2}_2$, based on $\hat{w}^n_1$, $\hat{w}^n_2$, $\hat{u}^n_3$, and $c^{nK_2}_2 H'_2$ (which can be deduced from $c^{nK_2}_2 \hat{H}'_2$ and the fact that $c^{nK_2}_2 \tilde{H}'_2$ is a zero vector). This can be realized via joint demapping and decoding with $(H'_2, \phi_2)$ defining the factor graph and $(\hat{w}^n_1, \hat{w}^n_2, \hat{u}^n_3)$ serving as the channel output (see, e.g., [38]). Set $\hat{u}^n_2 = \hat{w}^n_2 \oplus \phi_2(\hat{c}^{nK_2}_2)$. 5) Finally, it finds the most likely choice of $c^{nK_1}_1$, denoted by $\hat{c}^{nK_1}_1$, based on $\hat{w}^n_1$, $\hat{u}^n_2$, $\hat{u}^n_3$, and $c^{nK_1}_1 H_1$ (which can be deduced from $c^{nK_1}_1 \hat{H}_1$ and the fact that $c^{nK_1}_1 \tilde{H}_1$ is a zero vector). This can be realized via joint demapping and decoding with $(H_1, \phi_1)$ defining the factor graph and $(\hat{w}^n_1, \hat{u}^n_2, \hat{u}^n_3)$ serving as the channel output. Set $\hat{u}^n_1 = \hat{w}^n_1 \oplus \phi_1(\hat{c}^{nK_1}_1)$.
The block diagram of the proposed decoding scheme is depicted in Fig. 3.
C. Analysis of the Proposed Coding Scheme
Now we proceed to specify the sizes of generator matrices and parity-check matrices used in the proposed scheme and other relevant parameters, assuming that d 1 , d 2 , d 3 , δ 1 , and δ 2 are given.
For the LDGM codes $C_{W_l}$ shown in Fig. 2, their generator matrices are of size $m_l \times n$, $l \in I_3$, respectively, where $m_l \approx n(I(Y_l; W_l) + \epsilon_{l,1}) = n(1 - h_b(d_l * \delta_l) + \epsilon_{l,1})$ with $\delta_3 = 0$. Furthermore, the generator matrix of the LDGM code $C'_i$ is of size $M_i \times nK_i$, $i \in I_2$. By properly designing these LDGM codes and increasing the block length $n$, one can ensure that the quantization distortions approach their design values. For the LDPC codes shown in Fig. 2, the sizes of their parity-check matrices are determined by the binning rates in (12). In the syndrome-decoding part of our proposed scheme, which is implemented by successive SP algorithms, if the optimized degree distributions for the BSC are used with sufficiently long LDPC codes, the bit error rate (BER) for the reconstruction of $\{U_1, U_2, U_3\}$ can be made very close to zero, i.e., $\mathrm{BER}_l \approx 0$ for $l \in I_3$. In such a case, the total distortion of the l-th link approximately equals $d_l$. In the design of the LDPC codes employed for syndrome generation and syndrome decoding, the code rates must respect the corresponding conditional-entropy relations, where $P_l = p_l * d_l$ for $l \in I_3$. Note that there are four compound LDGM-LDPC codes in the proposed scheme for a 3-link binary CEO problem: the LDGM codes $C'_1$, $C_{W_2}$, $C'_2$, and $C_{W_3}$, each paired with the LDPC code that bins it. A back-of-the-envelope sizing of these codes is sketched below.
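Assuming the BSC identities implied by the test-channel model, namely $I(Y_l; W_l) = 1 - h_b(d_l * \delta_l)$ and $I(Y_l; U_l | W_l) = h_b(d_l * \delta_l) - h_b(d_l)$, the matrix sizes can be estimated as in the sketch below (epsilon margins and code-design details are omitted; the numbers are illustrative):

```python
# Back-of-the-envelope sizing of the quantization codes from the BSC models.
import math

def h_b(p):
    """Binary entropy function (bits)."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

conv = lambda a, b: a * (1 - b) + b * (1 - a)  # binary convolution a * b

n = 10_000
d = [0.11, 0.15, 0.2]
delta = [0.05, 0.08, 0.0]  # delta_3 = 0 since W_3 = U_3
for l, (dl, deltal) in enumerate(zip(d, delta), start=1):
    m_l = round(n * (1 - h_b(conv(dl, deltal))))   # rows of the generator matrix of C_{W_l}
    r_fine = h_b(conv(dl, deltal)) - h_b(dl)       # extra quantization rate for U_l given W_l
    print(f"link {l}: m_{l} ~ {m_l}, I(Y;U|W) ~ {r_fine:.4f} bit/symbol")
```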
IV. Analytical Results
It is clear that, for the proposed scheme, there is freedom in choosing $(d_1, \cdots, d_L)$ and $(\delta_1, \cdots, \delta_L)$. The role of $(d_1, \cdots, d_L)$ is to specify the dominant face $\mathcal{F}_{CEO}(p_{U_l|Y_l}, l \in I_L)$ (and consequently the sum rate) while the role of $(\delta_1, \cdots, \delta_L)$ is to specify the location of the target rate tuple $(R_1, \cdots, R_L)$ on the dominant face. Note that for any $(R_1, R_2, \cdots, R_L, D) \in \mathcal{RD}_{CEO}(p_{U_l|Y_l}, l \in I_L)$, we have $\sum_{l=1}^{L} R_l \geq R_{th}$ and $D \geq D_{th}$, where $R_{th}$ and $D_{th}$ denote, respectively, the minimum sum rate and the minimum distortion associated with a given $(d_1, \cdots, d_L)$. Therefore, it is natural to choose $(d_1, \cdots, d_L)$ that achieves an optimal tradeoff between $R_{th}$ and $D_{th}$, which motivates the following definition.
We shall derive several analytical results surrounding Definition 5. An investigation along this line was initiated in [28] for the case L = 2.
Note that $[j]_l$ denotes the l-th digit in the binary expansion of $j$. Lemma 1: For the objective function $F$ defined in (26), its minimum value is equal to 1 when $\mu \geq 1$.
Proof: Assume this is not true, so that there exists $i$ such that $d^*_i > d^*_{i+1}$. We prove that by swapping $d^*_i$ and $d^*_{i+1}$ the objective function $F = D_{th} + \mu R_{th}$ decreases, which is a contradiction. Based on Lemmas 1 and 2, the term $(\mu - 1)H(U_{I_L})$ decreases by swapping $d^*_i$ and $d^*_{i+1}$, while the term $-\mu \sum_{l=1}^{L} h_b(d^*_l)$ clearly remains unchanged by this replacement. Without loss of generality, let us assume $i = 1$. Therefore, it is enough to show (32). By defining suitable variables $z_1$ and $z_2$ and using the fact that $h_b(x)$ is a concave function of $x$, (33) yields (34); furthermore, $h_b(x)$ is an increasing function on the interval $[0, 0.5]$, which gives (35). From (34) and (35), inequality (32) follows. Hence, the proof is completed.
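For a numerical illustration of optimal d-allocations, the brute-force search below minimizes $F = D_{th} + \mu R_{th}$ over a grid. The expressions for $R_{th}$ and $D_{th}$ used here are reconstructions consistent with the terms appearing in Lemma 1 and its proof ($R_{th} = H(U_{I_L}) - \sum_l h_b(d_l)$ and $D_{th} = \sum_l h_b(P_l) + 1 - H(U_{I_L})$ with $P_l = p_l * d_l$), not formulas quoted from the text:

```python
# Grid search for an optimal d-allocation minimizing F = D_th + mu * R_th.
import itertools
import math

def h_b(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

conv = lambda a, b: a * (1 - b) + b * (1 - a)  # binary convolution

def joint_entropy(P):
    """H(U_1, ..., U_L) for U_l = X xor Bern(P_l) with X ~ Bern(1/2)."""
    H = 0.0
    for u in itertools.product([0, 1], repeat=len(P)):
        pr = 0.5 * math.prod(Pl if ul else 1 - Pl for ul, Pl in zip(u, P)) \
           + 0.5 * math.prod(1 - Pl if ul else Pl for ul, Pl in zip(u, P))
        H -= pr * math.log2(pr)
    return H

def F(ds, ps, mu):
    P = [conv(p, d) for p, d in zip(ps, ds)]
    HU = joint_entropy(P)
    R_th = HU - sum(h_b(d) for d in ds)            # assumed minimum sum rate
    D_th = sum(h_b(Pl) for Pl in P) + 1 - HU       # assumed minimum log-loss distortion
    return D_th + mu * R_th

ps, mu = [0.1, 0.2, 0.25], 0.3
grid = [i / 50 for i in range(26)]  # d in [0, 0.5]
best = min(itertools.product(grid, repeat=3), key=lambda ds: F(ds, ps, mu))
print("optimal d-allocation:", best, "F =", round(F(best, ps, mu), 4))
```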
V. Numerical Results
Now we provide some numerical examples of optimal d-allocations. Without loss of generality, we assume $p_1 \leq p_2 \leq \cdots \leq p_L$. It follows from Lemma 4 that $d^*_1 \leq d^*_2 \leq \cdots \leq d^*_L$ for the resulting optimal d-allocation. Obviously, $d^*_l$ equals 0 for all $l$ when $\mu = 0$. There exists a $\mu_0 > 0$ such that for $0 \leq \mu < \mu_0$ all L links are involved in information sending, i.e., $d^*_l < 0.5$ for $l \in I_L$, while $d^*_L = 0.5$ for $\mu = \mu_0$. Therefore, the L-th link becomes inactive for $\mu \geq \mu_0$, and the problem reduces to an (L − 1)-link case. By increasing $\mu$, the noisy links are eliminated one by one, until the case L = 2 is reached. We illustrate this phenomenon through the following simple example.

Example 1: Let L = 3. For $0 \leq \mu < \mu_1$, all three links are active, i.e., $d^*_1 \leq d^*_2 \leq d^*_3 < 0.5$. Similarly, if $\mu_1 < \mu < \mu_2 \approx 0.4245$, then $d^*_1 < d^*_2 < 0.5$ and $d^*_3 = 0.5$. Next, for $\mu_2 \leq \mu < \mu_{max} = 0.64$, only the first link is involved in sending the information, i.e., $0.023 < d^*_1 < 0.5$ and $d^*_2 = d^*_3 = 0.5$.

The next example illustrates the sum-rate-distortion tradeoffs under equal d-allocation.

Example 2: Let L = 3 and $p_1 = p_2 = p_3$. The sum-rate-distortion curves under equal d-allocation are depicted in Fig. 4(a) for various noise parameters. In Fig. 4(b), the sum-rate-distortion curves under equal d-allocation are shown for the case of $p_l = 0.25$ with L = 3, 5, 7, 9.
Example 3: Based on the numerical and analytical results presented in [28], for a two-link binary CEO problem the equal allocation, i.e., $d^*_1 = d^*_2$, is not an optimal d-allocation for some values of the sum rate and distortion, even in the case of equal noise parameters $p_1 = p_2$.
Here, it is shown that this surprising result also holds for the multi-link case. In Fig. 5, the sum-rate-distortion curves are shown for some cases. As can be seen, involving all the links does not necessarily provide the minimum values of the sum rate and the distortion. Further numerical results are reported in Tables II and III; in particular, each choice of $(d_1, d_2, \cdots, d_L)$ corresponds to an optimal d-allocation, and the rate of each encoder is calculated accordingly. In Table II, in order to achieve a corner point there is no need for the splitter, i.e., $\delta_1 = \delta_2 = \delta_3 = 0$.
VI. Conclusion
We have proposed a practical coding scheme for the binary CEO problem under the logloss criterion based on the idea of quantization splitting. The underlying methodology is in fact quite general and is applicable to the non-binary case as well. It should be emphasized that, to implement the proposed scheme, one needs to first specify the test channel model for each encoder. In general, it is preferable for the system to operate in a mode that corresponds to a certain boundary point of the rate-distortion region. Identifying the boundary-attaining test channel models is an interesting research problem worthy of further investigation.
Note that in this case, $P'_1 P'_2 - P'_2 + P_2 - P_1 P_2 = \frac{P'_1 - P'_2 + P_2 - P_1}{2} \geq 0$. Similarly, the corresponding inequality can be established for the other terms. Due to the concavity of the function $f(x) = -x \log(x)$, the required comparison is concluded, and by summing over all possible values of $\Psi$ in the mentioned 4-tuple groups, (37) is proved.
|
2018-12-30T18:38:28.000Z
|
2018-12-30T00:00:00.000
|
{
"year": 2018,
"sha1": "c06d53ad34ddd910f5ddfda231af29df3ea381c6",
"oa_license": null,
"oa_url": "http://jultika.oulu.fi/files/nbnfi-fe2019121146690.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4b413ab924eeff6a87b3aac3ed4565a6000c6244",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
49416360
|
pes2o/s2orc
|
v3-fos-license
|
Study of global cloud droplet number concentration with A-Train satellites
Cloud droplet number concentration (CDNC) is an important microphysical property of liquid clouds that impacts radiative forcing and precipitation and is pivotal for understanding cloud–aerosol interactions. Current studies of this parameter at global scales with satellite observations are still challenging, especially because retrieval algorithms developed for passive sensors (i.e., MODerate Resolution Imaging Spectroradiometer (MODIS)/Aqua) have to rely on the assumption of cloud adiabatic growth. The active sensor component of the A-Train constellation (i.e., Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP)/CALIPSO) allows retrievals of CDNC from depolarization measurements at 532 nm. For such a case, the retrieval does not rely on the adiabatic assumption but instead must use a priori information on the effective radius (r e ), which can be obtained from other passive sensors. In this paper, r e values obtained from MODIS/Aqua and Polarization and Directionality of the Earth Reflectance (POLDER)/PARASOL (two passive sensors, components of the A-Train) are used to constrain CDNC retrievals from CALIOP. Intercomparison of CDNC products retrieved from the MODIS and CALIOP sensors is performed, and the impacts of cloud entrainment, drizzling, horizontal heterogeneity and effective radius are discussed. By analyzing the strengths and weaknesses of different retrieval techniques, this study aims to better understand the global CDNC distribution and eventually determine the cloud structure and atmospheric conditions in which clouds develop. The improved understanding of CDNC can contribute to future studies of global cloud–aerosol–precipitation interaction and the parameterization of clouds in global climate models (GCMs).
Introduction
Cloud droplet number concentration is one of the most important cloud microphysical properties, as it is intimately related to the cloud droplet size distribution, the chemical composition of cloud condensation nuclei (CCN), and the thermodynamical and dynamical state (i.e., updraft velocity, mixing rates) of the cloudy air during its formation (Seinfeld and Pandis, 1998). This property is directly linked to cloud evolution (i.e., water vapor condensation, droplet nucleation and drizzling processes), impacts cloud radiative properties and precipitation development, and is pivotal in cloud-aerosol interactions.
Better representations of clouds and cloud-aerosol-precipitation interactions would help process modeling and improve understanding of regional/global climate changes and daily weather forecasts. Many studies have identified that aerosol-cloud interactions constitute the largest source of uncertainty in estimating the radiative forcing of the earth-atmosphere system (Penner et al., 2011). For the same total cloud water content, increasing the number concentration of precursor aerosols may lead to a decrease in cloud effective radius and, therefore, to an increase in cloud albedo (i.e., the first aerosol indirect effect; Twomey, 1977). Recent research discusses whether or not the marine biosphere plays a non-negligible role in regulating cloud microphysical properties in a pristine oceanic atmosphere, and so far many studies have examined the biogenic influence on cloud microphysics (Charlson et al., 1987; Lana et al., 2012; Ayers and Cainey, 2007). Validation of relationships between cloud microphysics and marine biogenic aerosols that serve as CCN
can improve our understanding of the ocean-atmosphere interaction.
Until recently, studying cloud droplet number concentration (CDNC) on global scales has been challenging. Field measurements provide more accurate CDNC information, but their temporal and spatial coverage is limited. Satellites could provide broad sampling coverage of continuous observations; however, retrieval algorithms from passive sensors (e.g., MODerate Resolution Imaging Spectroradiometer (MODIS)/Aqua) suffer from important uncertainties because they rely heavily on assumptions regarding adiabatic or subadiabatic cloud growth. As a matter of fact, most clouds in the atmosphere do not grow adiabatically. Real clouds are predominantly subadiabatic because of the warm rain process and the droplet evaporation/breakup processes associated with cloud top entrainment (Pruppacher and Lee, 1976). Therefore, it is highly relevant to investigate and understand the retrieval bias due to cloud diabatic growth. The active sensor Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP)/CALIPSO, another A-Train member, permits retrievals of CDNC from the depolarization measurement at 532 nm. This technique has only a weak dependence upon the adiabatic assumption. On the other hand, the CDNC retrieval methodology requires a priori information on the r e , which cannot be derived from CALIOP. This information is retrieved independently from other sensors, such as MODIS/Aqua or POLDER/PARASOL. The CDNC retrieval accuracy, therefore, strongly depends on the accuracy of the r e retrieved from other sensors. This calls for a careful evaluation and intercomparison of CDNC data sets derived on a global scale from passive (MODIS) and active (CALIOP) sensors based on these different retrieval techniques.
In Sect. 2, the CALIOP/CALIPSO, MODIS/Aqua and POLDER3/PARASOL data are presented, and the algorithms, their theoretical basis and main characteristics are summarized. The comparison methodology of CDNC between CALIOP and MODIS and the corresponding results are given in Sect. 3. Impacts of cloud entrainment/drizzling, horizontal heterogeneity and r e are discussed in Sect. 4, and conclusions are drawn in Sect. 5.
Data and algorithms
In our study, collocated data (CALTRACK data from the ICARE Data and Service Centre; Zeng, 2011) from level-2 CALIOP/CALIPSO, MODIS/Aqua and POLDER3/PARASOL cloud products, extracted along the CALIOP track at 5 km horizontal resolution, are considered for the period from November 2007 to December 2008. Overcast water clouds are filtered for the study with a combination of CALIOP, MODIS and POLDER cloud products. We also remove thin clouds with an optical thickness of less than 5 as detected by MODIS, because those thin clouds have large uncertainties in the retrievals of cloud optical thickness and effective radius (Zhang et al., 2011). Hereafter, we provide a brief summary of the CDNC retrieval algorithms of CALIOP and MODIS and their theoretical basis, including their advantages and limitations.
CALIOP/CALIPSO
CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) is an active two-wavelength polarization-sensitive lidar with a horizontal resolution of 333 m and a vertical resolution of 30-60 m (Winker et al., 2003). The level-2 cloud layer products are derived at a horizontal resolution of 5 km. CALIOP uses the level-2 layer-integrated depolarization ratio (δ) and a collocated droplet effective radius (r e ; unit: µm) from passive sensors to retrieve the CDNC (N; unit: cm −3 ; see Eq. (1), corresponding to Eq. (9) in Hu et al., 2007). The retrieval sensitivity of the r e from the CALIOP extinction and layer-integrated depolarization ratio is very low; there is no r e product derived directly from CALIOP. Collocated r e values retrieved from MODIS (Nakajima and King, 1990; Platnick et al., 2003) or POLDER (Bréon and Doutriaux-Boucher, 2005) are used to constrain the CDNC retrieval of CALIOP. The method is based on the fact that the total extinction (β; unit: km −1 ) is a sum of the extinction of each single droplet (β s ; β s = 2πr e ^2). The total extinction coefficient of water clouds can be retrieved from δ and r e (β = r e ^(1/3) (1 + 135δ^2/(1 − δ^2)); see Eq. (3) in Hu et al., 2007), where δ is related to multiple scattering and r e determines the backward proportion of single scattering and absorption. The droplet number concentration is therefore the quotient of β and β s , as shown in Eq. (1). As real cloud droplets are not monodispersely distributed, the true droplet number concentration can be represented as the product of N (unit: cm −3 ) and a factor k (see Eq. 8 in Hu et al., 2007). k is the ratio of the cubed volume radius to the cubed effective radius and is assumed constant at 0.6438 in our formula by considering a gamma distribution of the droplets with an effective variance of the size distribution (v) equal to 0.13 for MODIS (k = (1 − v) × (1 − 2 × v); Hu et al., 2007). We used the r e from both passive sensors (Figs. 1-6 from MODIS and Fig. 7 from POLDER) for our calculations in the following. A sketch of this retrieval is given below.
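A minimal sketch of Eq. (1) follows. It transcribes the expressions as stated above (Hu et al., 2007); any unit-conversion constants folded into those expressions in the original reference are not reproduced here, so the output should be read as illustrative rather than quantitative:

```python
# Sketch of the CALIOP CDNC retrieval (Eq. 1), transcribed from the text.
import math

def cdnc_caliop(delta, r_e, v=0.13):
    """delta: layer-integrated depolarization ratio; r_e: effective radius (um)."""
    k = (1 - v) * (1 - 2 * v)                              # ~0.6438 for v = 0.13
    beta = r_e ** (1 / 3) * (1 + 135 * delta**2 / (1 - delta**2))  # total extinction
    beta_s = 2 * math.pi * r_e**2                          # single-droplet extinction
    return k * beta / beta_s                               # droplet number concentration

print(cdnc_caliop(delta=0.25, r_e=12.0))
```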
The main advantage of the CALIOP retrieval is that an adiabatic assumption is not required. The CDNC can be accurately retrieved if the layer-integrated depolarization ratio and the r e are accurate. Since collocated r e values retrieved from MODIS or POLDER are used to constrain the CDNC retrieval of CALIOP, the retrieval accuracy depends strongly on the correctness of the r e derived from these passive instruments. In addition, as the CALIOP signal can only probe the uppermost part of clouds, down to τ < 5 (Winker et al., 2009), the effective radius corresponding to this layer is needed to calculate the CDNC of this layer. If CDNC is vertically constant, the retrieval can represent the true value for the whole cloud. In reality, due to cloud top entrainment, CDNCs at the cloud top are smaller than those inside clouds, leading to a negative retrieval bias.
MODIS/Aqua
MODIS (MODerate Resolution Imaging Spectroradiometer) is a relatively high spatial resolution (1 km), wide spectral range (0.41-15 µm) imaging radiometer that provides global observations of atmospheric properties (Platnick et al., 2003). The level-2 cloud products are derived at a resolution of 1 km (for both cloud optical thickness and r e ) or 5 km. Inference of CDNC from MODIS uses both the cloud optical thickness and the droplet effective radius, which are obtained directly from a bispectral technique using bidirectional solar visible reflectance and near-infrared absorption (Nakajima and King, 1990). MODIS r e is retrieved from three bands in the near infrared (the 1.6 µm, 2.1 µm and 3.7 µm bands). The 3.7 µm retrieval is expected to represent the droplet size closest to the cloud top (Platnick, 2000) and to be the least sensitive to the 3-D radiative bias (Zhang and Platnick, 2011), and it is therefore the best choice for the CDNC calculation.
CDNC retrievals from Eq. (2) are valid under the assumption that clouds develop in adiabatic conditions, implying that the liquid water content (LWC, the sum of the single water droplet masses; LWC = N ad × (4/3)πr e ^3 ρ w , where ρ w (kg m −3 ) is the water density, r e (µm) is the effective radius and N ad (cm −3 ) is the droplet number concentration in clouds developed in adiabatic conditions) increases linearly with height above the cloud base (LWC = C w × H, where H (km) is the height above the cloud base and C w (kg m −4 ) is the moist adiabatic condensation rate; Bennartz, 2007). In an adiabatic cloud model, the cloud optical thickness (τ) is a function of N ad and H (τ = (3/5)π^(1/3) Q (3C w /(4ρ w ))^(2/3) (kN ad )^(1/3) H^(5/3), where Q is the extinction efficiency ≈ 2; see Eq. (5) in Bennartz, 2007). The three independent relationships above allow retrieving the three variables N ad , H and LWC. Therefore, N ad is a function of τ and r e , as shown in Eq. (2). The real N in clouds developed in adiabatic and diabatic conditions is the product of N ad and the degree of adiabaticity (f ad ). f ad is in the range 0 < f ad ≤ 1, and f ad = 1 means adiabatic. It is assumed to have a constant value of 0.8 in our calculation (Painemal and Zuidema, 2011).
Combining these three relationships yields Eq. (2), which can be written as N ad = (√5/(2πk)) × (C w τ/(Qρ w ))^(1/2) × r e ^(−5/2), a reconstruction consistent with the relations above and with the Bennartz (2007) formulation. C w used in Eq. (2) is defined by Grabowski (2007, see Appendix A5) and is a function of temperature (using the cloud top temperature from MODIS), pressure (using the cloud top pressure calculated from the CALIOP cloud top altitude) and water vapor saturation pressure (using a function of T defined by Linblom and Nordell (2006); see Eq. 8). As in Eq. (1), the true droplet number concentration is the product of N and k when a gamma distribution of the droplets is considered. Deviations from this hypothesis have been investigated through comparison with in situ observations (Painemal and Zuidema, 2011; Min et al., 2012). A sketch of this retrieval is given below.
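A minimal sketch of Eq. (2) follows, using the closed form reconstructed above (equivalent to Bennartz, 2007); the exact placement of k and f ad follows this reconstruction and should be checked against the original references:

```python
# Sketch of the MODIS adiabatic CDNC retrieval (Eq. 2).
import math

def cdnc_modis(tau, r_e, C_w, f_ad=0.8, Q=2.0, rho_w=1000.0, v=0.13):
    """tau: cloud optical thickness; r_e: effective radius (m);
    C_w: moist adiabatic condensation rate (kg m^-4); rho_w: water density.
    Returns the droplet number concentration in m^-3."""
    k = (1 - v) * (1 - 2 * v)  # gamma-distribution correction, ~0.6438 for v = 0.13
    n_ad = (math.sqrt(5) / (2 * math.pi * k)) * math.sqrt(C_w * tau / (Q * rho_w)) / r_e**2.5
    return f_ad * n_ad         # real N = N_ad times the degree of adiabaticity

# Example: tau = 10, r_e = 12 um, C_w ~ 2e-6 kg m^-4 (typical); convert to cm^-3.
print(cdnc_modis(10.0, 12e-6, 2e-6) * 1e-6, "cm^-3")
```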
The MODIS CDNC values (derived with f ad = 0.8) are quite close to in situ observations for stratocumulus over the Chile-Peru coast. This method suffers from the adiabatic assumption (i.e., what the real value of f ad is) and from uncertainties in both the τ and r e retrievals, e.g., biases due to 3-D radiative transfer and surface reflectance. Underestimation of cloud entrainment (a smaller f ad ) causes a positive bias in the MODIS retrieval; as mentioned above, the opposite is the case for the CALIOP retrieval. As cloud entrainment increases (f ad decreases), the positive bias due to the underestimation of cloud entrainment also increases. The droplet effective radius derived from MODIS tends to be larger than the true value, mostly because of neglecting cloud entrainment and horizontal photon transport (the 3-D radiative bias) within heterogeneous clouds (Zhang and Platnick, 2011). In addition to the MODIS r e , we also investigated CALIOP CDNC retrievals using the r e and the effective variance of the size distribution derived from POLDER3/PARASOL.
POLDER3/PARASOL
POLDER (Polarization and Directionality of the Earth Reflectance) is a multipolarization, multidirectional (16 directions) and multispectral (443-1020 nm) imaging radiometer with a native resolution of 6 km × 7 km that provides global and repetitive observations of the solar radiation and polarized radiance reflected by the earth-atmosphere system (Deschamps et al., 1994). From POLDER observations, the r e and the effective variance of the size distribution can be retrieved using the angular polarization signal near cloudbow directions (Bréon and Doutriaux-Boucher, 2005), which is very sensitive to the microphysical properties of droplets at the very top of clouds. Since the CALIOP signal also probes the uppermost part of clouds, the POLDER r e is relevant for use in combination with CALIOP measurements. The angular position of the maxima and minima in the polarized phase function is only sensitive to r e . This method is applicable only to homogeneous clouds with narrow size distributions, which are required to produce the significant polarization supernumerary bows on which the technique relies. Also, due to the angular sampling required to analyze the polarized phase function, the POLDER retrievals currently available are significantly coarser (about 200 km × 200 km) than the MODIS ones. These two factors, namely the high sensitivity to narrow size distributions and the large area required to perform retrievals, can potentially bias the r e retrieved from POLDER in a way that might have been underestimated before. In practice, for large areas within which the cloud optical thickness varies significantly, the average polarization signal will be an average of the polarized reflectances produced on smaller scales. Because polarized reflectance saturates rapidly compared to total radiance, the resulting polarization signal is not a radiatively weighted average of individual contributions but a simple mean. Therefore, thin clouds, which tend to contribute weaker signals in total radiance measurements, weigh as much as thicker clouds when it comes to polarized reflectance. In conclusion, although on rather small scales it is true that polarization is less subject to 3-D effects than total radiance, the fact remains that using polarized reflectance averaged over large areas can induce some nonintuitive biases. As a simple example, if we assume that the thinner parts of a cloud field have a smaller r e than the thicker parts, then the r e retrieved from polarization might be biased low compared to an r e retrieved from a bispectral technique, which is inferred from total radiance and corresponds to an r e that, to first order, is more weighted by the total cloud water content. This type of bias could be even more important in the case of a correlation between cloud optical thickness and droplet size distribution width, to which the polarization technique is very sensitive. Until higher-resolution polarization measurements or retrievals can be obtained, the POLDER r e retrievals shall not be considered free of potential biases, and the above considerations shall be kept in mind when trying to draw conclusions from the comparison of POLDER and MODIS r e and derived CDNC values. With that in mind, the POLDER r e values are found to be smaller than the MODIS ones, which calls for further understanding of these differences, as the selection of r e is quite critical for the accuracy of the CDNC retrieved from CALIOP.
Results
In this section, we will show the geographical distributions, the seasonal variations, and the observed relationship between the CALIOP and the MODIS CDNCs. Discussions of the different factors that impact the CDNC retrieval are provided in the next section.
Geographical distributions of CDNC and their differences
In Fig. 1, we present the geographical distributions of the CDNC derived from MODIS (a) and CALIOP (b) and their relative differences, calculated as the ratio of the CDNC differences (CALIOP minus MODIS) to the mean CDNC of the two sensors. In general, we see that the two sensors show similar geographical distributions of CDNC, with the MODIS values globally larger than the CALIOP ones (note that the bar scales are different). Higher droplet number concentrations are found over land, around continents over the ocean and in the storm tracks, which agrees with model simulations and observations of aerosols (Barahona et al., 2011; Moore et al., 2009, 2013; Vignati et al., 2010; Remer et al., 2008). However, over the open ocean, values are as low as fewer than 100 cm −3 for both sensors. Relative differences are smaller (close to −20 %) in those regions where homogeneous clouds and adiabatic conditions are known to occur, i.e., off the western coasts of continents and in the subsidence regimes of the storm tracks.
Seasonal variations of CDNC
In Fig. 1, we have illustrated that the MODIS and CALIOP CDNCs have similar geographical distributions. We also investigate whether they have similar seasonal variations. Figure 2 presents the geographical distributions of the correlation coefficients (a) and the slopes (b) of the linear relationships between the monthly MODIS and CALIOP CDNCs, and the seasonal variations of the MODIS (dashed line) and CALIOP (solid line) CDNCs for four specific regions (c, d, e and f). The correlation coefficients and the slopes are calculated from linear relationships (the CALIOP CDNC as a function of the MODIS one) of the monthly mean CDNCs of MODIS and CALIOP (12 months counted). The seasonal variation is represented as the ratio of the differences between the monthly and annual mean values to the annual mean value. In Fig. 2a, we see that the MODIS and CALIOP CDNCs have similar seasonal variations over the whole globe, with correlation coefficients superior to 0.5, in particular for the regions to the west of continents, where the correlation coefficients are as high as more than 0.9. In Fig. 2b, we see that the slopes are as high as about 0.7 in the regions to the west of continents, which means the CALIOP CDNC is about 0.7 of the MODIS one, but values are quite low in the other regions. Seasonal cycles of CDNC show similar trends between CALIOP and MODIS for the different regions (Fig. 2c, d, e and f), though they differ from region to region: to the east of China, CDNCs are higher in winter and lower in summer; to the west of California, CDNCs are higher in spring and autumn and lower in winter; to the west of Peru, CDNCs are higher in January, May and September; and to the west of Namibia, CDNCs are higher in April and July. The underlying reasons for the CDNC seasonal variations, which are related to seasonal changes of different CCN sources, will be examined in a future study; this is not the objective of this paper and will not be discussed.
Relationship between MODIS and CALIOP CDNC
In Fig. 3, we present the two-dimensional relationships between the MODIS and the CALIOP CDNCs over ocean (a) and over land (b). Both linear relationships are significant (p < 0.001) according to Student's t test. From Fig. 3a, we clearly see that the CALIOP and MODIS CDNCs are strongly correlated over ocean, with a correlation coefficient as high as 0.75. The CALIOP CDNC is on average about three fourths (0.75) of the MODIS one. This supports the relationships shown in Figs. 1 and 2, which indicate that, despite using very different retrieval techniques, the CDNCs derived from the two sensors are similar to a certain degree, with the MODIS values larger than the CALIOP ones. Over land, the slope (0.49) and correlation coefficient (0.53) of the two CDNCs are worse than over ocean. This may be due to fewer samples and larger uncertainties in the retrievals of r e and τ. Over land, uncertainties from the surface reflectance dominate the errors for thinner and broken clouds (Platnick and Valero, 1995). Overall, it is still hard to determine at this stage which sensor represents the most accurate CDNC values; however, their spatial and seasonal distributions can at least be observed consistently from both data sets. This allows us to quantitatively determine the regions of highest CDNC compared to others.
Discussion
As was mentioned in Sect. 2, the accuracy of the CDNC retrieval depends on the accuracy of the derived r e for CALIOP and on the degree of adiabaticity of the atmosphere for MODIS. In this section, we discuss the possible impacts.
Impact of cloud entrainment and drizzling
Clouds in the atmosphere are predominantly subadiabatic for at least two reasons. First, entrainment of unsaturated environmental air into clouds dilutes and evaporates the droplets, leading to a decrease of r e and CDNC at the cloud top. Second, warm rain processes, such as drizzling, also produce a decrease of r e and CDNC at the cloud top. As mentioned in Sect. 2, the subadiabaticity can impact the retrieval of the MODIS CDNC via the degree of adiabaticity f ad , and the real value would be smaller than the retrieval if f ad were inferior to 0.8. The stronger the cloud entrainment (smaller f ad ), the larger the retrievals are compared to the real values. Furthermore, since CDNC is not vertically constant in clouds, also because of the entrainment, the CALIOP retrieval may not be representative of the whole cloud and is always smaller than the true value for the whole cloud. Overall, cloud adiabaticity has an impact on both the r e and the CDNC and biases the values at the cloud top in the same direction.
In Fig. 4a, we present the geographical distribution of the relative r e differences between the 3.7 µm and the 2.1 µm bands from MODIS, calculated as the ratio of the r e differences (r e,3.7 − r e,2.1 ) to the mean r e of the two bands. In theory, r e,3.7 corresponds more closely to the effective radius at the cloud top than does r e,2.1 (Platnick, 2000; Zhang and Platnick, 2011), and its value should be larger than r e,2.1 according to the classic adiabatic growth model (Brenguier et al., 2000). From Fig. 4a, we find that in some well-known adiabatic and homogeneous cloud regions (i.e., in the storm tracks and to the west of continents), the differences between r e,3.7 and r e,2.1 are close to zero or slightly positive, while in other places the differences are negative. Comparing Fig. 4a to Fig. 1c, it is clear that the CDNC differences and the r e differences between the 2.1 µm and 3.7 µm bands show similar geographic distributions. In the storm tracks and to the west of continents, both differences are small, partially due to a smaller subadiabatic bias (f ad close to 0.8). Under extreme subadiabatic conditions in the other regions, where f ad is inferior to 0.8, the MODIS retrieval calculated with f ad equal to 0.8 is therefore larger than the real value (N = N ad × f ad ; f ad < 0.8). However, for CALIOP, the CDNC retrieval does not depend on the adiabatic assumption, but CDNCs at the cloud top are smaller than those inside clouds. The differences between CALIOP and MODIS can therefore to a certain extent indicate the degree of adiabaticity: larger differences appear when subadiabaticity tends to be important, while smaller differences appear when adiabatic conditions prevail.
In Fig. 4b, we show the two-dimensional histogram of the pixel number as a function of the relative CDNC differences and the relative r e differences between the 3.7 µm and 2.1 µm bands. We selected overcast clouds over ocean for our analysis because of the smaller uncertainties in the retrievals of τ and r e (Wolters et al., 2010; Zeng et al., 2012). From this figure, we see that most of the CDNC differences decrease when the r e differences decrease. The linear relationship is significant (p < 0.001) according to Student's t test, with a correlation coefficient of 0.53. This means that the more important the subadiabaticity, the larger the negative differences between r e,3.7 and r e,2.1 , and the larger the negative differences between the CALIOP and MODIS CDNCs (f ad smaller than 0.8). The CALIOP and MODIS CDNCs are quasi-equal when r e,3.7 is much larger than r e,2.1 , most likely corresponding to cases of adiabatic conditions.
For further verification of the adiabatic effect on the CDNC retrieval, in particular for cases of drizzling, we plot in Fig. 5 the two-dimensional histograms of the relative CDNC differences against r e,2.1 (a) and the relative r e differences against r e,2.1 (b). The histogram is normalized for each r e bin. According to Nakajima et al. (2010a, b), with collocated CloudSat observations, clouds with MODIS r e,2.1 > 15 µm are often found to be associated with drizzle. From Fig. 5, we see that both the relative CDNC differences and the relative r e differences are important when r e,2.1 is superior to 15 µm. This suggests that the increasing differences between r e,2.1 and r e,3.7 and between the MODIS and CALIOP CDNCs with r e are a result of the increasing drizzle probability with increasing r e,2.1 . Large droplets linked to drizzling and subadiabatic conditions could lead to more important differences between the CALIOP and MODIS CDNCs (about 0.3 of bias) compared to the differences between r e,3.7 and r e,2.1 (about 0.2 of bias).
Impact of 3-D radiative effect due to horizontal heterogeneity
A recent study by Zhang and Platnick (2011) demonstrated that the r e,3.7 and r e,2.1 differences are not only a result of cloud entrainment and drizzling, but are also, to a large extent, attributable to horizontal photon transport, namely the 3-D radiative bias caused by the plane-parallel cloud assumption in the retrieval (Zhang and Platnick, 2011; Di Girolamo, 2013). Cloud heterogeneity has a more important impact on the retrieval of r e,2.1 than on r e,3.7 . It has been found that the r e derived from MODIS is slightly larger than the in situ measurements (Painemal and Zuidema, 2011). Compared to 3-D radiative transfer models, the r e bias is about 4-6 µm across the globe, with different biases for different clouds, i.e., 1-2 µm for less heterogeneous marine stratiform clouds and 7-12 µm for more heterogeneous marine cumuliform clouds (Di Girolamo, 2013).
Conclusions
Cloud droplet number concentration is one of the key parameters of cloud microphysics. In this paper we examined its geographical distributions and seasonal variations with the MODIS and CALIOP observations. Although these two sensors use quite different techniques to retrieve CDNC, they show similar geographical distributions and seasonal variations. The CALIOP CDNCs are globally smaller than the corresponding MODIS retrievals, being about 0.75 of the MODIS values. The correlation between the two is as high as 0.75 over ocean. We discussed the possible differences arising from the impacts of cloud entrainment/drizzling, the 3-D radiative effect due to cloud horizontal heterogeneity, and the selection of the r e . As the degree of subadiabaticity increases, r e,3.7 becomes smaller than r e,2.1 , and the MODIS-retrieved CDNC is larger than the CALIOP one. As cloud heterogeneity increases, the retrieved CDNC differences between CALIOP and MODIS become important. CALIOP has advantages in calculating the CDNC at the cloud top in subadiabatic systems, but its accuracy is highly controlled by the accuracy of the r e assumption used in the algorithm. Furthermore, the CALIOP retrieval, which is the mean value at the cloud top, may also not represent the mean value in clouds in subadiabatic systems. Using the POLDER effective radius, the retrieved CDNCs are much larger than when using MODIS r e,3.7 . More accurate CDNC values from CALIOP would, in combination with MODIS, allow the study of important cloud processes such as cloud entrainment. This calls for the development of better r e retrievals and an improved description of r e vertical profiles. Finally, the preliminary work reported here indicates areas for future studies of cloud-aerosol interactions on a global scale, especially the impacts of marine biogenic aerosol on cloud microphysics.
Figure 2. Geographical distributions of the correlation coefficients (a) and the slopes (b) between monthly MODIS and CALIOP CDNCs, and seasonal variations of MODIS (dashed line) and CALIOP (solid line) CDNCs for a year from December 2007 to November 2008 and for four different regions: east of China (c; 20-40° N and 120-140° E), west of California (d; 20-40° N and 110-130° W), west of Peru (e; 10-30° S and 70-90° W) and west of Namibia (f; 10-30° S and 5° W-15° E).
Figure 3. Relationship between the MODIS and CALIOP CDNCs over ocean (a) and over land (b). The dashed line is the x = y line (x: MODIS CDNC; y: CALIOP CDNC), and the solid line is the CDNC linear regression line. The color bar represents the logarithm of the pixel number. "Num" represents the pixel number and "R" is the correlation coefficient of the linear relationship of the two CDNCs.
Figure 4. Geographical distribution of the relative difference of the effective radius between the 3.7 µm and 2.1 µm bands (a) and two-dimensional histogram between the relative CDNC differences and the relative effective radius difference between the 3.7 µm and 2.1 µm bands (b). The color bar represents the normalized pixel number of each bin of effective radius difference. The solid straight line represents the linear relationship, and the solid circle lines are isolines of the pixel number.
Figure 5. Two-dimensional histograms between the relative CDNC differences and the effective radius at the 2.1 µm band (a) and between the relative effective radius differences and the effective radius at the 2.1 µm band (b). The color bar represents the normalized pixel number of each effective radius bin. The solid circle lines are isolines of the pixel number.
|
2018-06-26T08:19:24.421Z
|
2014-07-16T00:00:00.000
|
{
"year": 2014,
"sha1": "69535ca5adc606fc029db2c3e3a196a0025852de",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/14/7125/2014/acp-14-7125-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "69535ca5adc606fc029db2c3e3a196a0025852de",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
6447986
|
pes2o/s2orc
|
v3-fos-license
|
Expression and Functional Study of Extracellular BMP Antagonists during the Morphogenesis of the Digits and Their Associated Connective Tissues
The purpose of this study is to gain insight into the role of BMP signaling in the diversification of the embryonic limb mesodermal progenitors destined to form cartilage, joints, and tendons. Given the importance of extracellular BMP modulators in in vivo systems, we performed a systematic search of those expressed in the developing autopod during the formation of the digits. Here, we monitored the expression of extracellular BMP modulators including: Noggin, Chordin, Chordin-like 1, Chordin-like 2, Twisted gastrulation, Dan, BMPER, Sost, Sostdc1, Follistatin, Follistatin-like 1, Follistatin-like 5 and Tolloid. These factors show differential expression domains in cartilage, joints and tendons. Furthermore, they are induced in specific temporal patterns during the formation of an ectopic extra digit, preceding the appearance of changes that are identifiable by conventional histology. The analysis of gene regulation, cell proliferation and cell death that are induced by these factors in high density cultures of digit progenitors provides evidence of functional specialization in the control of mesodermal differentiation but not in cell proliferation or apoptosis. We further show that the expression of these factors is differentially controlled by the distinct signaling pathways acting in the developing limb at the stages covered by this study. In addition, our results provide evidence suggesting that TWISTED GASTRULATION cooperates with CHORDINS, BMPER, and NOGGIN in the establishment of tendons or cartilage in a fashion that is dependent on the presence or absence of TOLLOID.
Introduction
In living organisms chondrogenesis occurs in the context of complex morphogenetic processes associated with the formation of other connective tissues [1]. The formation of digits in the developing vertebrate limb illustrates this phenomenon. In the embryonic limb autopod, mesodermal cells that share a unique origin from the lateral mesoderm form phalangeal cartilages and the associated perichondrium, interphalangeal joints and tendons. Hence, in terms of tissue differentiation, the formation of a digit includes the following: the formation and subsequent differentiation of prechondrogenic condensations; the differentiation of the perichondrium; the formation and subsequent differentiation of tendon blastemas, including the establishment of the entheses (i.e., the zone where the tendon attaches to the bone primordia); and the formation of joints (hyaline articular cartilage, synovium, and fibrous capsule). Different BMP genes exhibit regulated expression patterns in the undifferentiated and interdigital mesoderm, joints and tendon blastemas [2,3]. There is compelling evidence that BMP signaling plays a key role in each of the aforementioned morphogenetic events [4][5][6][7][8][9][10][11]. However, the mechanism by which BMPs function to establish divergent cell fates during developmental processes such as chondrogenesis, joint differentiation or tenogenesis while using the same population of cell progenitors remains to be clarified [5]. In avian limbs, the overexpression of BMPs results in a dramatic increase in chondrogenesis [12][13][14], and loss-of-function experiments cause severe skeletal truncations [14]. In mammals, the alterations observed in naturally occurring or experimentally induced genetic mutations of members of the BMP signaling pathway confirmed its role in skeletogenesis [4,5,7,[15][16][17][18][19][20]. However, digit phenotypes are not very informative, which is most likely due to the functional redundancy of these molecules [16,21].
In the canonical signaling pathway, active BMPs are released into the extracellular space and subsequently bind to transmembrane type I and type II serine-threonine kinase receptors, triggering an intracellular cascade that results in the phosphorylation and nuclear translocation of Smad 1/5/8 proteins, which, in conjunction with Smad 4 and other transcription factors, regulate target gene expression. In addition, BMPs activate non-Smad pathways that involve signaling via mitogen-activated protein kinases (MAPK; see [22]). Taking into account that the signaling cascade activated by the different BMPs is constant, it is believed that the differences in the response of BMP target cells may reside largely in the intensity of the signal. Hence, gradients of BMP signaling play a key role during vertebrate development to establish the dorso-ventral axis of the gastrula and, most likely, function in other embryonic models [23,24]. The morphogenetic gradient relies largely on the functional interactions between BMPs and secreted BMP-binding molecules, often called ''extracellular BMP antagonists'' (see [25,26]).
Several BMP antagonists are expressed at advanced stages of limb skeletogenesis [27][28][29]. In the course of digit development, Noggin [14], Chordin [30], Chordin-like 1 [31,32], BMPER [33], Sost [34], Sostdc1 [35,36], Dan [37,38], Follistatin [39,40], and Follistatin-like 1 [41] have been detected; however, except for Noggin [4] and Follistatin-like 1 [42], mice that are mutant for these factors lack a digit phenotype. This lack of a phenotype is indicative of intense functional redundancy. Therefore, an appropriate understanding of the role of BMP antagonists in digit morphogenesis requires a comprehensive analysis of the BMP modulators that are expressed in the course of digit formation. The goal of this study was to analyze the involvement of extracellular modulators of BMP signaling during the early differentiation of the structural components of the embryonic digits. In an initial systematic gene expression study, we identified 13 different BMP antagonists with dynamic expression patterns associated with the differentiation of the phalanges, interphalangeal joints and tendons. The role of these factors and their regulation by the major signaling pathways involved in limb morphogenesis was next explored through gain-of-function experiments in high-density cultures of digit mesodermal progenitors. Our findings provide new insights that clarify the role of BMPs in the divergent differentiation of connective tissue progenitors into cartilage, tendon, and joint tissues, which is a process of major importance in regenerative medicine of the locomotor apparatus.
Materials and Methods
In this work, we employed Rhode Island chicken embryos from day 4.5 to day 8 of incubation (id), equivalent to stages 24 to 32 HH. This study was approved by the Cantabria University Institutional Laboratory Animal Care and Use Committee and carried out in accordance with the Declaration of Helsinki and the European Communities Council Directive (86/609/EEC).
In situ Hybridization
In situ hybridization was performed on 100 µm vibratome-sectioned specimens. Samples (a minimum of 5 sectioned limbs of each stage) were treated with 10 µg/ml of proteinase K for 20 minutes at 20°C. Hybridization with digoxigenin-labeled antisense RNA probes was performed at 68°C. An alkaline phosphatase-conjugated anti-digoxigenin antibody (dilution 1:2000; Roche) was used. Reactions were developed with precipitating BM Purple AP Substrate (Roche). The expression pattern of the analyzed genes was constant, with no appreciable variability among the samples.
The above-mentioned BMP antagonists have been identified in the genome of most of the analyzed vertebrates (Homo sapiens, Pan troglodytes, Macaca mulatta, Mus musculus, Rattus novergicus, Gallus gallus, Xenopus laevis), except for the zebrafish (Danio rerio), where there is only one Chordin-like gene [43], which is thought to represent the Chordin-like 1 and Chordin-like 2 of mammals.
Phospho Smad 1/5/8 and p-c-Jun Immunolabeling
Limb buds between 6 and 8 days of incubation were fixed in 4% PFA overnight at 4°C, washed in PBS and sectioned with a vibratome. Sections were incubated overnight at 4°C with the primary antibody. Specimens were next washed in PBS, incubated overnight in the secondary antibody, washed for 2 h in TBS, dehydrated, cleared and examined with a confocal microscope (LEICA LSM 510). Polyclonal antibodies against phospho-SMAD1/SMAD5/SMAD8 (Ser463/465; Cell Signaling) and p-c-Jun (Sc-822, Santa Cruz Biotechnology) were employed. For double labeling purposes, we employed actin staining with 1% phalloidin-TRITC (Sigma).
Experimental Induction of Ectopic Digits
In vivo analysis of gene regulation preceding the formation of an ectopic digit was performed on samples of interdigital tissue 10, 14, and 20 hr after implantation, at 5.5 id, of heparin beads (Sigma) incubated for 1 hr in 2 mgr/ml TGFβ1 (R&D Systems). This treatment leads to the formation of ectopic digits detectable by alcian blue staining 20 hr or later after bead implantation [44]. The contralateral left limb, or limbs treated with beads incubated in PBS, were employed as controls.
Micromass Mesodermal Cultures
Progenitor mesodermal cells of the digit tissues were obtained from the progress zone region located under the apical ectodermal ridge of chick leg buds of embryos at 4.5 id (25 HH). Cells were dissociated and suspended in DMEM (Dulbecco's modified Eagle's medium) with 10% fetal bovine serum, 100 units/ml penicillin and 100 µg/ml streptomycin. Cultures were made by pipetting 10-µl drops of cell suspension at a density of 2.0 × 10^7 cells/ml into each well of a 24-well plate. The cells were left to attach for 2 hr, and then 200 µl of serum-free medium was added. In gene overexpression experiments (see below), cultures were performed with DMEM medium containing 10% fetal bovine serum and 50 mgr/ml of ascorbic acid. We employed these cultures to analyze the effects of adding BMP modulators on gene regulation, cell proliferation and cell death, and to study the regulation of BMP modulators by the major signaling pathways acting in the autopod.
The effects of BMP modulators and BMP2 were analyzed by adding recombinant protein to the medium of 24 hr cultures. Treatments were maintained for another 24 hr period. After testing different protein concentrations, we selected the following: human recombinant BMP2, 200 ngr/ml (Peprotech); human recombinant NOGGIN, 200 ngr/ml (R&D Systems); human recombinant CHDL-1, 2400 ngr/ml (R&D Systems); mouse recombinant CHDL-2, 1200 ngr/ml (R&D Systems); human recombinant TSG, 1000 ngr/ml (R&D Systems); mouse recombinant DAN, 3000 ngr/ml (R&D Systems); Follistatin, 800 ngr/ml (Peprotech). After these treatments, we analyzed by Q-PCR the changes in the expression of cartilage markers (Sox9, type 2 Collagen, and Bmpr1b), fibrogenic markers (Scleraxis, type 1 Collagen and Tgfb2), and joint markers (Activin βA, Gdf5, and Jaws). The selected genes are well-known markers of the corresponding morpho-developmental processes. Only Jaws has not often been used as a joint marker, but it has been shown to be essential for the formation of the interphalangeal joints [45].
Cell Transfections
Gain-of-function experiments for Tsg and BMPER were performed with overexpression constructs containing the mouse coding sequences. We employed "Addgene plasmid 25778" for Tsg and "Addgene plasmid 25776" for BMPER (both made by Dr. Edward De Robertis). For Fstl-1 overexpression we used a construct based on the coding sequence of the human gene cloned into the pCMV6-XL5 vector (Origene, MD, USA). For Dan overexpression we used a construct based on the coding sequence of the mouse gene cloned into the pCMV6-ENTRY vector (Origene, MD, USA). Control samples were transfected with empty plasmids. Limb mesodermal cells were electroporated employing the Multiporator System (Eppendorf) and cultured in high-density conditions as indicated above. After 48 hr of culture, the level of gene overexpression and the expression of cartilage, joint, and tendon markers were evaluated by Q-PCR.
Flow Cytometry
Cell proliferation and cell death were deduced from the measurement of DNA content by flow cytometry in control micromasses and in micromasses treated with CHDL-1, TSG, or both CHDL-1 and TSG. For this purpose, cultures were dissociated to the single-cell level by treatment with Trypsin-EDTA (Lonza). One million cells (5 micromasses) were used in each test. For propidium iodide (PI) staining, the cells were washed twice in PBS and centrifuged at 405 g for 5 min at 4°C. The samples were then incubated overnight at 4°C with 0.1% sodium citrate, 0.01% Triton X-100 and 0.1 mg/ml PI. The cell suspension was subjected to flow cytometry analysis in a Becton Dickinson FACSCanto cytometer and analyzed with CellQuest software. This technique allows the titration of apoptotic (hypodiploid) and proliferating (hyperdiploid) cells according to their DNA content, deduced from PI staining [46].
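To make the DNA-content readout concrete, the following minimal Python sketch classifies PI-stained events into hypodiploid (apoptotic) and hyperdiploid (proliferating) fractions. The ±20% gating thresholds around the G0/G1 peak and the toy event distribution are illustrative assumptions, not values taken from the protocol above.

```python
import numpy as np

def classify_dna_content(pi_intensity, g1_peak):
    """Classify cells by DNA content from propidium iodide (PI) intensity.

    Events below the G0/G1 peak (hypodiploid) are scored as apoptotic;
    events above it (hyperdiploid, i.e. S/G2/M) as proliferating.
    The +/-20% gates are illustrative, not from the paper.
    """
    lo, hi = 0.8 * g1_peak, 1.2 * g1_peak
    apoptotic = np.mean(pi_intensity < lo)
    proliferating = np.mean(pi_intensity > hi)
    return apoptotic, proliferating

# Toy data: 1e6 events with a dominant G0/G1 peak at arbitrary intensity 100
rng = np.random.default_rng(0)
events = np.concatenate([
    rng.normal(100, 8, 850_000),   # G0/G1
    rng.normal(200, 12, 100_000),  # G2/M (hyperdiploid)
    rng.normal(55, 10, 50_000),    # sub-G1 (hypodiploid, apoptotic)
])
apo, prol = classify_dna_content(events, g1_peak=100)
print(f"apoptotic: {apo:.1%}, proliferating: {prol:.1%}")
```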
Real time Quantitative PCR (Q-PCR) for Gene Expression Analysis
In each experiment, total RNA was extracted and cleaned from specimens using the RNeasy Mini Kit (Qiagen). RNA samples were quantified using a spectrophotometer (NanoDrop Technologies ND-1000). First-strand cDNA was synthesized by RT-PCR using random hexamers and M-MuLV reverse transcriptase (Fermentas). The cDNA concentration was measured in a spectrophotometer (NanoDrop Technologies ND-1000) and adjusted to 0.5 mg/ml. Q-PCR was performed using the Mx3005P system (Agilent) with automation attachment. In this work we used SYBR Green (Agilent)-based Q-PCR. Gapdh showed no significant variation in expression across the sample set and was therefore chosen as the normalizer in our experiments. Mean values of fold change were calculated for each gene. Each value in this work represents the mean ± SEM of at least three independent samples obtained under the same conditions. Samples consisted of 4 micromass cultures or 15 interdigital spaces. Data were analyzed using one-way analysis of variance followed by Bonferroni tests for post-hoc comparisons or, for gene expression levels in overexpression experiments, Student's t test. Statistical significance was set at p < 0.05. All analyses were done using SPSS for Windows version 18.0. Primers for Q-PCR are listed in Supplementary Table 1.
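As a sketch of the relative-quantification step, the snippet below implements the standard 2^-ΔΔCt calculation with Gapdh as the normalizer, followed by a simple significance test. The Ct values are hypothetical, and the paper's actual statistics (one-way ANOVA with Bonferroni post-hoc tests in SPSS) are not reproduced here.

```python
import numpy as np
from scipy import stats

def fold_change_ddct(ct_gene_ctrl, ct_gapdh_ctrl, ct_gene_treat, ct_gapdh_treat):
    """Relative expression by the 2^-ddCt method, normalized to Gapdh."""
    dct_ctrl = np.asarray(ct_gene_ctrl) - np.asarray(ct_gapdh_ctrl)
    dct_treat = np.asarray(ct_gene_treat) - np.asarray(ct_gapdh_treat)
    ddct = dct_treat - dct_ctrl.mean()
    return 2.0 ** (-ddct)  # per-replicate fold change vs. mean control

# Three independent replicates per condition (hypothetical Ct values)
fc = fold_change_ddct([24.1, 24.3, 23.9], [18.0, 18.2, 17.9],
                      [22.0, 22.4, 21.8], [18.1, 18.0, 18.2])
print(f"fold change = {fc.mean():.2f} ± {stats.sem(fc):.2f} (mean ± SEM)")
t, p = stats.ttest_1samp(np.log2(fc), 0.0)  # H0: fold change = 1
print(f"Student's t test: p = {p:.3f}")
```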
Extracellular BMP Modulators
In an initial PCR expression screening, we identified 13 different extracellular modulators of BMP signaling that were expressed at high levels in the mesodermal tissues of the developing chick autopod at day 6 of incubation, including: Noggin, Chordin (Chd), Chordin-like 1 (Chdl-1; Ventroptin), Chordin-like 2 (Chdl-2), Twisted gastrulation (Tsg; Twsg1), DAN (differential screening-selected gene aberrative in neuroblastoma), BMPER (BMP-binding endothelial regulator; Crossveinless 2), Sost (Sclerostin), Sostdc1 (Sclerostin domain containing-1; Uterine sensitization-associated gene-1; Wise; Ectodin), Follistatin (Fst), Follistatin-like 1 (Fstl-1; Flick), Follistatin-like 5 (Fstl-5), and Tolloid (Tll1; Colloid; Tolloid-like 1). Although Tolloid is not a BMP antagonist but rather a protease, it was included here because it is responsible for releasing the BMP ligands bound to BMP antagonists into the target tissues [26]. Although the presence of some of these factors was known from previous studies, we next performed a systematic study of all of them using in situ hybridization, to obtain a complete picture of their spatial distribution within the developing autopod (Fig. 1). We excluded the members of the CCN family of secreted factors and the members of the HtrA serine proteases from the study, because they have been analyzed in detail elsewhere [44,47].
Noggin and Chdl-1 are prominently expressed in the central region of the differentiating phalanges, excluding the peripheral subperichondrial region and the zones of joint formation (Fig. 1A and C). Noggin is also expressed in the proximal region of the tendons at advanced stages of differentiation (Fig. 2A).
Chdl-2 is expressed in the hyaline articular cartilage of embryonic mouse joints and in adult human osteoarthritic joints [29]. Here we found a wide and well-defined expression domain for this BMP antagonist in the digit blastemas that precedes the appearance of the interphalangeal joints (Fig. 1D). Expression of Chdl-2 is also noted in the diaphysis of the metatarsals, preceding the ossification of hypertrophic cartilage that forms a collar under the perichondrium (Fig. 2E).
At the beginning of digit formation, Chd is expressed at low levels in the digit rays (Fig. 1B), with a slight intensification in the zones of joint formation. Small Chd expression domains are also prominent in the zone of the tendons located close to the foot muscles (Fig. 2C). Chd transcripts are also present in the diaphysis of the digit cartilages preceding the ossification of hypertrophic cartilage forming a collar under the perichondrium (Fig. 2C).
Tsg is expressed in zones of cartilage and tendon differentiation (Fig. 1E and I). In the chondrogenic regions, Tsg forms a tenuous expression domain in the cartilage subjacent to the perichondrium of the digit rays. This peripheral digit expression domain is intensified in the digit tip, marking the zone of recruitment of cartilage progenitors previously termed the "digit crescent" [48], and also in the developing joints. Additionally, its expression is remarkable in the subectodermal mesenchyme, with zones of high intensity marking the tendon blastemas.
Dan forms a continuous expression domain in the mesenchyme subjacent to the dorsal and ventral ectoderm with zones of increased expression marking the tendon blastemas ( Fig. 1 F and J). BMPER is expressed in small but intense domains located in the most proximal zone of the interdigits and along the lateral margins of the extensor and flexor tendons (Fig. 1, G and K). It is also strongly expressed in the lateral margins of the autopod at the borderline with the zeugopod. BMPER is also expressed in the diaphysis preceding the ossification of hypertrophic cartilage forming a collar under the perichondrium (Fig. 2D).
Sost expression has been studied in early stages of limb development [34]. Here we show that at advanced stages of digit development, Sost is expressed in the maturing tendons (Fig. 1H) and in the subperichondral region of the diaphysis, which is undergoing hypertrophic differentiation (Fig. 2F).
Sostdc1 is expressed mainly in the ectoderm with the highest transcript levels in the interdigit region ( Fig. 1 L). Transcripts are also observed in the mesenchymal peridigital tissue in a fashion resembling that of BMPER (Fig. 1L).
Tll1 is expressed in the contour of the immature phalanges, including the early developing joints and the tip of the digit (Fig. 1, M and Q). In this distal digit region, the transcripts form a cap encompassing the condensing mesenchyme. Tll1 transcripts are also present in the mesoderm subjacent to the dorsal and ventral ectoderm of the interdigital and digital regions, excluding the zone of tendon formation (Fig. 1Q). However, in the proximal regions where tendon maturation is advanced, including the zone of the myotendinous junction, Tll1 transcripts are also detected in the tendon tissue (Fig. 2G).
Fst shows restricted expression domains in the tendon blastemas (Fig. 1, N and R). Fstl-1 is highly expressed in the interdigital mesoderm, with additional domains in the developing joints (Fig. 1, O and S), while Fstl-5 is expressed at low levels in the core of the differentiating digit cartilages (Fig. 1, P and T). However, in proximal regions where tendon maturation is advanced, Fstl-5 transcripts are also detected in the tendon tissue and in the subperichondral region of the diaphysis undergoing hypertrophic differentiation (Fig. 2B).

The zones of active BMP signaling were monitored by immunolabeling for BMP effectors (Fig. 3). Active BMP-signaling domains, marked by phospho-Smad 1/5/8, are present in the interdigital mesenchyme and in the core of the developing cartilages, with greater intensity in the tip of the growing digit (Fig. 3, A and D). The perichondrium shows poor labeling, and the interface between the cartilage and the perichondrium is relatively devoid of signaling (Fig. 3B). In the course of digit differentiation, the labeling in the cartilage is reduced, while it increases in the outer layer of the perichondrium (Fig. 3C). The tendon-formation zones are almost negative, but are flanked on both sides by zones of intense signal in a pattern closely paralleling the expression of BMPER, including stronger expression in the dorsal than in the ventral regions of the autopod (compare Fig. 1K and Fig. 3A). However, a weak positive stain is observed in the tendons at advanced maturation stages and in the zone of the myotendinous junction (Fig. 3C). As expected from previous studies [22], the zones of joint formation appear as bands of low phospho-Smad 1/5/8 positivity (Fig. 3D), but exhibit intense labeling for phospho-c-Jun, which is activated by the JNK MAP kinase (Fig. 3E).
As shown in Figure 3 (F-M), all the structures of the autopod exhibited transcripts of at least one of the four type 1 receptors of this signaling pathway, including Bmpr1b (Alk6), Bmpr1a (Alk3), Alk2 (ACVR1), and Alk1 (ACVRL1; see [49]), supporting the function of BMP antagonists in the establishment of the zones of active signaling.
The Expression Sequence of Extracellular BMP Modulators during the Formation of an Ectopic Digit (Table 1)
To gain insight into the significance of BMP modulators during digit morphogenesis, we monitored the temporal sequence of activation of BMP antagonist genes when an ectopic digit is induced in the interdigital regions by implantation of a TGFβ bead [32]. An analysis of expression was performed using Q-PCR at 10, 14 and 20 hr after interdigital implantation of a TGFβ bead (Table 1). We selected these time points because 10 hr marks the stage at which genes encoding the cartilage matrix become upregulated, and 20 hr corresponds to the period when the extra digit condensation becomes identifiable by specific histological dyes (i.e., Alcian blue; [44]). At this period, the connective tissues located around the ectopic cartilage, including the perichondrium and the pretendinous aggregates, initiate differentiation. Chdl-1, Chdl-2 and Fst were upregulated at 10 hr after bead implantation, coincident with the upregulation of the most precocious extracellular matrix markers of cartilage differentiation [44]. By 14 hr after bead implantation, Noggin, BMPER, and Tll1 were upregulated. Dan was the last of the examined BMP antagonists to be upregulated during the formation of the ectopic digit (20 hr after bead implantation), consistent with a function in the differentiation of tendons. In the period covered by our study, Tsg appeared moderately upregulated from 10 hr onward, but without reaching statistical significance. Fstl-5 was transiently downregulated at 10 hr after bead implantation. There were no changes in the expression of Fstl-1, Sost and Sostdc1. Transcripts of these three genes are present in the interdigital mesoderm from the beginning of the experiment, and the formation of an ectopic digit may not generate detectable changes in their levels of expression.
Gene Regulation after Administration of Extracellular BMP Modulators
The structure and functional properties of the BMP modulators are not uniform, and their antagonism of BMP signaling may vary owing to differential affinity for distinct ligands. Hence, to gain insight into their roles in the developing digits, we monitored the effects of most BMP modulators on gene regulation by adding recombinant proteins to micromass cultures of digit progenitors. In some cases, protein-addition experiments were complemented or substituted by overexpression approaches employing vectors containing the full coding region of the gene. Several preliminary experiments were performed to adjust the dose of BMP regulators so as to obtain clear and reproducible effects on the genes explored. Tables 2 to 4 summarize the changes in the expression of the cartilage (Table 2), joint (Table 3), and tendon (Table 4) markers selected in this study. The effects of BMP2 on the expression of these markers were also analyzed, to allow a clear distinction between effects that were dependent on or independent of the inhibition of BMP signaling (data are shown at the bottom of each table). The following aspects can be emphasized from the results:

1) The overexpression of the mouse Tsg gene induced downregulation of chondrogenic markers (Sox9, Collagen2a1, Bmpr1b), Gdf5 and Jaws. In addition, the tenogenic master gene Tgfb2, which was not regulated by BMP2, was strongly downregulated following Tsg overexpression.

2) At the highest doses tested (up to 1000 ng/ml), rhTSG was much less effective than gene overexpression. Treatments caused only a mild downregulation of Tgfb2 and Gdf5. However, the gene regulation induced by CHDL-1, CHDL-2, or NOGGIN was strongly potentiated when they were administered in combination with rhTSG (1000 ng/ml). This effect supports the functional association between TSG and CHD observed in other systems [50], and also supports an interaction of TSG with NOGGIN which, to our knowledge, has not been reported previously. This interplay potentiated the BMP-antagonistic effect of CHDL-1 and NOGGIN, and also caused changes in the regulation of genes not induced by BMP2 (see below), and even regulations not observed in separate treatments with TSG and CHDL-1 or NOGGIN (i.e., the regulation of Activin βA in combined CHDL-1/TSG treatments, Table 3; and see below for the NOGGIN-TSG interactions).

3) NOGGIN was the most intense antagonist of the gene regulation induced by BMPs in micromass cultures (see Tables 2-4); however, treatments also induced the upregulation of Scleraxis, type 1 Collagen, and Tgfb2, which are not regulated by BMP2. As mentioned above, the effects of NOGGIN were intensified when administered in combination with TSG. Intensification was appreciable even for effects, such as the upregulation of Scleraxis, type 1 Collagen, or Tgfb2, that are not induced by BMP2. In addition, NOGGIN in combination with TSG downregulated the expression of Jaws. A detailed analysis of the molecular basis of the interplay between TSG and NOGGIN is beyond the scope of this study; however, considering the roles of TSG in other systems (see Discussion for references), a tentative explanation is that the addition of TSG may protect NOGGIN from sequestration and/or degradation in the extracellular matrix.

6) Previous analyses in different systems have provided controversial information concerning the targets of DAN [38,51,52], and it has been proposed that the production of bioactive DAN protein is cell-specific [52]. In our analysis, the BMP targets examined here were not significantly regulated by mouse recombinant DAN, even at doses up to 3000 ng/ml. However, overexpression of the mouse Dan gene downregulated chondrogenic markers (Sox9, type 2 Collagen, and Bmpr1b) and upregulated the expression of Scleraxis more than twofold.

7) BMPER gain-of-function experiments were performed by overexpressing the mouse BMPER gene, and the effects on the chondrogenic markers corresponded largely to those of a typical BMP antagonist. However, Scleraxis and Activin βA, which are not regulated in treatments with BMP2, were also upregulated by this factor.

8) FST exhibited an intense inhibitory effect on Sox9 gene expression, but the expression of the other genes upregulated by BMPs was not downregulated. Remarkably, no effects of FST on tendon markers were recognized, in spite of its restricted expression in the tendon blastemas.

9) The overexpression of Fstl-1 had effects similar to, but less intense than, the addition of rh-NOGGIN, except for a negative influence on the expression of Activin βA, which did not occur with NOGGIN.
Cell Proliferation and Cell Death in Micromass Cultures following Treatments with BMP Antagonists
The weak anti-BMP influence on gene regulation of rh-CHDL-1 alone prompted us to check whether it promotes cell proliferation, as reported in cultures of human mesenchymal stem cells [53]. However, no changes in cell proliferation or apoptosis were observed in cultures treated with different doses of CHDL-1 (Supplementary Fig. 1).
In view of the functional potentiation of the effects of CHDL-1 when administered in combination with TSG, we further analyzed the effects of the combined treatments of CHDL-1 and TSG.
Neither cell proliferation nor cell death was modified by this treatment (Supplementary Fig. 1).

In light of the regulated expression patterns of BMP antagonists and their distinct effects on mesodermal tissue differentiation, we sought to know whether, as reported in other models [54], there is an interactive signaling network that establishes the level of expression of the different regulatory components of the pathway, or whether additional signaling pathways regulate their expression.
The regulation of BMP modulators was studied in 2-day-old micromass cultures after 6 hr treatments with the signaling molecules responsible for growth and differentiation of the autopod mesoderm, including the following: FGFs, which are initially implicated in maintaining the undifferentiated state of the mesoderm at the distal margin of the bud and in inhibiting the chondrogenic differentiation of digit progenitors [14], and which are later involved in tendon differentiation [55]; all-trans-retinoic acid, which, like FGF signaling, is a primary inhibitor of mesodermal differentiation [56] and later regulates the differentiation of tendons [57]; ACTIVIN A, which is an early signal for the formation of digit chondrogenic aggregates and subsequently a prominent joint marker [40]; TGFβs, which are responsible for the formation of tendons but also promote the formation of prechondrogenic aggregates [58,59]; BMP2, which together with other BMPs is responsible for cartilage differentiation [8]; and WNT5a, which together with other members of the family is involved in cartilage and joint differentiation [60].
The following observations can be stressed from our results (Table 5):

1) In FGF-treated cultures, the expression of BMP modulators including Noggin, the Chordins and Fstl-1 was inhibited. The inhibition was also strong for Dan and Tll1. In contrast, BMPER, Fstl-5, and Sost were highly upregulated by FGF treatments. The other antagonists (Sostdc1, Fst, and Tsg) were not regulated at significant levels by FGFs.

2) Consistent with the functional similarity between retinoic acid and FGF signaling in cartilage and tendon differentiation at the stages studied here, the addition of all-trans-retinoic acid (RA) to the culture medium was followed by a downregulation of Noggin, the Chordins, Tll1 and Dan. In contrast with the findings obtained by FGF treatments, Sostdc1 was the only antagonist upregulated by RA. The expression of the remaining antagonists studied was not modified by RA treatments.

3) Treatments with ACTIVIN A resulted in a strong upregulation of Fst expression. There was also an increase in the expression of the BMP modulators Noggin, the Chordins, Tsg and Fstl-1. In contrast, the BMP modulators expressed in or around the tendon blastemas, including Dan and BMPER, were downregulated.

4) TGFβ1 treatments upregulated Noggin, Sostdc1 and Fst, while the Chordins, Tll1 and BMPER were downregulated. The expression levels of the other antagonists studied were not modified.

5) BMP2 treatments induced a strong upregulation of Noggin, Chdl-1, and Fst, while all the remaining antagonists were downregulated. It is remarkable that BMP2 was the only treatment that downregulated the expression of Tsg.

6) WNT5a treatments induced upregulation of BMPER and Sost. The expression of the other BMP modulators was not modified.
Discussion
We show that 9 representatives of the three recognized subfamilies of BMP antagonists [61], together with Tll1, Fst, Fstl-1 and Fstl-5, are expressed in a regulated fashion during the early histogenesis of the digit tissues. These factors are precociously induced during the formation of an extra digit, preceding the appearance of changes identifiable by conventional histological procedures, such as the establishment of phalanges, joints and tendons. These findings support a role for BMP modulators in digit morphogenesis. However, except for Noggin [4], Fstl-1 [42], and Sost [62], mice with genetic alterations in these factors, including BMPER [63], Tsg [63,64], Chd [65], Dan [38], Fst [66], Sostdc1 [67], and Tll1 [68], lack a digit phenotype. The lack of a phenotype in these mutants is most likely explained by functional redundancy [42,69]. In line with this interpretation, we show overlapping expression of Noggin/Chdl-1 in the developing cartilage, Chdl-2/Fstl-1 in the developing joints, Tsg/Dan/Fst in the tendon blastemas, BMPER/Sostdc1/Fstl-1 in the peritendinous mesenchyme, and Chd/Chdl-2/Fstl-5/BMPER/Sost under the perichondrium of the diaphysis undergoing hypertrophic differentiation and subsequent ossification.
The functional properties of the different BMP antagonists often include crosstalk with, or the inhibition of, other signaling pathways, which results in tissue- and developmental-context-dependent responses to their local administration [70,71]. Consistent with the occurrence of functional specializations, the analyzed antagonists downregulated the expression of BMP target genes during skeletogenesis to different degrees, and their expression was differentially controlled by the distinct signaling pathways acting in the developing limb at the stages covered by this study (see Table 5). Together, these findings favor the view that the morpho-histogenesis of cartilage/bone, joints, and tendons in the embryonic limb is generated by a cascade of autoactivation and lateral-inhibition signals resulting from local interactions of mesenchymal progenitors ("reaction-diffusion" model; see [72,73] and references therein). The formation of the prechondrogenic aggregate constitutes the first step of this process and is followed not only by the inhibition of chondrogenesis in the adjacent tissue, but also by signals that regulate its divergent differentiation to form joints and tendons.
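As a toy illustration of the reaction-diffusion logic invoked here (short-range autoactivation combined with long-range lateral inhibition yielding periodic condensations), the following sketch integrates a one-dimensional Gierer-Meinhardt activator-inhibitor system in Python. The equations and parameter values are generic textbook choices, not a model of BMP signaling fitted to any data in this paper.

```python
import numpy as np

def gierer_meinhardt_1d(n=200, steps=20000, dt=0.01,
                        Da=0.02, Dh=1.0, rho=0.5, mu_a=1.0, mu_h=1.2):
    """1D activator-inhibitor (Gierer-Meinhardt) model: a slowly diffusing
    self-enhancing activator plus a fast-diffusing inhibitor produces a
    periodic pattern of peaks from near-uniform initial conditions."""
    rng = np.random.default_rng(1)
    a = 1.0 + 0.01 * rng.standard_normal(n)   # activator
    h = 1.0 + 0.01 * rng.standard_normal(n)   # inhibitor
    lap = lambda u: np.roll(u, 1) - 2 * u + np.roll(u, -1)  # periodic Laplacian
    for _ in range(steps):
        a = a + dt * (rho * a * a / h - mu_a * a + Da * lap(a))
        h = h + dt * (rho * a * a - mu_h * h + Dh * lap(h))
    return a

peaks = gierer_meinhardt_1d()
is_peak = (peaks > np.roll(peaks, 1)) & (peaks > np.roll(peaks, -1)) & (peaks > peaks.mean())
print("number of activator peaks:", int(np.sum(is_peak)))
```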
Considering the pattern of expression in the autopod (Table 6), the different BMP modulators can be grouped as follows: 1) cartilage-associated, including Noggin, Chdl-1, and Fstl-5; 2) joint-associated, represented by Chdl-2 and Fstl-1, although the latter is also expressed in the undifferentiated interdigital mesenchyme, and Noggin is expressed in the cartilage encompassing the developing joints; 3) tendon- or peridigital-connective-tissue-associated, including Dan, BMPER, Sost, Sostdc1, Fst, and, at stages of advanced differentiation, also Noggin and Fstl-5; and 4) a group that can be termed "mixed distributed", represented by Tsg, Tll1, and Chd, which may function in concert with the other antagonists.
TSG, Chd, and Tll1 and the Differentiation of Cartilage and Tendons
Due to their association with other BMP modulators in developing cartilage, joints and tendons, the functional significance of TSG, CHD, and TLL1 merits individual discussion. TSG is a multifunctional BMP modulator that interacts with other antagonists to potentiate or inhibit their function, depending on the proteolytic activity of TLL1 (reviewed in [74]). TSG forms heterotrimeric complexes with BMPs and CHD, which potentiates the antagonistic effect of CHD and facilitates the diffusion of BMP ligands through the extracellular space to reach the appropriate targets [28,75]. In addition, when the complex is subjected to the action of the TOLLOID metalloprotease, CHD is cleaved, delivering active BMPs [76]. TSG also forms trimolecular complexes with BMPER and BMP4 [69], but in this case the complex does not promote BMP diffusion [26].
We show that the contour of the phalanges at the initial stages of differentiation expresses Tsg, Tll1 and Chd. Taking into account that BMP ligands are predominantly expressed outside the cartilage rods, in the undifferentiated interdigital mesoderm [3], this expression pattern is consistent with a function of the TSG/CHD complex in the transport and subsequent delivery of BMPs into the differentiating cartilages, which are positive for phospho-Smad 1/5/8 immunolabeling. In our culture model, the overexpression of Tsg downregulates markers activated by BMPs. In addition, consistent with previous studies [64,77,78], we also noted pro-BMP responses, such as the intense downregulation of Gdf5, an effect that is also induced by BMP2 treatments.
The distribution of the "mixed distributed" BMP modulators in the zones of tendon formation is more complex. We show that the core of the tendon blastemas expresses high levels of Tsg accompanied, in a stage-dependent manner, by other BMP antagonists (see discussion below). Remarkably, at the initial stages of formation, the tendon blastemas lack Tll1 transcripts, suggesting that in this tissue TSG exerts only an anti-BMP function. This interpretation is consistent with the poor labeling of the tendon blastemas by phospho-Smad 1/5/8. Taking into account that overexpression of TSG in mesodermal progenitor cultures results in the inhibition of chondrogenic markers without a positive influence on tendon markers, the expression of Tsg could preclude chondrogenic differentiation of the pretendinous mesenchymal aggregates, which is the default fate of the undifferentiated limb mesoderm. Mice deficient in TSG lack a digit or tendon phenotype [64], but this could be explained by the overlapping expression of additional antagonists (see below).
During the differentiation of the tendon blastemas, the subectodermal mesenchyme of the proximal region of the interdigit, intercalated between two neighboring tendons, contains intense expression domains of Tll1 overlapping with low levels of Tsg and high levels of BMPER. This gene expression pattern correlates with strong immunolabeling for phospho-Smad 1/5/8, suggesting that the presence of TOLLOID reverses the influence of TSG on BMP signaling from negative to positive. Furthermore, the negative regulation of Tll1 by FGFs, TGFβs, and RA, which are all key signals in the differentiation of tendons [55,57-59], reinforces the idea that the function of TSG in the initial differentiation of the tendon blastemas requires the absence of TOLLOID. At more advanced stages of differentiation, and coinciding with the […]

[Table 6: Semi-quantitative association of gene expression intensity in the autopodial tissues; column headings: Cartilage, Joint, Early Tendon, Late Tendon.]
BMP Antagonists and the Formation of Phalanges and Joints
Digits develop as cartilage rods that become segmented into jointed structures by local de-differentiation of cartilage. Hence, the zones of joint formation appear as strips of fibrous-like connective tissue, which constitute the substrate for the subsequent cavitation and differentiation of the joint tissues (fibrous capsule and synovium). It has been shown that Jaws plays a central role in the formation of joints, although it lacks specific expression domains in these regions [45]. The formation of phalanges is a direct effect of BMP signaling via Smad 1/5/8 [48]. Differentiation of the joints is also controlled by BMP signaling [5,79], but it is directed by the activation of MAP kinases [22]. Furthermore, the reduced phalangeal size and loss of interphalangeal joints observed in humans and mice deficient in NOGGIN [4,17] indicate that the regulation of BMP signaling is central to the formation of phalanges and joints.
Our findings reveal that the outer layer of the developing phalanges, expressing Tsg/Tll1/Chd, encompasses a core of differentiating chondrocytes expressing Chdl-1 and Noggin in the body of the phalanx, and Chdl-2 in the zones of joint formation. We show that CHDL-1 alone, or even in combination with TSG, does not modify proliferation, in contrast to studies in other systems [53]. The effect of NOGGIN and of both CHDL-1 and CHDL-2 on cultures of digit progenitors concerns tissue differentiation, and is intensely potentiated by TSG. Together, these findings suggest that, in addition to the previously discussed role of TSG/CHD complexes in the transport of BMP ligands, TSG also functions in a concerted fashion with NOGGIN and the CHORDINS to modulate the intensity of BMP signaling in the differentiating cartilage.
We further show that the cartilage-expressed (Noggin and Chdl-1) and joint-expressed antagonists (Chdl-2 and Fstl-1) are regulated in opposite fashions by BMP2. The positive regulation of Noggin and Chdl-1 by BMP2 is consistent with a negative feedback loop tuning the level of BMP signaling in the differentiating cartilage, as observed in different systems [80]. Conversely, the negative regulation of Chdl-2 by BMP2 suggests that joints form in the zones of the digit cartilage templates with the lowest BMP signal.
BMP Antagonists and the Establishment of Tendons and Intertendinous Mesoderm
Tendon blastemas are formed in the mesoderm subjacent to the dorsal or ventral ectoderm. We have previously shown that all the subectodermal tissue of the autopod has the potential to differentiate into tendons, but that under normal conditions tendons differentiate only in the digit regions [40,81]. The present study shows a regionalization of the mesoderm subjacent to the ectoderm into digit and interdigit regions, characterized by distinct distributions of BMP antagonists accompanied, at the beginning of tendon formation, by differences in Smad 1/5/8 phosphorylation. Tendon blastemas are formed in zones expressing high levels of Dan, Fst, and Tsg, which recruit Chd, Noggin and Sost at advanced stages of differentiation. During the differentiation of these tendon-forming regions, the subectodermal mesenchyme intercalated between the tendon blastemas shows intense domains of BMPER expression accompanied by reduced expression of Dan, Tsg, Tll1, and Sostdc1. As mentioned above, these different patterns of gene expression correlate with a dramatic change in the intensity of BMP signaling. This finding might be of interest in regenerative medicine for directing the differentiation of connective tissue progenitors. However, there is no tendon phenotype in mice deficient for these factors [38,63,66,67]. The difference in the intensity of phospho-Smad 1/5/8 immunolabeling might be due to the anti-chondrogenic and pro-fibrogenic action of DAN in conjunction with the functional interplay of BMPER with TSG and TLL1. BMPER has been characterized as a CHD-related BMP modulator with context-dependent pro-BMP or anti-BMP activities [63,69,82-84]. Our findings show that its expression marks zones of very high BMP activity. In previous studies, the expression of BMPER in the autopod was functionally related to interdigital cell death [33]. However, the spatial distribution of BMPER transcripts and their maintenance after the period of interdigital cell death support the involvement of this BMP modulator in the formation of the peridigital connective tissues.
Conclusion
In conclusion, our findings reinforce the morphogenetic importance of BMP antagonists in the establishment of a molecular signaling scaffold responsible for the allocation of the cell fates of digit mesodermal progenitors. The information drawn from this study provides a basic view of this functional signaling network, but further work is required to unravel the exquisite extracellular regulation of BMP signaling during the histotypic differentiation of the digit precursor mesoderm. A subject to be addressed in future studies is that the function of the different BMP modulators in tendon development may result from a combination of BMP and Wnt modulation, as several factors, such as Sost [34,85], Noggin [86], Chd [87], and Sostdc1 [85,88], have been shown to exert both functions.
DFT Chemical Reactivity Analysis of Biological Molecules in the Presence of Silver Ion
Introduction
Because of its diverse properties, silver is a versatile and safe agent with many uses. Silver ions have been used in biosensors [1], clothing [2], the food industry [3][4][5], stainless steel coatings [6], beauty products [7], and medical devices [8]. Silver is usually inert in its metallic form; however, it ionizes when in contact with skin moisture or fluids from a wound. As a result, it becomes highly reactive, leading to antibiotic behavior [9][10][11][12]. Studies of the inhibitory mechanism of silver ions on gram-positive and gram-negative bacteria have been reported [13,14], showing morphological changes in the cytoplasm, cell membrane, and cell wall.
Currently, there is little information about how silver becomes bioactive. Some authors have reported that silver ions act when they penetrate the cell and intercalate between the purine and pyrimidine bases, thus denaturing the deoxyribonucleic acid (DNA) [15]. Other authors indicate that the bioactivity comes from deactivation of respiratory enzymes, through the formation of complexes with the sulfur of the thiol group of cysteine [16,17]. It has also been reported that silver can be involved in catalytic oxidation reactions resulting in the formation of disulfide bonds (R-S-S-R): silver catalyzes the reaction between oxygen molecules and the thiol groups, in which water is released as a by-product and two thiol groups are covalently joined through a disulfide bond [18]. Silver-catalyzed disulfide bond formation might change the three-dimensional structure of cell enzymes and thus alter their function. The effect of silver ions on bacteria can be difficult to understand; however, observation of morphological and structural changes can yield useful information for understanding the antibacterial effect and the inhibitory process of silver ions. Theoretical studies of the affinity of silver ions for DNA at the molecular level have been performed to determine the interaction of silver ions with cytosine and adenosine bases, using ab initio calculations and Density Functional Theory (DFT) [19]. Another quantum chemical study, focused on the electronic and energetic properties of silver upon DNA adenosine and cytosine bases, was performed with DFT using the Becke three-parameter Lee-Yang-Parr functional (B3LYP) and the Minnesota-family M06-L functional [20]. In light of the results of these previous theoretical calculations and the experimental information mentioned above, we consider it important to extend the study of biological molecules in the presence of the silver ion, in an attempt to define the antibacterial mechanism. The aim of this research is therefore the study of a silver-ion antibacterial mechanism through oxidation in the presence of biological molecules such as proteins, polysaccharides, lipids, and nucleic acids. In the particular case of biopolymers, they were analyzed through their constituent monomers in a theoretical study aiming to determine which parts of the bacterial cell react in the presence of the silver ion. To accomplish this goal, we calculated chemical reactivity parameters, among them the ionization potential (I), electron affinity (EA), electronegativity (χ), electrophilicity (ω), chemical hardness (η), and electron donor potential (ω−), which aid in understanding the oxidation process between silver ions and biological molecules. We also analyzed the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO), which allow electron transfer to be observed. All these parameters were determined through DFT.
Materials and Methods
The biological molecules studied were selected as representative of each group of components of the bacterial cell structure: proteins, carbohydrates, lipids and DNA bases.
Proteins
From the group with non-polar aliphatic R groups, alanine was selected, since it is abundant in living matter and is the smallest chiral amino acid [21]. From the aromatic R group, phenylalanine was selected, since its structure contains an aromatic ring, which allows its interaction with the silver ion to be observed. From the uncharged polar R group, cysteine was selected: its structure contains sulfur, it is highly represented in peptides and proteins, it has a reactive nature, and its thiol group plays a preponderant role in the synthesis of proteins. Moreover, the thiol group is known to interact strongly with noble metals such as gold, silver, and copper [22]. From the negatively charged R group, aspartic acid was selected, since it contains two carboxylic acid groups. Finally, from the positively charged R group, histidine was selected; it contains an imidazole functional group, a feature that allows its interactions with the silver ion to be evaluated.
Carbohydrates
Among monosaccharides, D-glucose was analyzed, since it has dramatic effects on the regulation of carbon metabolism and is responsible for the regulation of cellular respiration. It also affects gluconeogenesis, making the cell capable of transporting and catabolizing sugars [23]. Among larger sugars, sucrose, a disaccharide of glucose and fructose, was analyzed because of its higher molecular weight and its involvement in metabolic processes. Polysaccharides interfere with adenosine triphosphate (ATP) synthesis, and according to some researchers, silver ions inhibit the oxidation of glycol, glucose and other molecules.
Lipids
Lipids are fundamental structural components of cell membranes; they are not easily oxidized, and they serve as an energy reservoir for the cell. It is important to analyze the constituents of bacterial cell membranes because they are injured when the cell is attacked by foreign agents. The lipids analyzed in this research were palmitic acid, a saturated lipid, and palmitoleic acid, an unsaturated lipid. Palmitic acid promotes cell cycle progression, accelerates cell proliferation, and induces a transient and sequential activation of a series of kinases [24]. Palmitoleic acid was analyzed because it is biosynthesized from palmitic acid.
DNA bases
From DNA, the purine and pyrimidine bases (adenosine and cytosine) were analyzed. Adenosine monophosphate (AMP) was selected because of its important role in energy metabolism: through enzymatic reactions, AMP forms bonds with other phosphate groups and plays an important role in the incorporation of amino acids into proteins [25]. Cytosine monophosphate (CMP) was selected since it is involved in the biosynthetic processes of phospholipids and of uracil for carbohydrates [26]. It has been reported that silver ions bind to DNA, blocking transcription, and bind to the cell surface, thus disrupting bacterial respiration and interfering with ATP synthesis.
Results and Discussion
Geometry optimizations of the biological molecules in the gas phase, followed by frequency calculations, were performed to ensure that the molecules were at their lowest energy; Figure 1 shows the optimized geometries of the studied molecules.

Condensed Fukui functions are mathematical expressions that describe the sensitivity of the electronic density of a molecular system to changes at different points of its structure. In a chemical reaction, a change in the number of electrons involves the addition or subtraction of at least one electron in the frontier orbitals [39]. Calculating Fukui functions therefore helps to determine the reactive sites of a molecule, based on the changes in electronic density it experiences during a reaction. The dual descriptor for nucleophilicity and electrophilicity was defined in terms of the variation of hardness with respect to the external potential; it is the difference between the nucleophilic and electrophilic Fukui functions, allowing both reactive behaviors to be characterized [40]. The dual descriptor predicted the site reactivity induced by the different donor and acceptor groups of the biological molecules. The condensed Fukui indices and the dual descriptor show which atoms are most susceptible to an electrophilic attack (see Table 1); these results were obtained with the Hirshfeld charge distribution [41]. These atoms define the zone where the attraction of the silver ion is most likely to create the biological molecule-silver ion complex. Energy calculations were also performed to obtain the most stable structure of the complex at a specific point, as exemplified with the Ala-Ag+ complex (Figure 2).

Qualitative chemical concepts such as electronegativity and hardness have been widely used in understanding various aspects of chemical reactivity, and their theoretical basis is provided by DFT [42]. These reactivity indices are best appreciated in terms of the associated electronic structure of atoms and molecules [38]. The chemical reactivity parameters obtained for the biological molecules are: the ionization potential (I), defined as the energy needed to remove an electron from a molecule; the electron affinity (EA), which measures the ability of a molecule to accept electrons and form anions; the electronegativity (χ), representing the tendency of atoms or molecules to attract electrons; the electrophilicity (ω), which gives an idea of the stabilization energy when the system acquires electrons from the environment up to saturation; and the electron donor potential (ω−), which indicates whether a molecule is capable of donating charge. These parameters were obtained within the vertical approximation, in which the energies of the molecule in its anionic, cationic, and neutral states are calculated at the geometry of the ground state. The results are shown in Table 2. According to these results, Ala showed the highest ionization potential, meaning that this amino acid requires the largest amount of energy to remove an electron from its structure and will therefore not oxidize easily in the presence of silver ions. On the other hand, adenosine and cytosine show the lowest ionization potentials, which indicates that they will oxidize more easily in the presence of the silver ion [43].
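A minimal Python sketch of the descriptor arithmetic is given below, assuming total energies (in eV) from three single-point calculations at the fixed neutral geometry, as in the vertical approximation. The hardness convention η = (I − EA)/2 and the electrodonating-power expression ω− = (3I + EA)²/[16(I − EA)] follow common usage in the conceptual-DFT literature and are assumptions here, not formulas quoted from this paper.

```python
def reactivity_descriptors(e_neutral, e_cation, e_anion):
    """Vertical reactivity descriptors from total energies in eV."""
    I = e_cation - e_neutral            # ionization potential
    EA = e_neutral - e_anion            # electron affinity (may be negative)
    chi = 0.5 * (I + EA)                # Mulliken electronegativity
    eta = 0.5 * (I - EA)                # chemical hardness
    omega = chi**2 / (2.0 * eta)        # electrophilicity index
    omega_minus = (3.0 * I + EA)**2 / (16.0 * (I - EA))  # electron donor potential
    return {"I": I, "EA": EA, "chi": chi, "eta": eta,
            "omega": omega, "omega-": omega_minus}

def condensed_fukui(q_neutral, q_cation, q_anion):
    """Condensed Fukui functions and dual descriptor from atomic charges
    (e.g. Hirshfeld) of the N, N-1 and N+1 electron systems."""
    f_plus = [qn - qa for qn, qa in zip(q_neutral, q_anion)]    # nucleophilic attack
    f_minus = [qc - qn for qc, qn in zip(q_cation, q_neutral)]  # electrophilic attack
    dual = [fp - fm for fp, fm in zip(f_plus, f_minus)]         # dual < 0: electrophilic-attack site
    return f_plus, f_minus, dual

# Hypothetical energies (eV) for one molecule, including a negative EA case:
print(reactivity_descriptors(e_neutral=-1000.0, e_cation=-993.0, e_anion=-999.5))
```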
Some EA values are negative because energy is not released when these biological molecules accept an electron; rather, energy must be supplied to form the anion [43]. The biological molecules adenosine and cytosine are capable of donating electrons, becoming oxidized more easily in contact with the silver ion. According to the electron affinity results, the biological molecules are, in general, more capable of donating electrons and therefore more prone to oxidation. According to Dunning et al. [44], a reliable calculation of the electron affinity requires basis sets with high-angular-momentum diffuse functions. To rule out the 6-31G(d) basis set as the cause of the negative electron affinities, the diffuse-function basis set 6-31++G(d) was used for the Ala molecule. This calculation yielded an EA of -0.4479 eV, corroborating that even with diffuse functions, which allow the angular momentum and the shape of the orbital to change, the EA remains negative.

Regarding electronegativity, the highest value was 7.71 eV, for Asp; this means that Asp is the molecule most difficult to oxidize in the presence of the silver ion, since this amino acid attracts electrons most strongly. On the other hand, the purine and pyrimidine bases (adenosine and cytosine) show electronegativity values of 3.56 eV and 3.47 eV, respectively, which indicates that they can be oxidized more easily in the presence of the silver ion.

The chemical hardness (η), one of the reactivity parameters used to characterize the oxidation process (it measures the resistance of a system to charge transfer), is highest for D-glucose and sucrose, with 5.94 eV and 5.34 eV respectively; this indicates that D-glucose and sucrose are not prone to yield charge. Adenosine and cytosine show the lowest η values (4.51 eV), which makes them the molecules most prone to yield electrons and thus the molecules most easily oxidized in the presence of the silver ion. According to the results obtained, the reactivity order, expressed as the ease of being oxidized according to η, is: DNA > Proteins > Carbohydrates > Lipids. Adenosine, cytosine and Asp show the highest ω values. Regarding the electron donor potential, low values indicate antioxidant behavior; thus, the results show that lipids tend to stabilize the electron loss in the other parts of the bacterium. The analysis of the reactivity results confirms that DNA is the bacterial component most easily oxidized.

Figure 3 shows the proposed chemical reaction of the adenosine and cytosine bases with the silver ion through their heterocycles. This proposal is based on the work of Jeffrey et al. [45], who found that silver ions bind strongly to the heterocyclic bases and not to the phosphate groups. The association of silver with the purine base (adenosine) forms complexes via N19, while with the pyrimidine base (cytosine) the complex is formed via O24. These sites agree with the electrophilic attack sites defined in this work. The reaction with oxygen in the heterocycles is an association also confirmed by Jeffrey et al. in the paper cited above.
A distribution analysis of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of the biological molecule-silver ion complexes was performed to observe how these frontier orbitals are distributed in the presence of the silver ion. In all the cases studied, the HOMO is localized over the structure of the amino acids, lipids, sugars, and purine and pyrimidine bases, while the LUMO is found over the silver ion. Taking into account that a global oxidation-reduction (redox) reaction takes place (in every redox reaction, the electrons one molecule loses, another must gain; in other words, one molecule is oxidized while the other is reduced) [26], we propose that the LUMO distribution over the silver ion indicates an oxidation process in which the biological molecules are oxidized and the silver is reduced. Figure 4 shows the HOMO and LUMO orbitals for the cytosine-Ag+ and adenosine-Ag+ complexes.
Conclusions
The oxidation process of the silver ion upon the constituent parts of a bacterium was analyzed in this work, considering the analyzed amino acids as representative of proteins, the purine and pyrimidine bases as representative of DNA, D-glucose and sucrose as carbohydrates, and palmitic and palmitoleic acids as lipids. The molecular characterization of the biological molecules included the calculation of the molecular structures and chemical reactivity parameters, as well as the reactivity properties of the biological molecule-silver ion complexes. Low chemical hardness values indicate which constituent parts are more prone to yield electrons, thus generating an oxidation process. The results show that the order of this process is: DNA > Proteins > Carbohydrates > Lipids. In all cases, the HOMO orbital is found on the biological molecule, whereas the LUMO orbital is found on the silver ion, indicating a HOMO-to-LUMO electron transfer and suggesting an oxidation process.
Non-perturbative contributions to vector-boson transverse momentum spectra in hadronic collisions
Experimental measurements of Drell-Yan (DY) vector-boson production are available from the Large Hadron Collider (LHC) and from lower-energy collider and fixed-target experiments. In the region of low vector-boson transverse momenta $q_T$, which is important for the extraction of the $W$-boson mass at the LHC, QCD contributions from non-perturbative Sudakov form factors and intrinsic transverse momentum distributions become relevant. We study the potential for determining such contributions from fits to LHC and lower-energy experimental data, using the framework of low-$q_T$ factorization for DY differential cross sections in terms of transverse momentum dependent (TMD) distribution functions. We investigate correlations between different sources of TMD non-perturbative effects, and correlations with collinear parton distributions. We stress the relevance of accurate DY measurements at low masses and with fine binning in transverse momentum for improved determinations of long-distance contributions to Sudakov evolution processes and TMDs.
Introduction
The production of photons, weak bosons and leptons at large momentum transfer Q ≫ Λ_QCD in high-energy hadronic collisions is described successfully by factorization [1] of short-distance hard-scattering cross sections, computable at finite order in QCD perturbation theory as power-series expansions in the strong coupling α_s, and non-perturbative long-distance parton distribution functions (PDFs), determined from fits to experiment. It was realized long ago, however, that even for Q ≫ Λ_QCD additional dynamical effects need to be taken into account to describe physical spectra in the vector-boson transverse momentum q_T when the multiple-scale region q_T ≪ Q is reached [2][3][4][5]. These amount to i) perturbative logarithmically-enhanced corrections in α_s^k ln^m(Q/q_T) (m ≤ 2k), which go beyond finite-order perturbation theory and call for summation to all orders in α_s, and ii) non-perturbative contributions besides PDFs, which correspond to the intrinsic transverse momentum distributions in the initial states of the collision and to non-perturbative components of Sudakov form factors.
The summation of the logarithmically-enhanced corrections to Drell-Yan (DY) lepton pair hadroproduction has since been accomplished systematically by methods based on the CSS formalism [6]. It has been fully computed through next-to-next-to-leading-logarithmic (NNLL) accuracy, which requires calculations up to two-loop level, and partial results at three and four loops are already available for some of the coefficients needed for higher logarithmic accuracy [7,8]. On the other hand, nonperturbative effects besides PDFs in DY production are included in the formalism of transverse momentum dependent (TMD) parton distribution functions [9]. Intrinsic transverse momentum distributions enter as boundary conditions to the renormalization group evolution equations for TMDs, while nonperturbative Sudakov effects are taken into account via non-perturbative contributions to the kernel of the evolution equations associated with TMD rapidity divergences [10][11][12][13][14].
The purpose of this work is to examine the combined determination of the nonperturbative rapidity-evolution kernel and intrinsic transverse momentum k T distribution from fits to measurements of transverse momentum spectra in DY lepton-pair production at the Large Hadron Collider (LHC) and in lower-energy experiments, including Tevatron, RHIC and fixed-target experiments. To this end, we employ the calculational framework developed in [15][16][17][18][19][20]. We investigate to what extent the two sources of non-perturbative effects are correlated, and study the role of different data sets, from the high-precision DY LHC data to the lower-energy DY data, in disentangling them. We also analyze how these two non-perturbative contributions are correlated with non-perturbative contributions encoded in PDF sets. Quantifying these effects will be important both for strong interaction investigations of hadron structure and for determinations of precision electroweak parameters, as the low-q T DY region is relevant for the extraction of the W-boson mass at the LHC.
The paper is organized as follows. In Sec. 2 we briefly describe the factorization formula, evolution equations and perturbative coefficients which constitute the theoretical inputs to our analysis. In Sec. 3 we present the results of the numerical studies and fits to experimental data. We give conclusions in Sec. 4.
Theoretical inputs
We start from the TMD factorization formula for the differential cross section for DY lepton-pair production, which schematically reads

dσ/(dQ² dy dq_T²) ∝ H(Q, μ) Σ_f ∫ d²b/(4π) e^{i b·q_T} F_{f←h₁}(x₁, b; μ, ζ₁) F_{f̄←h₂}(x₂, b; μ, ζ₂),   (1)

where Q², q_T and y are the invariant mass, transverse momentum and rapidity of the lepton pair, b is the transverse distance conjugate to q_T, H is the hard coefficient, and the TMD distributions F_{f←h} fulfill evolution equations in rapidity and in mass,

μ² d ln F_{f←h}(x, b; μ, ζ) / dμ² = γ_F(μ, ζ)/2,   (2)
ζ d ln F_{f←h}(x, b; μ, ζ) / dζ = −D_f(μ, b),   (3)

with γ_F the TMD anomalous dimension, driven by the cusp anomalous dimension Γ_cusp, and D_f the rapidity-evolution kernel. We further perform the small-b operator product expansion of the TMD F_{f←h} as

F_{f←h}(x, b; μ, ζ) = Σ_{f′} [C_{f←f′} ⊗ f_{f′←h}](x, b; μ, ζ) · f_NP(x, b),   (4)

where f_{f←h} are the PDFs, C_{f←f′} are the matching Wilson coefficients, and f_NP are functions to be fitted to data, encoding non-perturbative information about the intrinsic transverse momentum distributions. The non-perturbative component of the rapidity-evolution kernel D_f and the distribution f_NP are the main focus of this paper. The TMD distributions in Eq. (1) depend on the scales μ, ζ; to set these scales, we use the method of the ζ-prescription proposed in [15] (see e.g. [21] for recent examples of alternative scale settings). The summation of the logarithmically-enhanced corrections at low q_T is achieved through Eqs. (1)-(4) by computing perturbatively the quantities H, C, γ_F and Γ_cusp as series expansions in powers of α_s. Table 1 summarizes the perturbative orders used for each of these quantities in the calculations that follow; we refer to the logarithmic accuracy specified by these orders as NNLL.

Table 1: Summary of perturbative orders used for each part of the DY cross section.
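As an illustration of how a q_T spectrum arises from the b-space formula in Eq. (1), the sketch below evaluates the azimuthally symmetric Fourier integral as a Hankel transform with a toy b-space integrand. The functional form of W(b) and the parameter values are invented for illustration and do not correspond to the NNLL ingredients of Table 1.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def qt_spectrum(qT, W, b_max=20.0):
    """d(sigma)/d(qT^2) ~ (1/2) int_0^inf db b J0(b*qT) W(b), truncated at b_max."""
    val, _ = quad(lambda b: b * j0(b * qT) * W(b), 0.0, b_max, limit=400)
    return 0.5 * val

# Toy W(b): a Sudakov-like perturbative suppression times a Gaussian f_NP
lam = 0.4  # hypothetical non-perturbative Gaussian width, GeV^2
W = lambda b: np.exp(-0.2 * np.log(1.0 + b**2) ** 2) * np.exp(-lam * b**2)

for qT in (0.5, 1.0, 2.0, 5.0):  # GeV
    print(f"qT = {qT:4.1f} GeV  ->  spectrum ~ {qt_spectrum(qT, W):.4f}")
```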
The rapidity-evolution kernel D contains perturbative and non-perturbative components. The perturbative expansion for D is currently known up to three loops [22][23][24][25]. Using the b* prescription [6], we model D as

D_f(μ, b) = D_f^res(μ, b*(b)) + g(b),   (5)
b*(b) = b / √(1 + b²/B_NP²),   (6)

where D_f^res [26] is the resummed perturbative part of D_f, g is an even function of b vanishing as b → 0, and B_NP is a parameter to be fitted to experimental data. For the function g(b) we use three models (Eqs. (7)-(9)), fitting respectively the parameters g_K, c_0 and g*_K to experimental data. The quadratic model in Eq. (7) has traditionally been used since the pioneering works [27][28][29][30]. The model in Eq. (9) retains the perturbative quadratic behavior at small |b| but goes to a constant at large |b|, fulfilling the asymptotic condition ∂D/∂ ln b² = 0, in a spirit similar to parton saturation in the s-channel picture [31] for parton distribution functions. The model in Eq. (8) is intermediate between the previous two, being characterized by a linear rise at large |b|. In the following we refer to the non-perturbative component of the rapidity-evolution kernel, modeled according to Eqs. (7)-(9), as D_NP.
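The sketch below implements the b* prescription and three toy choices of g(b) reproducing the qualitative behaviors described above (quadratic, linear rise, and saturating at large |b|). These functional forms are stand-ins chosen for illustration, not the exact Eqs. (7)-(9) of the paper, and the placeholder D_res is not the NNLL kernel.

```python
import numpy as np

def b_star(b, B_NP):
    """b* prescription: b*(b) ~ b at small b and saturates at B_NP at large b."""
    return b / np.sqrt(1.0 + (b / B_NP) ** 2)

# Toy non-perturbative additions g(b), all even in b and vanishing as b -> 0:
def g_quadratic(b, gK):            # quadratic at all scales (Eq. (7)-like)
    return gK * b**2

def g_linear(b, c0, b0=1.0):       # quadratic at small b, linear rise at large b
    return c0 * b**2 / (1.0 + b / b0)

def g_saturating(b, gK_star, c0):  # quadratic at small b, constant at large b
    return gK_star * (1.0 - np.exp(-c0 * b**2 / gK_star))

def D_NP_model(b, B_NP, g):
    """D(mu,b) = D_res(mu, b*(b)) + g(b); D_res is a crude placeholder here."""
    D_res = 0.1 * np.log(1.0 + b_star(b, B_NP) ** 2)
    return D_res + g(b)

b = np.linspace(0.0, 10.0, 5)
print(D_NP_model(b, B_NP=2.0, g=lambda b: g_saturating(b, gK_star=0.5, c0=0.05)))
```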
The nonperturbative contribution to D_f in Eq. (5) also influences the rapidity scale fixing in the ζ-prescription [18]. In fact, once the nonperturbative correction is included in D_f, one must use the nonperturbative rapidity scale ζ_NP given in [18] (Eq. (10)). Only the perturbative part ζ_pert, computed in [16], was used in the fits of [17]. The expression in Eq. (10) converges to ζ_pert in the limit b → 0. We will use this expression in the fits of the next section.
The modeling of the TMD through the function f_NP allows one to fit data at different energies. In particular, it allows the nonperturbative behavior of the TMD to be described for large values of b. In [15,17,32] it has been observed that a modulation between Gaussian and exponential regimes is necessary. This can be provided by the following model,

  f_NP(x,b) = exp( −[λ₁(1−x) + λ₂x + λ₃x(1−x)] b² / √(1 + λ₄ x^{λ₅} b²) ),   (11)

where the interpolation between the Gaussian and exponential regimes depends on the Bjorken x-variable, and λ₁,...,₅ > 0.
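A minimal numerical sketch of this Gaussian-to-exponential interpolation, following Eq. (11) as written above with placeholder parameter values, is:

```python
import numpy as np

def f_NP(x, b, lam):
    """Illustrative x-dependent Gaussian/exponential interpolation.

    lam = (l1, l2, l3, l4, l5), all positive.  At small b the exponent is
    quadratic in b (Gaussian regime); at large b the square root in the
    denominator makes it linear in b (exponential regime), with the
    crossover point controlled by x.
    """
    l1, l2, l3, l4, l5 = lam
    width = l1 * (1 - x) + l2 * x + l3 * x * (1 - x)
    return np.exp(-width * b**2 / np.sqrt(1.0 + l4 * x**l5 * b**2))

print(f_NP(0.1, np.array([0.5, 2.0, 8.0]), (0.2, 0.3, 0.1, 0.5, 1.0)))
```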
Determination of f_NP and D_NP from fits to experiment
We next present results of performing TMD fits to experimental data for DY differential cross sections, employing the theoretical framework described in the previous section. We consider DY measurements both at the LHC and at lower-energy experiments, restricting the fit to data in the low transverse momentum region by applying the cut q_T/Q < 0.2 to the data sets.

Figure 1: Results of the TMD global fit to DY measurements from LHC and lower-energy experiments.

Table 2: χ² values of the global fit for different PDF sets.

PDF             χ²
NNPDF3.1        1.14
HERAPDF2.0 [62] 0.97
CT14 [63]       1.59
MMHT14 [64]     1.34
PDF4LHC [65]    1.53

The values of the fitted TMD parameters in Eqs. (6), (8) (for D_NP) and in Eq. (11) (for f_NP) and their associated uncertainties are shown in Fig. 1. Since PDFs enter the TMD fit through Eq. (4), the results in Fig. 1 are presented for different PDF sets. The corresponding χ² values are given in Table 2. We observe that the values of the fitted parameters λ_i (see Eq. (11)) in Fig. 1 vary more significantly among different PDF sets than the values of the fitted parameters B_NP and c₀ (see Eqs. (6), (8)), corresponding to the fact that the λ_i parameters in f_NP are related to the x-dependence of the distributions, while the rapidity evolution kernel is x-independent.
The correlations among TMD parameters for different PDF sets are illustrated in Fig. 2. Light colors in the pictures of Fig. 2 indicate low correlations; dark colors indicate high correlations. Shades of blue denote negative correlations; shades of brown denote positive correlations. In particular, the correlation between the parameters c₀ (controlling the long-distance behavior of the rapidity evolution kernel in Eq. (8)) and λ₁ (controlling the intrinsic transverse momentum distribution in Eq. (11)) is fairly low in the case of the HERAPDF set, but it increases in the NNPDF3.1 case, and is higher still in the CT14 and MMHT14 cases. We note that the latter two PDF sets do not include LHC data in the fits, while NNPDF3.1 does. The χ² values in Table 2 are lowest for the HERAPDF and NNPDF3.1 cases.

Next, we wish to focus on the role of present (and future) LHC measurements to investigate the sensitivity to the nonperturbative contributions in D_NP and f_NP. To this end we perform fits to LHC data only, using a smaller number of parameters. That is, we model D_NP as in Eqs. (5)-(9), depending on two parameters, B_NP and either g_K or c₀ or g*_K, and we take a form for f_NP which is simplified with respect to Eq. (11): an x-independent simple Gaussian depending on one parameter λ₁ only, which provides a measure of the intrinsic transverse momentum in terms of a Gaussian width. We then perform 3-parameter fits to LHC DY data [33][34][35][36][37][38][39], fitting λ₁, B_NP and either g_K or c₀ or g*_K, as well as 2-parameter fits to the same data, fitting only B_NP and either g_K or c₀ or g*_K, and fixing λ₁ to λ₁ = 0.001 GeV² to simulate the case of nearly zero intrinsic transverse momentum (as in purely collinear approaches).

The results from the 3-parameter and 2-parameter fits, using the PDF set NNPDF3.1, are summarized in Table 3. We see that the 3-parameter fits (cases 2, 4 and 6 in Table 3) yield results, both for the χ² values and for the values of the fitted TMD parameters, which are not dissimilar from the global fit results given earlier, supporting the overall consistency of the TMD picture of low-energy and high-energy DY data. These three cases correspond to the three different long-distance behaviors of the rapidity-evolution kernel D(μ,b) in Eqs. (7)-(9). Case 2 and case 6, in particular, while giving fits of comparable quality, correspond to very different physical pictures of the nonperturbative component of D. Case 2 extends the quadratic behavior to large distance scales (see Eq. (7)). In contrast, case 6 fulfills the saturating condition ∂D/∂ ln b² = 0 at large |b| (see Eq. (9)). This is, to our knowledge, the first time that a full fit to low-q_T DY data is performed under the hypothesis of long-distance saturating behavior of the rapidity-evolution kernel. The 2-parameter fits (cases 1, 3 and 5 in Table 3), on the other hand, show significantly different behaviors, characterized by somewhat higher χ² values and especially by significantly different values of the D_NP fitted parameters. This indicates that, although most of the sensitivity to the intrinsic transverse momentum distribution comes from the lower-energy measurements, non-negligible f_NP effects are present at the LHC too. In particular, Table 3 suggests that without any intrinsic transverse momentum distribution it may be possible to describe DY data at the LHC, but this would lead to a different determination of B_NP and the rapidity evolution kernel.
That is, intrinsic transverse momentum effects may be reabsorbed by changes in the D_NP fit.
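To make the logic of these 2- vs 3-parameter fits concrete, here is a toy χ² fit sketch in Python; the theory() shape, the pseudo-data and all parameter values are hypothetical placeholders, not the actual TMD cross section, data sets or fitted results.

```python
import numpy as np
from scipy.optimize import minimize

# 'theory(qT, lam1, B_NP, c0)' stands in for the full TMD prediction; here it
# is a placeholder low-qT shape whose width depends on the three parameters.
def theory(qT, lam1, B_NP, c0):
    width = 1.0 + lam1 + 0.5 * c0 * B_NP       # hypothetical dependence
    return qT / width**2 * np.exp(-(qT / (2.0 * width)) ** 2)

rng = np.random.default_rng(0)
qT = np.linspace(1.0, 10.0, 20)                # GeV, low-qT bins
data = theory(qT, 0.5, 2.0, 0.03) * rng.normal(1.0, 0.02, qT.size)
err = 0.02 * data

def chi2(params, fix_lam1=None):
    lam1 = fix_lam1 if fix_lam1 is not None else params[0]
    B_NP, c0 = params[-2:]
    return np.sum(((theory(qT, lam1, B_NP, c0) - data) / err) ** 2)

fit3 = minimize(chi2, x0=[0.3, 1.5, 0.05], method="Nelder-Mead")
fit2 = minimize(lambda p: chi2(p, fix_lam1=0.001), x0=[1.5, 0.05],
                method="Nelder-Mead")          # nearly zero intrinsic kT
print("3-par chi2:", fit3.fun, fit3.x)
print("2-par chi2:", fit2.fun, fit2.x)
```

In this toy setting, as in the fits described above, freezing the intrinsic-k_T parameter forces the remaining D_NP-type parameters to compensate, typically at the cost of a higher χ².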
To further analyze the sensitivity of LHC DY measurements to f_NP and gain insight into the results of Table 3, we next consider the ratio

  R = dσ_TMD / dσ_test,   (12)

where dσ_TMD is the DY differential cross section computed from the full TMD fit, and dσ_test is the DY differential cross section computed by setting f_NP = 1 in the full fit. In Fig. 3 we plot the numerical results for the ratio (12) versus the DY lepton-pair transverse momentum q_T for different values of the DY lepton-pair invariant mass Q. For reference, in Fig. 3 we also plot the theoretical uncertainty band on the full TMD result which comes from scale variation, taken according to the ζ-prescription of Sec. 2. We see that in the lowest q_T bins the nonperturbative effects, evaluated according to the ratio in Eq. (12), exceed the perturbative uncertainty, evaluated from scale variation in the ζ-prescription. The comparison of Table 3 and Fig. 3 confirms that sensitivity to f_NP is present in LHC data but may be reabsorbed by varying D_NP.
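As a toy illustration of how a ratio like Eq. (12) isolates the intrinsic-k_T factor, the sketch below evaluates a placeholder b-space integrand with and without an f_NP suppression; the profile and parameters are hypothetical, not the fitted TMDs.

```python
import numpy as np
from scipy.special import j0

def dsigma(qT, use_fNP=True, lam=0.5, width=2.0):
    """Toy b-space cross section: Bessel transform of a placeholder profile."""
    b = np.linspace(1e-3, 20.0, 4000)                  # GeV^-1 grid
    fNP = np.exp(-lam * b**2) if use_fNP else np.ones_like(b)
    integrand = b * j0(qT * b) * np.exp(-(b / width) ** 2) * fNP
    return np.trapz(integrand, b)

for qT in (1.0, 3.0, 6.0):
    R = dsigma(qT, True) / dsigma(qT, False)           # analogue of Eq. (12)
    print(f"qT = {qT:4.1f} GeV   R = {R:.3f}")         # deviation from 1 = fNP effect
```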
We explore the above point, associated with correlations between D_NP and f_NP, by analyzing the b-dependence of the rapidity evolution kernel D(μ,b) in Fig. 4. We plot results for D from the different cases in Table 3, at μ = M_Z and μ = 5 GeV. Consider first the upper right panel (μ = M_Z). The two red curves correspond to the nonperturbative quadratic model in Eq. (7): the solid red curve is the result of the 3-parameter fit in Table 3 (case 2), while the dashed red curve is the result of the 2-parameter fit in Table 3 (case 1). Similarly, the two yellow curves correspond to the saturating model in Eq. (9) (solid yellow is the 3-parameter fit, dashed yellow the 2-parameter fit), and the two blue curves correspond to the linear model in Eq. (8) (solid blue is the 3-parameter fit, dashed blue the 2-parameter fit).

For each of the three modeled large-distance behaviors of D(μ,b), the difference between the solid and dashed curves in the upper right panel of Fig. 4 measures the correlation between the D_NP and f_NP nonperturbative effects, namely, it measures the impact of the intrinsic k_T on the determination of the rapidity evolution kernel. We see that in each case this impact is non-negligible. If we look at the analogous results for lower masses in the upper left panel (μ = 5 GeV), we see that for the quadratic model in particular (red curves) the impact of the intrinsic k_T increases, even exceeding the uncertainty bands. That is, although Table 3 (case 2) shows the quality of the fit from the quadratic model to be comparable to that of the saturating and linear models, the quadratic model requires a much more pronounced dependence on the intrinsic k_T distribution than the others, which is revealed especially at low masses.

Figure 4: The rapidity evolution kernel D(μ,b) from the fits in Table 3. In the lower panels the result for the global DY+SIDIS fit [20] is also plotted.

Apart from the intrinsic-k_T correlations, the differences among the three solid curves in the upper panels of Fig. 4 illustrate the current status of the determination of the large-|b| behavior of the non-perturbative rapidity evolution kernel from fits to experimental data. As expected, the sensitivity of current LHC measurements to the long-distance region is limited, resulting in sizeable uncertainty bands at large |b|. This sensitivity could be enhanced by precision measurements of the low-q_T DY spectrum at the LHC, with fine binning in q_T, for low masses μ ≪ M_Z (see e.g. first results from LHCb [66]).
For comparison, in the lower panels of Fig. 4 we also report the result for D obtained from the global fit to Drell-Yan and semi-inclusive deep inelastic scattering (SIDIS) data [20] (grey curves in the two lower panels of Fig. 4). The global fit includes, besides LHC data, also data from low-energy experiments, and it is performed assuming the linear model in Eq. (8). It is interesting to observe that the grey curves at μ = M_Z and μ = 5 GeV in the lower panels, compared to the blue curves obtained from the same linear model, are lower and closer to the yellow curves (saturating behavior), reflecting the role of low-energy data in determining the long-distance features of D.
Conclusion
Transverse momentum spectra in DY lepton-pair production have been measured at the LHC and at lower-energy collider and fixed-target experiments. The low-q_T end of the DY spectra is important for the extraction of the W-boson mass and for hadron-structure investigations.
In this paper we have carried out a study of low-q_T DY spectra based on the TMD factorization approach in Eq. (1), using the ζ-prescription (10) to treat the double-scale evolution in Eqs. (2), (3). This approach contains the perturbative TMD resummation through the coefficients in Table 1, and the non-perturbative TMD contributions through f_NP (intrinsic k_T) and D_NP (non-perturbative Sudakov) in Eqs. (4) and (5) (besides the non-perturbative collinear PDFs in Eq. (4)). As such, it can be contrasted with other approaches in the literature: on one hand, low-energy approaches based on a fixed-scale parton model, which include non-perturbative TMD contributions but do not include any perturbative resummation and/or evolution of TMDs; on the other hand, high-energy approaches based on purely perturbative resummation and non-perturbative collinear PDFs, which do not include any non-perturbative TMD contributions.
We have limited ourselves to the low-q_T region q_T ≪ Q, and have not addressed issues of matching with finite-order perturbative corrections, which are essential to treat the region q_T ∼ Q (see e.g. [52,54,56,58]).
Using this theoretical framework, we have performed fits to low-q_T DY measurements from the LHC and from lower-energy experiments. The ultimate goal of these fits is to extract universal (non-perturbative) TMD distributions to be used in factorization formulas of the type (1), much in the spirit of the approaches discussed in [67][68][69]. This will be essential to bring the use of TMDs in phenomenological analyses to a level similar to that of ordinary parton distributions. The determination of non-perturbative TMDs from fits to experimental measurements is complementary to determinations from lattice QCD (see e.g. ongoing lattice studies of D_NP [70,71]). In this work we have focused on studying the sensitivity of LHC and lower-energy DY experiments to the non-perturbative f_NP and D_NP contributions, and on examining their correlations with different extractions of collinear PDFs. To this end, we have defined model scenarios for D_NP in Eqs. (7)-(9) and for f_NP in Eq. (11).
We have presented results from global DY fits (Figs. 1, 2 and Table 2) and from LHC-only fits (Table 3 and Fig. 3). These results indicate that, while the strongest sensitivity to the intrinsic k_T is provided by the low-energy data, neglecting any intrinsic k_T at the LHC worsens the description of the lowest q_T bins in the DY spectrum, giving higher χ² values in the fit (see the differences between cases 1 and 2, between cases 3 and 4, and between cases 5 and 6 in Table 3), and causes a potential bias in the determination of the rapidity evolution kernel D(μ,b) (see the differences between cases 1 and 2, between cases 3 and 4, and between cases 5 and 6 in Fig. 4). A quantitative measure of the size of nonperturbative TMD effects is provided in Fig. 3 and compared with perturbative theoretical uncertainties estimated from scale variations. Given the strong reduction of these uncertainties achieved through the high logarithmic accuracy of the perturbative resummation and the use of the ζ-prescription for scale-setting, the residual uncertainty due to nonperturbative TMD effects is found to play a non-negligible role for the DY spectrum at the LHC in the low-q_T region, a role which increases with decreasing DY masses.
On the other hand, we see from the comparison of cases 2, 4 and 6 in Fig. 4 that the large-|b| behavior of D is not yet constrained by currently available data, either at low energy or at the LHC. We have investigated and contrasted the hypotheses of quadratic behavior, which has traditionally been considered by extrapolation from the perturbative result, and of saturating behavior at long distances. We have observed in particular that the latter, besides being consistent with current LHC fits, is also compatible with the result of a global fit based on an intermediate linear model but including low-energy DY and SIDIS data. Given the extraordinarily high experimental accuracy achieved in DY processes at the LHC, this opens new opportunities for future LHC analyses. Specifically, extending measurements of the DY transverse momentum q_T, for low q_T ≪ Q and with fine binning (of order 1 GeV or below), into the so-far unexplored region of low masses Q < 40 GeV will provide valuable new information on D at large |b|, and thus enable improved determinations of TMDs.
Application of Metal-Based Nanozymes in Inflammatory Disease: A Review
Reactive oxygen species (ROS) are metabolites of normal cells in organisms, and normal levels of ROS are essential for maintaining cell signaling and other intracellular functions. However, excessive inflammation and ischemia-reperfusion can disturb the tissue redox balance: oxidative stress develops in the tissue, producing large amounts of ROS and causing direct tissue damage. Many diseases are associated with excess ROS, such as stroke, sepsis, Alzheimer's disease, and Parkinson's disease. With the rapid development of nanomedicine, nanomaterials have been widely used to treat various inflammatory diseases effectively owing to their superior physical and chemical properties. In this review, we summarize the application of some representative metal-based nanozymes in inflammatory diseases. In addition, we discuss the application of various novel nanomaterials in different therapies and the prospects of using nanoparticles (NPs) as biomedical materials.
INTRODUCTION
The oxygen-containing single-electron by-products produced by cells during respiration and metabolism are called reactive oxygen species (ROS) (Panth et al., 2016). ROS comprise superoxide radicals, hydroxyl radicals, peroxyl radicals, hydrogen peroxide, hypochlorous acid, and ozone (Panth et al., 2016) (Figure 1). ROS production and scavenging normally maintain a dynamic balance. When this balance is disturbed, the cell's antioxidant system is disrupted, prompting the cell to produce excessive ROS, which destroys the redox state and causes oxidative stress. Oxidative stress can severely damage cells, injuring cell components including the cell membrane and nucleus (Floyd and Carney, 1992;Cabiscol et al., 2000;Leutner et al., 2001). This structural damage impairs cell function and results in a series of serious diseases, for instance, Parkinson's disease, heart/kidney ischemia-reperfusion injury, diabetes, inflammation, cardiovascular disease, and cancer (Barnham et al., 2004;Giordano, 2005;Rolo and Palmeira, 2006;Jaeschke, 2011;Wang and Choudhary, 2011).
ROS are generated by both intracellular and extracellular metabolic pathways (Gligorovski et al., 2015;Nosaka and Nosaka, 2016). Substances that eliminate, inhibit, or prevent ROS from reacting with cells are called antioxidants (Flora, 2007;D'Autréaux and Toledano, 2007;Lambeth, 2004), and they are used to remove the excess reactive oxygen species produced in living organisms. Organisms' antioxidant systems contain several kinds of free-radical scavengers: endogenous scavengers such as superoxide dismutase (SOD) and vitamin E; exogenous scavengers such as polyphenols; and Chinese herbal medicines with antioxidant action, such as Ganoderma lucidum and Salvia miltiorrhiza. Furthermore, the natural antioxidant enzyme system catalyzes free-radical reactions into harmless products, removing ROS and reducing ROS-induced damage. The leading natural antioxidant enzymes include superoxide dismutase (SOD), catalase (CAT), peroxidase (POD), and glutathione peroxidase (GPx) (Valko et al., 2006;Tidwell, 2013;Hamada et al., 2014;Kong et al., 2020;Hou et al., 2021). Although traditional natural enzymatic antioxidants are widely used, they are easily oxidized and have low bioavailability, poor modifiability, and poor stability. In addition, they are difficult to target to sites of oxygen free-radical generation, cross the blood-brain barrier poorly, and are easily neutralized by cell culture medium. To make up for these deficiencies, several studies have sought substitutes or mimic enzymes that can compensate for the shortcomings of natural enzymes and have developed new antioxidants better suited to application in production and daily life.
With the rapid development of nanotechnology, nanomaterials have been widely used in biomedicine, optics, catalysis, and other fields because of their excellent physical and chemical properties, their ability to penetrate cell membranes, their high activity, and their low production cost (Yang et al., 2019;Liu et al., 2021;Yi et al., 2022). Because certain nanoparticles were found to possess an inherent ability to mimic the catalytic activity of biological enzymes, they are called nanomimetic enzymes, or nanozymes for short. In 2007, Yan Xiyun's team found that ferroferric oxide (Fe₃O₄) nanoparticles show intrinsic horseradish peroxidase (HRP)-like activity and can catalyze the reaction between a substrate and hydrogen peroxide, demonstrating that nanomaterials themselves can simulate the functional activity of some biological enzymes (Gao et al., 2007). In earlier studies, nanomaterials such as fullerenes, gold nanoparticles, and ferromagnetic nanoparticles had also been found to possess the activity of certain natural enzymes (Dugan et al., 1996;Comotti et al., 2004;Manea et al., 2004;Li et al., 2015). Nanozyme antioxidants, as nanozyme preparations, make up for the deficiencies of traditional natural enzymes: they offer low production cost, a high degree of modifiability and surface activity, targeted enrichment in specific tissues, high biocompatibility, and scalable production. These advantages have made nanozyme antioxidants widely used in cancer treatment, the biological sciences, drug delivery, biological antioxidation, and other fields. Nanoparticles could be an efficient therapeutic option for clinical treatment because they alter the biological distribution of antioxidants and have an inherent ability to scavenge free radicals. According to the reaction catalyzed, existing nanozymes can be divided into peroxidase (POD) mimics, oxidase mimics, catalase (CAT) mimics, and superoxide dismutase (SOD) mimics: POD mimics catalyze the oxidation of substrates by hydrogen peroxide; oxidase mimics catalyze the oxidation of substrates by oxygen; CAT mimics catalyze the decomposition of hydrogen peroxide; and SOD mimics catalyze the disproportionation of superoxide anion into hydrogen peroxide and oxygen. Therefore, these NPs could be applied to disease diagnosis, treatment, and biomedicine (Duan et al., 2015;Wang et al., 2016a;Cervadoro et al., 2018) (Figure 2).
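For reference, the reactions corresponding to these enzyme-like activities can be written as follows (standard enzymology, stated generically rather than for any specific nanozyme; AH₂ denotes a generic reducing substrate):

```latex
\begin{align*}
\text{SOD-like:} &\quad 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \\
\text{CAT-like:} &\quad 2\,\mathrm{H_2O_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{O_2} \\
\text{POD-like:} &\quad \mathrm{H_2O_2} + \mathrm{AH_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{A} \\
\text{OXD-like:} &\quad \mathrm{O_2} + 2\,\mathrm{AH_2} \longrightarrow 2\,\mathrm{H_2O} + 2\,\mathrm{A}
\end{align*}
```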
According to recent studies, three kinds of nanozymes are commonly used: Prussian blue, nano-cerium dioxide, and manganese-based nanozymes. In 1996, Okuda's team found that fullerenes could eliminate superoxide radicals (Okuda et al., 1996); Seal's team (2005) showed that nano-cerium dioxide (CeO₂) could prevent radiation-induced cell damage and attributed this to free-radical scavenging (Okuda et al., 1996). Additionally, Gu et al. (2016) revealed that Prussian blue nanoparticles (PBNPs) have catalase and superoxide dismutase properties (Watanabe et al., 2009). Qu Xiaogang and Ren Jinsong (2016) constructed a Donanamil complex system, which effectively eliminates overexpressed ROS in cells and protects cells from oxidative stress damage (Huang et al., 2016). This article reviews the types of nanoparticles with antioxidant capacity, the mechanisms of action of nanozymes, and their clinical applications, mainly introducing the research progress of manganese NPs, ceria NPs, Prussian blue NPs, and other NPs in inflammation, Parkinson's disease, brain/liver ischemia-reperfusion, and other diseases.
Manganese-Based NPs
Manganese-based nanomaterials are nanomaterials with manganese as the active center and natural enzyme activity (Huang et al., 2016). In addition to playing an essential role in photosynthesis, divalent manganese plays a crucial role in hydrolysis and phosphotransferase. The more expensive manganese is the redox center of ribonucleotide reductase, catalase, peroxidase, and SOD in the mitochondria (Li et al., 2017;Wan et al., 2012). Manganese phosphate (Mn 3 (PO 4 ) 2 ) is the first reported manganese nanoscale with superoxide dismutase activity (Miriyala et al., 2012). In 2014 (Figure 3), Wu et al. used bovine serum protein to wrap manganese dioxide (BSA-MnO 2 ), which can regulate the anaerobic status of the tumor microenvironment and has a specific effect on tumor treatment (Prasad et al., 2014). In 2016, researchers found that octahedral manganese oxide (MnO) nanomaterials have the properties of superoxide dismutase, and the relaxation time of MnO increases when it is in contact with superoxide radicals (Prasad et al., 2014). In addition, the researchers found that the Frontiers in Bioengineering and Biotechnology | www.frontiersin.org June 2022 | Volume 10 | Article 920213 dendritic structure of Mn 3 O 4 nanoparticles had a larger pore size and activities of superoxide dismutase and catalase (Yan et al., 2013). Furthermore, the elimination rate of Mn 3 O 4 on the superoxide anion can reach 50-60% (Singh et al., 2017). These advantages make up for the defects of natural enzyme antioxidants, so the researchers have researched the application of manganese-based nanozymes in treating clinical diseases and provided corresponding theoretical support for the application of manganese-based nanozymes in clinical treatment. Based on this, this article reviews the recent progress in synthesizing Mn 3 O 4 nanoparticles by relevant teams in treating Parkinson's disease, inflammation, and other diseases.
Parkinson's disease (PD) is a neurodegenerative disease characterized by degeneration of the nigrostriatal pathway and is closely related to oxidative stress (Barnham et al., 2004). In the brain, dysregulation of redox balance can lead to oxidative stress in neuronal cells, resulting in neuronal loss (Yan et al., 2013;Li et al., 2020). Currently, there is no effective clinical treatment for PD, and treatment of the disease remains a focus for clinicians and scientists. A few nanomaterials have been reported to mimic the activity of heme peroxidase, CAT, oxidases, or SOD enzymes (Gao et al., 2007;Asati et al., 2009;Wei and Wang, 2013;Dong et al., 2014). In addition, it has been reported that under pathophysiological conditions, proper control of H₂O₂ levels through CAT-GPx cooperativity is essential (Baud et al., 2004). Given the in vitro antioxidant enzyme activities of such nanomaterials, Namrata Singh et al. studied flower-like Mn₃O₄ nanoparticles (Mnf) (Singh et al., 2017). The phase and morphology of Mnf did not change during the reaction, which proved that the high activity of Mnf is closely related to its morphology and multiple pore sizes. The human neuroblastoma-derived cell line SH-SY5Y was selected for the cell experiments, and a PD cell model was established using the neurotoxin MPP⁺, which targets dopamine neurons (Singh et al., 2017). The MTT assay verified that Mnf nanoparticles were nontoxic, and the results of the H2-DCFDA probe experiment showed that the loss of neuronal cell processes induced by MPP⁺ was repaired, confirming the protective effect of the nanoparticles on cells. More importantly, the results of Namrata Singh et al. proved that Mnf has a better reactive oxygen species scavenging capacity than traditional natural enzymes, providing effective reference data and theoretical guidance for the clinical application of Mnf in treating neurodegenerative diseases caused by oxidative stress.
Inflammation has been demonstrated to cause various diseases, such as rheumatoid arthritis (Choy and Panayi, 2001), cardiovascular diseases (Taube et al., 2012), and even cancer (Colotta et al., 2009). The inflammatory response in the body is closely related to reactive oxygen species, and the production of free radicals at inflammatory sites is part of the pathogenesis of these diseases. Mn₃O₄ nanoparticles (NPs) have multiple enzyme-like activities that remove superoxide radicals, hydrogen peroxide, and hydroxyl radicals (Gorecki et al., 1991;Miriyala et al., 2012). Jia Yao et al. synthesized Mn₃O₄ NPs by the hydrothermal method and found that they are more stable and more active than natural enzymes, showing a good ROS-scavenging effect in vitro (Yoshimura et al., 2021). These findings provide a basis for clinical studies of drugs for ROS-mediated inflammation. DLS and zeta-potential results proved that the Mn₃O₄ NPs had good long-term storage stability, and their high crystallinity was confirmed by TEM (Yao et al., 2018;Yoshimura et al., 2021). The superoxide-specific probe hydroethidine (HE) was used to characterize the O₂⁻-scavenging capacity of the Mn₃O₄ NPs, and the experimental results showed good SOD-like activity. Reaction of terephthalic acid (TA) with hydrogen peroxide was used to demonstrate that the Mn₃O₄ NPs also had catalase-like activity, and absorption and EPR spectra were used to detect hydroxyl radical levels in the presence of Mn₃O₄ NPs. In in vivo experiments, PMA was applied to the ears of mice, producing a typical local inflammatory response in the treated area. Comparison of the fluorescence intensity of the H2-DCFDA probe before and after treatment proved the reactive-oxygen-species-scavenging ability of the Mn₃O₄ NPs, and H&E staining showed that PMA induced lymphocyte infiltration in the mouse ear, whereas ears treated with Mn₃O₄ NPs showed significantly alleviated inflammatory symptoms. Jia Yao et al. thus not only demonstrated the scavenging activity of Mn₃O₄ nanozymes but also provided a promising therapeutic strategy for redox-related inflammation using these redox-active nanozymes.
Inflammatory bowel disease (IBD) is a nonspecific chronic inflammatory disease of the intestinal tract, mainly comprising ulcerative colitis (UC) and Crohn's disease (Sheng et al., 2019;Chen and Shen, 2021). The pathogenesis of IBD is very complex and is generally believed to involve immune and environmental factors as well as genetic defects or changes (Podolsky, 2002). There are few effective treatments for IBD. Oxidative stress is a fundamental cause of IBD and plays an essential role in some of its characteristic signs and symptoms, such as abdominal pain, diarrhea, and toxic megacolon (Zhao et al., 2019). Excessive ROS production leads to oxidative damage to DNA, proteins, and lipids, which may promote the initiation and development of IBD (Mittler, 2017). Therefore, targeting inflammatory sites and scavenging reactive oxygen species may be effective strategies to alleviate IBD. Natural enzymes in organisms, such as superoxide dismutase and catalase, can precisely remove O₂⁻ and H₂O₂, respectively, protecting the body from ROS damage. However, ROS in the focal area of IBD are often excessive, and the body's natural enzymes often cannot keep ROS within the normal physiological range, making the inflammation difficult to clear. At the same time, because of their high specificity, natural enzymes usually catalyze only one substrate selectively, and it is challenging for them to remove multiple ROS simultaneously. To obtain Mn₃O₄@OLA@DSPE-PEG-COOH (hereinafter DPO-Mn₃O₄ nanozymes) that are stable in aqueous solution, Jia Yao et al. modified the surface of Mn₃O₄@OLA with the well-defined, biocompatible small-molecule lecithin derivative distearoyl phosphatidylethanolamine-carboxy-terminated polyethylene glycol (DSPE-PEG-COOH). They confirmed the ability of the DPO-Mn₃O₄ nanozyme to scavenge superoxide anions in vitro, and subsequently established dextran sulfate sodium (DSS)-induced inflammatory bowel disease in mice (Mittler, 2017).
Cerium-Based NPs
Cerium oxide (CeO₂) nanoparticles, also called nanoceria or cerium oxide nanoparticles (CNPs), are well-known catalysts that show significant pharmacological potential due to their antioxidant properties (Ferreira et al., 2018). Cerium oxide itself is oxidizing and, as a combustion catalyst or catalyst carrier, is widely used to remove harmful industrial gases or indoor formaldehyde; it is also favored in the chemical industry. In contrast to the purely oxidizing character of bulk ceria, nanoceria is both oxidizing and reducing. Oxygen in the nanoceria lattice is readily released, producing oxygen-vacancy defects, and to preserve the charge balance of the crystal, a small amount of Ce⁴⁺ is converted to Ce³⁺. Ce³⁺ is reducing and is oxidized back to Ce⁴⁺ under oxidizing conditions.
Consequently, Ce³⁺ is easily oxidized and is itself reducing (Celardo et al., 2011). Therefore, CNPs scavenge free radicals by reversibly binding oxygen while cycling between the Ce³⁺ (reduced) and Ce⁴⁺ (oxidized) forms on the particle surface, an ability comparable to that of biological antioxidants. Because of this, CNPs are believed to exhibit dismutase- and catalase-like activities, protecting cells from superoxide ions and hydrogen peroxide, two major reactive oxygen species (Korsvik et al., 2007;Singh et al., 2011). Ceria nanoparticles are widely used in biomedicine for treating antioxidant-related diseases, and electron spin resonance has confirmed the free-radical-scavenging nature of cerium oxide (Colon et al., 2009). In 2009, a US chemical and medical research team led by J. Colon published an article showing that cerium oxide nanoparticles protect normal cells from radiation-induced pneumonitis, opening a new chapter for nanoceria in biomedical applications (Colon et al., 2009;Colon et al., 2010). With the increasing interest in cerium oxide nanoparticles for clinical use, research teams have studied the application of cerium oxide in the treatment of liver ischemia-reperfusion and have also made significant progress in the treatment of ischemic stroke. This article accordingly reviews the research results of the relevant teams.
Hepatic ischemia-reperfusion injury (IRI) is one of the crucial causes of liver injury during liver transplantation, resection, and hypovolemic shock, and excessive production of reactive oxygen species is an essential factor leading to it (Eltzschig and Eckle, 2011;Zhai et al., 2013). Antioxidant treatment to improve the oxidative stress state of the injured liver is an effective therapeutic measure against IRI. The mononuclear phagocyte system (MPS) can affect the delivery of nanomaterials to the desired disease area (Bourquin et al., 2018;Feliu et al., 2016) (Figure 4). Direct prevention of liver IRI through nano-antioxidants with preferential liver uptake is therefore an important research direction for applying nanomaterials to the treatment of liver IRI. Combining the redox-activity characteristics of cerium oxide nanoparticles with the advantage of targeted enrichment in disease regions, Dalong Ni et al. selected cerium oxide nanoparticles as a representative nano-antioxidant (Ni et al., 2019). Establishing a mouse liver IRI model, they examined the specific mechanism by which cerium oxide NPs prevent IRI and introduced a method of using the black-box effect of nanomaterials to treat liver IRI in live animals. Positron emission tomography (PET) imaging allowed real-time, noninvasive assessment of the biological distribution of the cerium oxide NPs: labeling the ceria NPs with the radionuclide ⁸⁹Zr proved that surface modification with polyethylene glycol (PEG) gives these cerium oxide NPs good blood circulation. Quantitative PET image analysis showed that the NPs were cleared from the mouse liver over 1 to 21 days; this long-term accumulation in the liver supports the application of cerium oxide against hepatic IRI in vivo. The protective effect of cerium dioxide NPs on IRI was then compared in the mouse liver IRI model. Aspartate aminotransferase (AST) and alanine aminotransferase (ALT) results showed liver injury in untreated IRI mice and effective prevention of IRI by cerium oxide NPs in the treated group. H&E staining showed that liver sections of IRI mice in the PBS group had large areas of severe damage, with significant lipolysis, necrosis, and hemorrhage of liver cells, whereas the liver tissues of the IRI group treated with cerium oxide NPs were only slightly damaged, indicating that cerium oxide NPs protect against IRI and demonstrating their biocompatibility in vivo. The team's research also provides a reference for the potential application of nano-antioxidants in preventing liver IRI (Ni et al., 2019).
Acute ischemic cerebrovascular syndrome, also known as ischemic stroke, is a severe cerebrovascular disease that can lead to loss of brain function, disability, and even death. Oxidative damage is one of the important mechanisms of ischemic brain injury (Ngowi et al., 2021). During ischemia, the accumulation of superoxide anions, hydroxyl radicals, and hydrogen peroxide induces oxidative damage and leads to cell apoptosis (Barnham et al., 2004;Nathan and Cunningham-Bussel, 2013). Study results indicate that PEG-MeNPs have good antioxidant stability. In vitro, neuro-2A cells were treated with cobalt chloride (CoCl₂) to directly evaluate the neuroprotective effect of PEG-MeNPs against ischemia; CoCl₂ treatment is a common in vitro model used to investigate antioxidation therapy for ischemia-related neuronal diseases. CoCl₂ significantly increased ROS levels in neuro-2A cells, followed by cell death due to oxidative stress, while PEG-MeNPs significantly reduced intracellular ROS levels, demonstrating their antioxidant capacity in cells. In vivo, PEG-MeNPs were injected into the lateral ventricles of experimental rats to evaluate their efficacy in the ischemic brain. Brain tissue sections and TTC staining showed that the ischemic cerebral infarction area of rats pretreated with PEG-MeNPs was about 14%, versus about 32% in the control group; the infarction area in the PEG-MeNPs-pretreated group was thus significantly smaller, and the generation of O₂⁻ was also effectively inhibited, proving the possibility of treating ischemic brain injury with PEG-MeNPs. In vivo safety was assessed after tail-vein injection of PEG-MeNPs: they showed good blood compatibility and did not change serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALKP), blood urea nitrogen (BUN), total protein (TP), or albumin levels, and histological analysis of different organs showed no morphological changes or signs of inflammation after treatment, demonstrating that cerium oxide nanoparticles can be applied safely in vivo.
Iron-Based NPs
Prussian blue (PB), also known as ferric ferrocyanide, is the only antidote for clinical thallium poisoning approved by the Food and Drug Administration (FDA). In recent years, various types of functional PB-based nanomaterials have been widely used in inflammation therapy, tumor therapy, molecular imaging, and other biomedical fields (Patra, 2016). PB can mimic various enzyme activities, including SOD, CAT, and POD. For example, Zhang W et al. found that Prussian blue nanoparticles (PBNPs) can catalyze H₂O₂ through artificial POD activity (Zhang et al., 2014). Further study by the team found that PBNPs do not produce ROS such as •OH through the Fenton reaction; instead, they have POD activity that inhibits •OH generation, as well as several other enzyme-like activities such as CAT and SOD, allowing them to effectively remove a variety of ROS. They are therefore widely used to inhibit ROS-induced cell damage.
Mechanistic studies of ROS scavenging by PB nanozyme particles show that the surface area of the nanozyme is a crucial factor determining ROS-scavenging efficiency; consequently, hollow PB nanozymes are expected to have good ROS-scavenging efficiency (Li and Shi, 2014;Wang et al., 2016b). However, hollow PB nanozyme particles synthesized by the microemulsion method have poor crystallinity, require multiple synthesis steps at high temperatures, and, more importantly, are not easy to mass-produce (McHale et al., 2010;Roy et al., 2011). Against this background, Kai Zhang et al. constructed monodisperse, well-crystallized hollow Prussian blue nanozymes (HPBZs) using a simple, template-free synthesis strategy; in vitro, the HPBZs scavenged multiple ROS and also inhibited the production of reactive oxygen species. These results indicated that HPBZs could promote cell protection through antioxidant activity. Considering the in vitro antioxidant, anti-inflammatory, and anti-apoptotic activity of HPBZs, the team further studied their disease-modifying function in a rat model of ischemic stroke induced by middle cerebral artery occlusion (MCAO). PET experiments showed that HPBZ pretreatment reduced ischemic injury in rats: the cerebral infarction volume was significantly smaller than in saline-injected MCAO rats, suggesting that HPBZ pretreatment can improve neurological dysfunction. Further comparison of brain NO levels and antioxidant capacity among the three groups of rats showed that HPBZ pretreatment effectively suppressed MCAO-induced RNS generation and significantly increased the anti-ROS activity of brain tissue. Consequently, these results demonstrated the antioxidant properties of HPBZs in the treatment of ischemic stroke in vivo. In summary, Kai Zhang et al. constructed monodisperse HPBZs with good crystallization using a simple, template-free synthesis strategy and applied them to treating ischemic stroke in rats, guiding the large-scale application of PB nanoparticles.

As mentioned previously, targeting intestinal inflammation and clearing ROS play key roles in IBD treatment. CAT, GPx, SOD, POD, and other antioxidant enzymes are unique proteins with catalytic activity and high selectivity, and they all require iron-containing cofactors to exert excellent catalytic activity. Therefore, iron-based nanozymes have become a research focus in recent years. On this basis, Zhang W et al. (Figure 7) designed a Prussian blue nanoparticle (PBNP) with a diameter of about 50 nm that possesses SOD, POD, CAT, and other enzyme-like activities; it can effectively remove ROS and, injected through the tail vein, can effectively control inflammation in vivo in lipopolysaccharide (LPS)-induced inflammatory mice. PBNPs can target and accumulate in DSS-induced inflamed colon tissues of mice, exert artificial-nanozyme activity, effectively remove the excessive ROS produced in inflammatory tissues, block ROS-related inflammatory and oxidative stress responses, and inhibit the progression of colitis, improving intestinal inflammation. Later, Zhao Jiulong et al. (Zhao et al., 2019) (Figure 8) developed a new and effective strategy of nanozyme-catalyzed nanotherapy for IBD: based on the activity of PBNPs, a novel manganese-Prussian blue nanozyme (MPBZs) was synthesized by using PBNPs as the basic framework and introducing strongly reducing Mn²⁺.
With their nanoscale size and good physiological stability, MPBZs can pass through the strongly acidic environment of the stomach and the alkaline environment of the intestine to reach the inflamed colon, where they specifically accumulate by exploiting the EPR effect and the charge effect of the intestinal epithelium. Owing to the introduction of Mn²⁺, the reducibility of MPBZs is greater than that of PBNPs, and their ROS-scavenging ability is improved. Therefore, after oral administration of MPBZs to DSS-induced acute IBD mice, the drug targeted the inflamed intestinal tissues, effectively removed ROS, and improved intestinal inflammation. At the same time, MPBZs effectively reduced the expression levels of MPO, MDA, IL-1β, IL-6, IFN-γ, and TNF-α in the inflamed colon tissue and effectively inhibited the progression of inflammation. Moreover, the MPBZs synthesized by Zhao Jiulong et al. possess multiple enzyme-like activities, such as SOD and CAT, that effectively eliminate ROS; as high-quality artificial enzymes, they inhibit oxidative stress during the inflammatory response and modulate inflammation-related redox signaling pathways and the expression of key molecules, thereby reducing inflammation. This provides a new therapeutic approach, especially for ROS-related diseases such as IBD, with broad application prospects.
Platinum-Based NPs
Pt-NPs have activities similar to those of mitochondrial electron-transport complexes and can act as SOD and catalase mimics. Watanabe et al. studied the antioxidant effects of PAA-protected platinum nanoparticles (PAA-Pt) (Watanabe et al., 2009). In in vitro evaluation, PAA-Pt scavenged the AOO• radicals generated by AAPH thermal decomposition in a dose-dependent manner, with a half-maximal inhibitory NP concentration (IC₅₀) of 584 μM, while the control showed no antioxidant activity; PAA-Pt was at least six times more active than other metal nanoparticles. The inhibition by PAA-Pt of AOO•-induced linoleic acid peroxidation was further evaluated by determining oxygen consumption and thiobarbituric acid reactive substances (TBARS). Oxygen consumption decreased significantly after the NPs were added. The authors suggest that the peroxidation-inhibition mechanism of PAA-Pt particles may mainly be the elimination of AOO•, which inhibits the peroxidation of linoleic acid. The thiobarbituric acid (TBA) test was used to assess the production of free malondialdehyde (MDA) during lipid peroxidation, and the results showed that PAA-Pt reduced the production of lipid peroxides by inhibiting the propagation of the AOO•-induced peroxidation chain reaction. In vivo, Pt-NPs also act as scavengers of RONS. Katsumi et al. demonstrated for the first time in a mouse model that these NPs can protect against hepatic ischemia-reperfusion injury (Katsumi et al., 2014). In their study, two Pt-NPs of different sizes were injected intravenously into mice, liver damage was induced by blocking the portal vein, and the liver was then reperfused for six hours. The results showed that both NPs accumulated in liver nonparenchymal cells after injection; the smaller the Pt-NPs, the more significant the decrease in alanine aminotransferase (ALT) and aspartate aminotransferase (AST). The small NPs also inhibited the increase in the ratio of oxidized to reduced glutathione in the ischemic liver, effectively reducing the rise in lipid peroxides.
Selenium-Based NPs
As part of the liver's antioxidant defense system, selenium plays an essential role against oxidative stress. Many studies have shown that selenium supplementation can increase enzymes such as GPx, preventing the accumulation of free radicals and thus reducing cell damage (Tapiero et al., 2003;Qin et al., 2015). Therefore, selenium-containing nanomaterials have intrinsic antioxidant properties. Zhai et al. used chitosan (CS) of different molecular weights to stabilize the synthesis of SeNPs and then evaluated the antioxidant capacity of these nanoparticles (Zhai et al., 2017). In vitro cell tests showed that intracellular free-radical generation was inhibited in a selenium-concentration-dependent manner; moreover, the CS-SeNPs, and CS(L)-SeNPs in particular, were stable under external or oral use, effectively protected glutathione peroxidase activity in mice, and accordingly prevented ultraviolet-induced lipofuscin formation. Se-doped carbon quantum dots (Se-CQDs) are also free-radical scavengers (Rosenkrans et al., 2020). In a recent study, the radical-scavenging ability of Se-CQDs was investigated using the electron spin resonance (ESR) technique: after the addition of Se-CQDs, the ESR signal of the DMPO/•OH adduct disappeared, indicating intense •OH-scavenging activity. In vitro cell models have shown that Se-CQDs can protect MDA-MB-231 cells from H₂O₂-induced oxidative stress by reducing H₂O₂-induced cell death and consequently increasing cell viability (as determined by CCK-8). In addition, quantitative analysis of ROS induction in MDA-MB-231 cells using the fluorescent RONS probe DCFH-DA showed that the DCFH-DA fluorescence intensity decreased significantly after Se-CQD addition, confirming the antioxidant capacity of Se-CQDs.
CONCLUSION
Reactive oxygen species (ROS) are metabolites of normal cells in organisms, and normal ROS levels are very important for maintaining cell signaling and other intracellular functions, so dysregulation of ROS balance is involved in the pathophysiology of many diseases. Ideal antioxidant nanomaterials should be able to remove multiple primary and secondary free radicals, maintain antioxidant activity against oxidative damage, be biocompatible, and have controllable properties such as size and a modifiable surface. Choosing the proper nanomaterial for each disease is complicated because each disease has its own unique oxidative stress process, which is the most critical factor affecting disease outcome. The selection of suitable nanomaterials for ROS scavenging should be based on their specific antioxidant capacity as well as their biological distribution, circulating half-life, immunological properties, and other in vivo pharmacokinetics. With the development of nanotechnology research, various nanomaterials have been developed; it is therefore desirable to explore the antioxidant properties of these nanomaterials through extensive in vivo studies.
AUTHOR CONTRIBUTIONS
WS, JG, and WJ conceived the idea, and critically revised and finalized the manuscript. RL and XH wrote the manuscript.
Isolating together during COVID-19: Results from the Telehealth Intervention Program for older adults
Background A pressing challenge during the COVID-19 pandemic and beyond is to provide accessible and scalable mental health support to isolated older adults in the community. The Telehealth Intervention Program for Older Adults (TIP-OA) is a large-scale, volunteer-based, friendly telephone support program designed to address this unmet need. Methods A prospective cohort study of 112 TIP-OA participants aged ≥60 years was conducted in Quebec, Canada (October 2020–June 2021). The intervention consisted of weekly friendly phone calls from trained volunteers. The primary outcome measures included changes in scores of stress, depression, anxiety, and fear surrounding COVID-19, assessed at baseline, 4, and 8 weeks. Additional subgroup analyses were performed with participants with higher baseline scores. Results The subgroup of participants with higher baseline depression scores (PHQ9 ≥10) had significant improvements in depression scores over the 8-week period measured [mean change score = −2.27 (±4.76), 95%CI (−3.719, −0.827), p = 0.003]. Similarly, participants with higher baseline anxiety scores (GAD7 ≥10) had an improvement over the same period, which approached significance (p = 0.06). Moreover, despite peaks in the pandemic and related stressors, our study found no significant (p ≥ 0.09) increase in stress, depression, anxiety, or fear of COVID-19 scores. Discussion This scalable, volunteer-based, friendly telephone intervention program was associated with decreased scores of depression and anxiety in older adults who reported higher scores at baseline (PHQ9 ≥10 and GAD7 ≥10).
Introduction
Older adults are at higher risk of social isolation compared to younger adults (1). The COVID-19 pandemic has further amplified this problem, with lockdowns and social distancing measures designed to decrease the risk of infection in older adults. As a result, an unprecedented number of older adults have found themselves isolated and disconnected from others. Recent literature shows that social disconnection in older adults is associated with higher levels of stress, suicidal ideation, and self-harm (2). As well, social isolation in older adults is associated with an increased risk of medical and psychiatric comorbidity (3,4), premature mortality (5), and poor quality of life (6), relative to older adults who are not isolated. Given the scarcity of resources, the need for scalable, low-cost, and effective interventions to help care for this population has never been greater. To address this need, there has been a large uptake in the use of technology to facilitate access to services (7). Telehealth has demonstrated evidence as a modality for reducing depression (8) and anxiety (9) in older adults. Technology has also been found to be successful in connecting socially isolated older adults (10)(11)(12), as the vast majority have access to a phone (e.g., mobile phone, home phone) (13,14). However, due to the digital divide, many older adults may have difficulty using more modern devices, limiting their capacity to engage in virtual meetings and therefore putting them at risk of heightened social isolation relative to the general population.
Another challenge in the development and provision of services to isolated older adults is the limited availability of health resources and trained personnel. Fortunately, layvolunteer interventions have been shown to improve depression and other mental health symptoms in older adults, making this a scalable approach (15). As such, telephone-based support with volunteers can be a potentially rapid, inexpensive, and convenient intervention option for the urgently required support for isolated older adults. This led to the development of the novel and scalable Telehealth Intervention Program for older adults (TIP-OA) in March 2020 by the GeriPARTy research group (16). TIP-OA is a friendly phone call service (providing social interaction for isolated older adults) that now serves >800 older adults with 350 trained volunteers in the province of Quebec, Canada. The aim of this multilingual (>15 languages, e.g., Hebrew, Chinese, Spanish, Arabic, Urdu, Punjabi, Italian, Russian, Greek, etc.) program is for volunteers to provide social interactions, connect older adults with existing community resources/networks, and help them navigate and access online resources (e.g., grocery delivery, pharmacy refills and delivery). Although comparable telehealth programs for older adults have been set up in North America and Europe throughout the pandemic, few have assessed their effectiveness on participants' mental health by means of empirical research. Of the few that did, most focused on cognitive functioning (17). The primary objective of this study was to evaluate the effectiveness of TIP-OA in reducing participants' stress from baseline to 8-week follow-up. The secondary objective was to evaluate effectiveness in improving depression, anxiety, and fear associated with COVID-19 scores. We hypothesised that participants would report decreased scores of stress, anxiety, depression, and fear of COVID-19 at 8-week follow-up compared to baseline.
Study design
This was a prospective, longitudinal 8-week study of a large-scale, volunteer-based telehealth intervention program for older adults (TIP-OA). Data collection occurred at baseline (prior to, or after receiving only one phone call from, a volunteer), at 4 weeks (±1 week), and at the 8-week mark (±1 week), which was the primary study endpoint.
Ethics
The protocol was conducted in accordance with the Declaration of Helsinki and approved by the Jewish General Hospital Research Ethics Committee on September 24, 2020. This study has been registered on clinicaltrials.gov, registration #NCT04523610.
Sample size and recruitment
Over 270 consecutive prospective TIP-OA program users were contacted, and 111 consented to participate in this study. Recruitment took place October 2020-February 2021, and participants were followed until the 8-week follow-up. Older adults were referred to the TIP-OA program either through community partners (n = 11) or by providing consent to other referral sources such as their community workers or clinicians in long-term care facilities, geriatric psychiatry, geriatric medicine, family medicine clinics, local public primary care centres, or as self-referrals through a 1-800 number. The research team then reached out to them by phone to confirm eligibility and obtain verbal informed consent to participate in the study. "Risk rating" was coded as colours assigned by clinicians after phone assessment of confusion, psychotic thoughts, depression/anxiety, suicidality, functional impairment, COVID distress, and other conditions. Participants with no/mild ratings were coded as green (low risk), those with 2+ moderate ratings as orange (medium risk), and those with 1+ severe rating as red (high risk) (16). For more details, please refer to the previous publication by Dikaios et al. (16).
Inclusion/exclusion criteria
The inclusion criteria for this study was: (1) TIP-OA users aged ≥60 living in Quebec, and (2) spoke English or French. Individuals who had psychotic symptoms, severe hearing impairment, or active suicidal ideation were excluded.
Intervention TIP-OA involved weekly, friendly phone calls from trained volunteers to older adults (aged ≥60), including those experiencing mental health/cognitive issues. The volunteerbased telehealth calls primarily provided friendly social interaction lasting between 5 and 90 min (average 30 min), depending on participants' preferences. Volunteers inquired about clients' general wellbeing, provided updated public health recommendations about COVID-19, conducted a brief needs assessment (e.g., food delivery, medication from their pharmacy, transportation), offered support accessing community resources and fostered social connections through active listening, validation, and conversation. A total of 82 TIP-OA volunteers participated in the study. The majority (n = 65, 79%) were undergraduate/graduate students//employees from community organizations or retired healthcare professionals. A smaller proportion of volunteers (n = 17, 21%) were other members from the community. For protocol details, please refer to the publication by Dikaios et al. (16).
Volunteer training
Telehealth Intervention Program-OA volunteers underwent a rigorous application and selection process. Selected candidates attended a 2-h TIP-OA training session conducted by clinicians through Zoom. Each training group was composed of 4-6 trainees and led by two trainers, to ensure optimal opportunity for interactive training. Moreover, a detailed training manual was provided, which covered an overview of the program, sample conversations, client confidentiality, and an extensive list of community resources (e.g., grocery and pharmacy delivery services). Twice a week drop-in follow-up sessions were offered to all volunteers with the purpose of debriefing and receiving support from the clinicians and trainers. For protocol details, please refer to the publication by Dikaios et al. (16).
Measures Demographic variables
The following variables were collected: age, sex, living setting (e.g., living alone, long-term care), neighbourhood of residence, marital status, highest level of education, language(s) spoken, ethnicity, and baseline risk level (colour coding described in Dikaios et al. (16)). Additional information about digital access and literacy was obtained (e.g., access to and ability to use telephone, computer, internet, Facetime/Zoom).
Primary outcome measure
The Perceived Stress Scale (PSS) is a 14-item scale that was used to measure stress (18), inquiring how participants felt in the past month and the degree to which life events were experienced and appraised as stressful, with responses ranging from 0 (never) to 4 (very often) (18).
Secondary outcomes measures
The Patient Health Questionnaire-9 (PHQ-9) is a 9-item questionnaire that was used to measure depression symptom severity (19). The Generalized Anxiety Disorder-7 (GAD-7) is a 7-item scale that was used to measure anxiety symptom severity (20).
. /fmed. . The COVID Fear Scale is a 18-item scale that was used to measure the participants' anxiety, fear and concern around the current pandemic.
Items included: "Fear that I will be infected" and "Worry if I will be assigned to COVID wards if hospitalized" (21).
Subgroups
The higher-stress subgroup consisted of participants with a PSS score ≥14 (indicative of moderate stress) (18). The higherdepression subgroup consisted of participants with a score of PHQ-9 ≥10 (indicative of moderate depression) (19). The higher-anxiety subgroup consisted of participants with a score of GAD-7 ≥10 (indicative of moderate anxiety) (20).
Data analysis
Normality was tested using the Kolmogorov-Smirnov normality test (22). To evaluate the effectiveness of TIP-OA in reducing scores from baseline to 8-week follow-up, paired t-tests (23) were performed for all outcomes (PSS, PHQ-9, GAD-7, COVID fear scale). Two-tailed p-values <0.05 were considered as statistically significant. Last observation carried forward (LOCF) using 4-week data was used to handle missing 8-week data (16). All statistical analyses were performed using SPSS (version 28.0; SPSS, Inc., Chicago, IL).
Study sample/recruitment
As described in detail in Figure 1, a total of 229 interested study candidates were contacted and 111 individuals consented (48.47%) to take part in the study. There were a total of 14 dropouts (12.61%), 78 individuals completed the intervention (6/8 weeks) and four withdrew before 6 weeks of intervention.
Participant baseline demographic characteristics are described in Table 1. Participant "Neighbourhood" was the neighbourhood of residence that participants self-reported. "Schooling" was defined by participant's highest level of education completed (elementary school graduate, high school graduate, a Bachelor's degree, or a Master's/PhD). Participant's marital status was defined by if they were currently single, married/common-law, or separated. For "languages, " regardless of participants' spoken languages, all research participants spoke either English, French, or were bilingual (spoke English and French). "Minority status" is defined as self-identifying as a visible minority. Ethnicity was defined by patients selfidentifying as African, Caribbean, Caucasian, Southeast Asian, Mixed, or Other. For "living situation, " support was defined as living with another family member, a spouse, child, friend, or in shared housing. For details regarding "Risk rating, " please refer to the "Sample size and recruitment" section. Table 2 shows the results of the paired t-tests for all participants. The mean (SD) change score for PSS (n = 79) at 8 weeks was −0.58 (6.66). There were no significant differences in PSS scores between baseline and 8-weeks [ Table 3 shows the results of the paired t-tests for each participant subgroup. The mean (SD) change score for PSS subgroup (n = 58, PSS ≥14) at 8 weeks was −1.09 (7.15). There were no significant differences in PSS subgroup scores between baseline
Discussion
The aim of this study was to document the impact of a weekly friendly telephone support (TIP-OA) in improving mental health outcomes and fear of COVID-19 from baseline to 8-week follow-up. Our study found that participants did not develop higher scores of stress, anxiety, depression and fear associated with COVID-19 at 8-weeks, compared to baseline, despite the intervention time frame overlapping with stressors of the high peaks of the pandemic in Quebec. However, the subgroup of individuals with higher baseline scores of depression (PHQ-9 score ≥10) showed a statistically significant reduction in scores at 8-week follow-up [mean change score = −2.27 (±4.76), 95%CI (−3.71, −0.82), p = 0.00]. The subgroup of individuals with higher baseline scores of stress (PSS score ≥14) and anxiety (GAD-7 score ≥10) showed a reduction in scores at 8-week follow-up, however, this was not statistically significant. These results suggest that participants' mental health symptomatology stabilized over the 8-week intervention. These are positive findings as the intervention for this study was conducted between October 5th 2020 and June 7th, 2021, coinciding with record COVID-19 infections in Quebec (24), during which media-related stressors and isolation restrictions would have presumably been at their peak. Recent literature has shown that persistent isolation and exposure to worrying COVID-19 statistics and media coverage is associated with adverse mental health outcomes (25), The finding that participants' overall mental health scores did not significantly worsen during this time period suggests that TIP-OA may have played an important role in preventing mental health deterioration in at-risk older adults.
Furthermore, fear of COVID-19 scores for all participants decreased at 8-weeks, compared to baseline. Although this was not a statistically significant finding, it aligns with recent literature showing that being able to openly discuss your fear about COVID-19 with an informed individual who is able to provide accurate information decreases fear of COVID. (26) Besides providing regular friendly phone calls that increase social connection, the TIP-OA program showed two major strengths: (1) the program was able to provide reliable information regarding COVID-19 and government regulations and (2) it was able to facilitate access to community resources (e.g., connect with community programs, food/grocery delivery services, pharmacy delivery services), ultimately helping to mitigate the negative impact of the COVID-19 pandemic. A potential benefit of this intervention was that it could also decrease the burden on the healthcare system as older adults at higher risk of social isolation are associated with a 50% higher healthcare cost and increased resource requirements (27)(28)(29)(30)(31). This study found that the subgroup of individuals with high depressive scores at baseline showed the most significant improvement at 8-week follow-up. Individuals with high depressive scores are known to be most at risk for developing clinical depression which is associated with higher mortality, morbidity, and healthcare (32). Due to the preventative approach, it is possible that the program might have contributed in cutting down the medical cost. Late life depression is especially associated with higher caregiver burden and distress (33). Improving the mental health of those most at risk for clinical depression is the need of the hour, especially during the pandemic with an already overburdened healthcare system (34).
Together, these results speak to the strength of the TIP-OA intervention and its potential to stabilize and improve older adults' mental health outcomes, especially for the most vulnerable older adults, by means of a community-based program that has served over 800 older adults in Quebec, from diverse socio-economic and cultural backgrounds in ≥15 languages.
Strengths and limitations
Although an RCT design is the gold standard trial to assess the effectiveness of the intervention, the TIP-OA team opted for an uncontrolled prospective cohort study. This was meant to be able to provide the service to as many individuals as possible in a timely manner due to COVID-19's impact on mental health and social isolation. In the context of the beginning of the pandemic, having a control group in this study would have delayed access to the service for vulnerable participants with high risk for mental health decline by 8-weeks or more. Additionally, this study design decision was supported by funding avenues dedicated to enhancing accessibility to mental health services in the community. One major strength of this study is that this is a volunteer-based mental health program for older adults that does not place additional demands on the already strained healthcare system nor further burdens healthcare workers and caregivers. In fact, the program supported several social workers and community organizations serving older adults by providing the service to their clients and taking the stress off of their shoulders to deal with the emergency situation created by the pandemic.
Another strength of the TIP-OA program is that it serves individuals in >15 languages which opens a bridge to serve immigrant adults who arrived to Canada later in life and experience language barriers to access services. Future confirmatory RCTs with a larger sample size and a longer study period could be used to further evaluate changes in stress and other outcomes, the effectiveness of TIP-OA, and the impact on older adults who do not speak English/French and may be at greater risk for social isolation. In future studies, identifying if participating individuals were also caregivers may be important to assess the increased caregiver burden during the pandemic.
Conclusion
This study has found that a friendly phone call program (TIP-OA) can help stabilize and decrease mental health symptoms in older adults, with a most pronounced effect on depression scores. TIP-OA supports the most vulnerable community-dwelling older adults who are isolated and experiencing mental health distress through a trained lay volunteer community-based and easily scalable intervention.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Jewish General Hospital Research Ethics Committee. Registered: ClinicalTrials.gov, #NCT04523610. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Conflict of interest
Author SR receives a salary award from the Fonds de Recherche de Québec-Santé (FRQS), is a steering committee member for AbbVie, and is a shareholder of Aifred Health. HS has a CIHR fellowship award, MITACS fellowship award, and AGE-WELL award. Author EM receives salary support from the Fond de recherche Santé Québec. Author SK has received research support from Brain and Behavior Foundation, National institute on Aging, BrightFocus Foundation, Brain Canada, Canadian Institute of Health Research, Canadian Consortium on Neurodegeneration in Aging, Centre for Ageing and Brain Health Innovation, Centre for Addiction and Mental Health, an Academic Scholars Award from the Department of Psychiatry, University of Toronto, and Equipment support from Soterix Medical. Author SBo receives funds from the Canada research Chairs program and various provincial and federal granting agencies. He is president of, and owns equity in, Cliniques et Development In Virtuo; a company that distributes virtual reality environments. Conflicts of interest are managed under UQO's conflicts of interest policy.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
|
2022-10-11T13:35:39.269Z
|
2022-10-11T00:00:00.000
|
{
"year": 2022,
"sha1": "4d1efd1ee469750c058dcd8d1b3667d6cca2292f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "4d1efd1ee469750c058dcd8d1b3667d6cca2292f",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
38015714
|
pes2o/s2orc
|
v3-fos-license
|
Decreased Gait and Function in Duchenne Muscular Dystrophy
Duchenne muscular dystrophy (DMD) is a genetic disorder linked to chromosome Xp21, due to absence of dystrophin production. It is clinically characterized by progressive muscle weakness, fatigue, and development of joint contractures that compromise general motor functionality, mainly the gait. Objective: To characterize the motor function and decrease gait in children with DMD using the Portuguese version of the Motor Function Measure scale (MFM-P). Methods: A review of medical records including chronological age and scores from MFM-P of children with a DMD who attended at the Neuromuscular Diseases Clinic at Campinas State University (UNICAMP), Brazil was performed in this study. A total of 36 medical records of male patients with confirmed clinical diagnosis of DMD, ambulatory or not, regardless of age; excluding those with other associated diseases or other types of muscular dystrophies were selected. Data were analyzed using Kolmogorov-Smirnov and Spearman correlation statistical tests. Results: Analysis of all data collected showed that 75% of our sample had D1 scores lower than 41.02%. There was a linear relationship between the scores of D2 and D3, but no association between D2 and D1 scores was noted. D1 score was between 40% and 80% in those patients presenting D2 scores between 80% and 100%. In all cases patients with low total score presented a greater risk for loss of gait and their functionality. Conclusion: The standing posture and the postural transfers were the worst activities observed in children with DMD, with positive correlation between proximal and distal motor function. Even with high scores according MFM-P in proximal function, the children showed strong predictors for loss of gait.
Introduction
According to Fernandes et al. [1] Duchenne muscular dystrophy (DMD) is a neuromuscular disease caused by a genetic disorder linked to the X chromosome, clinically characterized by progressive and irreversible muscle weakness that compromise the motor function.
It is caused by the absence of the protein dystrophin, due to a deletion in the dystrophin gene, located on chromosome Xp21 [2], leading to a disruption in the mechanism of calcium release.A controlled release of calcium is essential for the muscle fiber contraction and affected cells are susceptible to sarcolemma rupture during contraction.This excessive intake of calcium and inadequate activation of proteases and phospholipases damage the muscle fibers that are replaced by fibro-fatty tissue, featuring a pseudo-hypertrophy of the muscles involved.The absence of dystrophin in the region of synapses in the cerebral cortex also seems to contribute to the cognitive deficits found in patients with DMD [3].
Clinically it is observed progressive muscle weakness, predominantly in the lower limbs, resulting in a characteristic gait pattern (anserine gait) with excessive anterior pelvic tilt, increased hip flexion and abduction, in order to advance the limb; also increased lumbar lordosis, and knee hyperextension in balance are observed.The speed, cadence and stride length of the gait are reduced to improve balance [3].
The children affected by this progressive muscular dystrophy present weakness that interferes with normal functional action.In such case, assessment of the motor function and muscle strength is frequently used to monitor the progress of the disease in this population [4].
According to D'Angelo et al. [5] and Melanda et al. [6], changes in the locomotor system as progressive muscle weakness, muscle fatigue, and development of joint contractures, may alter the gait of these children and lead to loss of this ability between the first and the second decade of life.
Typically in the early stages of DMD, when children are still ambulatory, it is possible to observe a significant reduction in muscle strength.While in advanced stages, when the ability to walk is no longer present (nonambulatory), many functional abilities are equally lost, and motor skills are significantly decreased [7].
Due to this loss of overall functionality, it needs a specific and reliable tool to monitor therapeutic procedures and establish prognosis for recovery.To aim this goal, Berard et al. [8] created the Motor Function Measure (MFM) scale that was validated specifically for neuromuscular diseases by the research group of the Pediatric Rehabilitation L'Escale, Lyon, France.The Brazilian version of this scale was validated in 2008 by Iwabe et al. [9], and it analyzed the functions of the head, trunk, proximal and distal segments of members from a variety of neuromuscular diseases.The version for the Brazilian Portuguese (MFM-P) (Appendix) can be accessed in website http://www.mfm-nmd.org.
The MFM provides a numerical measure of motor skills for patients with neuromuscular diseases.It comprises 32 items, static and dynamic.These items are tested in lying, sitting or standing positions and are divided into three dimensions: Dimension 1 (D1)-standing position and transfers, with 13 items; Dimension 2 (D2)-axial and proximal motor function, with 12 items; Dimension 3 (D3)-distal motor function, with 7 items, 6 of them are related to the upper limb.Each item is graded on a four-point scale, the generic degree is defined by: 0cannot start the task or cannot maintain the initial position; 1-partially performs the exercise; 2-partially performs the requested movement or completely realized, but imperfectly (with compensations, insufficient maintenance of time position, slowness, lack of control of movement); 3-fully realized, "usual" exercise with controlled, perfect, objective, and performed with a constant velocity motion.The MFM scale allows the comprehensively evaluation, proximal, distal and axial motor dysfunctions, through evidence into three dimensions, thus being useful instrument for use in a broad spectrum of neuromuscular disorders, ranging from those predominant in girdle to distal ones.The MFM is adapted to patients with walking ability and those with partial or total restriction of movement [7].
Unfortunately there is no curative treatment for DMD at present, but there are palliative managements that can slow down some clinical signs and improving the quality of life and muscle strength, such as the use of corticosteroids and physiotherapy that may prolong the ability to walk [10] [11].
The aim of this study was to analyze the correlation between the gait and motor function in children with DMD by MFM-P scale.
Methods
A total of 36 medical records with confirmed clinical diagnosis of DMD, attended at the Neuromuscular Dis-eases Clinic from Campinas State University (UNICAMP) were reviewed.All patients were male, between 3 and 16 years-old (mean 11.19 ± 3.53), originating from the countryside and neighborhood of São Paulo and had previous MFM-P scores [7] data.Exclusion criteria comprised no confirmed DMD clinical diagnosis, and association with other diseases or other types of muscular dystrophies.
This retrospective study was approved by the Ethics and Research at Unicamp, by number 532419.
Statistical Analysis
Data were described and analyzed inferentially with the distribution of the scores of the dimensions (D1-D2-D3) and total score was normalized using the Kolmogorov-Smirnov test.Spearman's coefficient test with a significance level (p < 0.001) was used to verify the correlation between the variables: D1, D2, D3, total score and loss of ambulation.
Results
The scores of the dimensions and the total score did not show a normal distribution using adjustment Kolmogorov-Smirnov test (p-value < 0.001).
Information regarding the dimensions (D1, D2, D3) and functionality (total score) from each patient were analyzed with values measured in percentages.Low values were indicative of severe functional impairment of the child.
Table 1 shows the correlation between D1 -D2 -D3 scores and functional capacity (total score) from all patients.Values range (in percentage) observed in these patients for D1 were between 0% and 82.05%, for D2 between 16.66% and 100.00%, for D3 between 23.80% and 95.23% and for functional capacity between 12% and 88.54%.
Figure 1 shows the scatter plots of functional capacity versus scores (top graphics) and in between dimension scores (bottom graphics).We observed a positive linear association between functional capacity and D2 and D3 scores, but not with D1 scores.When analyzing the values among dimensions we noted a linear relationship between D2 and D3 scores; however no association was observed between D2 and D1 scores.The D1 scores were between 40% and 80% when patients had D2 score between 80% and 100%.
Table 2 shows the values of the total scores versus the dimensions scores when analyzed using the Spearman correlation test with p-values.We observed a positive and statistically significant correlation between dimension and total scores.
Table 3 shows the results analyzed between decreased gait (total score < 70% and D1 < 40.13%) with other dimensions using the Spearman correlation test.We also observed a positive and statistically significant correlation between decrease in gait and dimensions scores (p-value < 0.001).
Discussion
The availability of instruments for the functional assessment of patients with neuromuscular diseases such as MFM-P, are a great value as they allow the characterization, assessment and a more accurate follow-up from patients with DMD.Early detection of any motor disturbance, also permit earlier interventions in order to optimize motor function [9] [12]- [14].
Ganea et al. [15] assessed the gait pattern in children with DMD and correlated it with the MFM scale, founding that those with higher dimensions scores, especially in D1, had also higher values in both cadence and gait speed.
Fischmann et al. [16] agreed that the MFM scale is a promising metric instrument to be used in clinical trials to assess the effect of treatments due to the loss of ambulation time in DMD.Their results demonstrated the potential of the MFM scale as a parameter to distinguish the stage of the disease especially regarding to the walking function.
The clinical features of DMD are described since the mid-1800s, initially affecting the muscles of the pelvic girdle and progressing to the neck region, upper abdominal and upper limbs.According to D'Angelo et al. [5] and Bushby et al. [11] the clinical signs appear first proximal muscles affecting the posture and gait, followed by compromise of the distal, with total loss of this function around 8 -10 years of age.Changes in the locomotor system such as progressive muscular weakness, fatigue, and development of joint contractures leads to a postural compensation with difficulty to get up from the ground and especially during gait cycles [17] [18].These data corroborate our results in which lower scores were observed in the dimension that evaluates standing posture and gait (D1), followed by the proximal involvement (D2) in the majority of analyzed patients.
According to Mc Donald et al. [19], the walking motor activity is crucial for human independence that requires a complex interaction of different muscles from different muscle groups; especially from the proximal portion in order to stabilize the body axis and permit the body and the members to work as a single unit.Another study Bakker et al. [20] reported that musculoskeletal disorders; such as joint deformities and muscle weakness especially proximal segments, undermine the balance as well as the bipedalism and ambulation.
Another study [21] confirmed that loss of strength, especially from the hip extensors, as a predictor of loss of ambulation in DMD.This weakness progressively decreases the ability of a person to perform activities in standing and walking position, developing compensatory movements.Even in the early stages of the disease, it is possible to quantify the difficulties of standing and gait, using effective metric tools.
The execution of more complex tasks during the standing posture requires a sofisticate control of antigravity muscles.The observation of these compensatory movements during functional tasks; such as not being able to get up from the floor or climb up stairs, or the loss of speed, may allow the recognition of important changes in muscle synergies, as well as the prediction of future deficiencies [22].
In this study we were able to observe that the rapid loss of ambulation, as well as the decreased functionality of proximal segments with advancing disease, occurs due to strength deficits related to proximal muscles in the standing posture, observed at the lower values from D1 and D2 scores.
Conclusions
We concluded that there was a positive correlation between the loss of the ability to walk and functionality in children with DMD where deficits in gait are more evident when the patients have more difficulties in performing activities related to the standing posture and proximal member functions.
We also suggest that increasing the number of patients through a multicentric study will allow us to identify more risk factors that lead to the loss of gait in these patients.
In our study, the use of the MFM-P allowed us to detect different degrees of functional deficits, proving to be an efficient and reliable tool for scientific and clinical use.We consider the MFM-P as an important tool that will allow a better characterization of the severity of the disease, optimizing the physical therapy program to achieve a better result and delaying the progress of the disease.
Figure 1 .
Figure 1.Scatter plot comparing values between scores and functional capacity.
Table 1 .
Descriptive measures of scores D1-D2-D3 and functionality of children with DMD (values in %).
Table 2 .
Correlation between the dimensions and total scores values (p values).
Table 3 .
Correlation of loss of gait with dimensions scores (p values).
|
2017-08-15T22:36:05.295Z
|
2014-08-13T00:00:00.000
|
{
"year": 2014,
"sha1": "d551ca2e3fcef0969cc6ab7840cd2a3da19ef97b",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=48984",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d551ca2e3fcef0969cc6ab7840cd2a3da19ef97b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
45480166
|
pes2o/s2orc
|
v3-fos-license
|
Direct Reprogramming of Resident NG2 Glia into Neurons with Properties of Fast-Spiking Parvalbumin-Containing Interneurons
Summary Converting resident glia into functional and subtype-specific neurons in vivo by delivering reprogramming genes directly to the brain provides a step forward toward the possibility of treating brain injuries or diseases. To date, it has been possible to obtain GABAergic and glutamatergic neurons via in vivo conversion, but the precise phenotype of these cells has not yet been analyzed in detail. Here, we show that neurons reprogrammed using Ascl1, Lmx1a, and Nurr1 functionally mature and integrate into existing brain circuitry and that the majority of the reprogrammed neurons have properties of fast-spiking, parvalbumin-containing interneurons. When testing different combinations of genes for neural conversion with a focus on pro-neural genes and dopamine fate determinants, we found that functional neurons can be generated using different gene combinations and in different brain regions and that most of the reprogrammed neurons become interneurons, independently of the combination of reprogramming factors used.
INTRODUCTION
Direct cellular reprogramming provides a route to generate neurons from somatic cells in vitro (Vierbuchen et al., 2010) that opens up new possibilities to obtain patient-and disease-specific neurons, and several groups have reported successful reprogramming into functional neurons of distinct subtypes in vitro (reviewed in Masserdotti et al., 2016). More recently, it has been shown that non-neural cells can be reprogrammed into functional neurons in situ (reviewed in Grealish et al., 2016). Many of the neurons obtained acquire a GABAergic or glutamatergic identity (Grande et al., 2013;Torper et al., 2015), but the exact subtype identity and how fate specification is controlled during in vivo conversion remains an important question.
In this study, we performed a time-course analysis of NG2 glia reprogrammed into neurons using Ascl1, Lmx1a, and Nurr1 (ALN). We show that in vivo reprogrammed neurons functionally mature over time and that their ability to fire action potentials (AP) precedes circuitry integration. We also reprogrammed neurons in the dopamine (DA)-depleted striatum and in the midbrain, and tested different combinations of pro-neural genes and DA fate determinants. In all these conditions, we found only minor differences in the phenotype of the reprogrammed cells. A detailed analysis using electrophysiology, immunohistochemistry, and transcriptional profiling showed that most of the reprogrammed neurons acquire properties of fast-spiking (FS), parvalbumin (PV)+ interneurons (IntNs), a neuronal subtype that plays a highly interesting role in striatal function and with potentially important therapeutic roles.
Gradual Maturation into Functional Neurons
We injected NG2-Cre mice with CRE-dependent ALN conversion vectors and a GFP reporter that specifically labels reprogrammed neurons (Torper et al., 2015). To estimate the conversion efficiency, we also injected animals (n = 3) with a Cre-dependent GFP under the ubiquitous chicken beta-actin (cba) promoter rendering all targeted cells GFP+. We found that the vectors efficiently targeted NG2 glia ( Figure S1A), and estimated that 66.81% ± 38.38% of targeted cells converted into neurons ( Figure S1B).
Reprogrammed neurons were detected by their endogenous GFP expression ( Figure 1A). Biocytin neuronal filling of such GFP + neurons revealed mature neuronal morphologies and extensive dendritic trees of the reprogrammed neurons ( Figure 1B). Electrophysiological recordings performed on the GFP-expressing neurons 5, 8, or 12 weeks post-injection (w.p.i.). showed that membrane-intrinsic properties gradually matured: Membrane capacitance (Cm) increased ( Figure 1C), while the input resistance and the resting-membrane potential (RMP) decreased ( Figures 1D and 1E), indicating that the cells acquired more ion channels, increased in size, and gained more elaborate morphology with time. During the same time period, the frequency of spontaneous postsynaptic activity increased, suggesting added postsynaptic connections ( Figure 1F).
For all these parameters, cells were not significantly different from endogenous cells by 12 w.p.i. (Figures 1C-1F).
Spontaneous inhibitory currents could be blocked with picrotoxin, and spontaneous excitatory currents could be blocked with CNQX ( Figures 1G and 1H), indicating that reprogrammed neurons received synaptic input from both inhibitory and excitatory terminals, probably from nearby striatal neurons (GABAergic) and from more distal glutamatergic terminals (corticostriatal and thalamostriatal). While all cells but one (n = 17) showed repetitive firing from 5 w.p.i. (Figures 1I and S1C-S1F), few neurons at this time point showed postsynaptic activity. The proportion of neurons that displayed postsynaptic activity increased from 5 to 12 w.p.i. (Figures 1J and S1C-S1S). (legend continued on next page)
ALN Reprogrammed Neurons in the Striatum
properties characteristic of striatal medium-spiny projection neurons (MSNs, cell type A in Figure 2A), while most cells showed a firing pattern similar to striatal IntNs (Figures 2B-2D) (Kawaguchi, 1993). Most cells displayed hyperpolarized resting membrane potential and similar firing frequency and input resistance as fast-spiking IntN (FSI) (Povysheva et al., 2013), with relatively short AP even though the AP duration and spike after-hyperpolarization were not yet in the range of their endogenous counterpart (Kawaguchi, 1993) (cell type B in Figure 2B and Table S1). Some cells showed other firing patterns reminiscent of long-lasting (LA) after-hyperpolarization or persistent and low-threshold spiking (PLTS) cells (cell type C and D, Figures 2C and 2D and Table S1). Immunohistochemical analysis at 12w.p.i. revealed the presence of markers common to IntNs such as PV (marker of FS cells), ChAT (marker of cholinergic, LA cells), NPY (marker of PLTS cells), or the striatal projection neuron marker DARPP32 (Figures 2E-2H). Quantifications showed that the majority (41.27% ± 2.99%) co-expressed PV, whereas less than 10% of the GFP+ cells were co-labeled with any of the other markers ( Figure 2I). Thus, many of the IntN-specific markers that are not present at 6 w.p.i. (Torper et al., 2015) appear after additional maturation time in vivo.
DA Denervation or Reprogramming Region Does Not Affect Reprogramming Efficiency, Maturation, or Phenotype
We next tested if DA denervation that radically changes the striatal compartment and induces glia activation (Walsh et al., 2011) could affect reprogramming in vivo. NG2-Cre mice received a unilateral 6-OHDA toxin injection into the medial forebrain bundle (mfb lesion, n = 9), which produced a substantial loss of DA neurons in the SNc ( Figures S3A and S3B) and subsequent loss of their projections to the dorsolateral striatum. Littermate control animals were left intact (n = 10).
Three weeks after lesions, animals were injected with ALN into the striatum and analyzed 12 w.p.i. GFP+ neurons were abundant in the lateral striatum and found in equal numbers in both intact and lesioned animals (Figures 3A, 3B, and S3C). TH+ cell bodies were found in the striatum of lesioned animals ( Figure 3F) but not in intact controls ( Figure 3E), which could be indicative of reprogramming into DA neurons under these conditions, as has been suggested in a recent study using similar DA conversion factors to reprogram resident mouse astrocytes (Rivetti di Val Cervo et al., 2017). However, none of the TH+ neurons were co-labeled with the GFP reporter ( Figure 3F), suggesting that these were not in vivo reprogrammed neurons. Indeed, ectopic TH+ cell bodies were present in similar numbers in the striatum in control animals that were lesioned but not reprogrammed 15 weeks after lesion (Figures 3G and 3H). Like in the study by Rivetti di Val Cervo et al. (2017), most TH+ neurons in the control lesioned animals were negative for GABAergic IntN markers (Figures 3I and 3J) and positive for other DA markers, such as Nurr1 ( Figure 3K), but only weakly expressing DAT ( Figure 3L). Such ectopic striatal TH+ cell bodies have been found after lesion as reported in a number of studies (reviewed in Tepper and Koó s, 2010), and we also confirmed their presence after 15 weeks in lesioned wild-type mice from a separate experiment ( Figure S3K).
The reprogrammed neurons in both the intact and lesioned brains were analyzed using whole-cell patch clamp recordings. All neurons (n = 12) showed similar physiological properties with the ability to induce repetitive APs (Figures 3C and 3D) and also contained voltagegated sodium and potassium currents ( Figure S3D). Furthermore, the majority of cells (n = 9) displayed spontaneous postsynaptic activity ( Figure S3E). The cells in the lesioned mice showed similar frequency in spontaneous activity as in intact mice (1.44 ± 0.36 Hz for intact, 1.52 ± 0.14 Hz for lesioned) (Figures 3C and 3D). None of the reprogrammed neurons recorded from displayed any DA-specific functional properties ( Figure S3F).
We next tested if injecting the factors into the midbrain (homotopic environment for DA neurons) would influence the subtype toward DA identity. Analysis at 12 w.p.i. revealed the presence of GFP+ cells intermingled with endogenous nigral TH-expressing DA neurons ( Figure 3M). None of the reprogrammed neurons co-expressed GFP and TH ( Figures 3M-M 00 ), even though a significant number of cells co-expressed GFP and the reprogramming factors ALN (Figures S3G-S3J). (legend continued on next page)
Overexpression of Different Gene Combinations in Striatal NG2 Glia
The appearance of IntNs after reprogramming using factors that give rise to DA neurons in vitro is intriguing and raises the question of how cell-fate conversion is controlled during in vivo conversion. We therefore next tested if different combinations of pro-neural (Ascl1, Ngn2, NeuroD1) and DA fate-specifying genes (Lmx1a, Nurr1, FoxA2, En1), could affect the phenotype of the converted neurons. Four different combinations of conversion factors, NgLN (Neuro-genin2, Lmx1a, and Nurr1), ANgN (Ascl1, Neurogenin2, and Nurr1), NgN D1 (Neurogenin2 and NeuroD1), and AFLE (Ascl1, FoxA2, Lmx1a, and En1), were injected either alone or together with the midbrain-specific chromatin remodeler Smarca1 (Metzakopian et al., 2015) into the striatum of intact NG2-CRE mice ( Figures 4A and 4B). GFP+ cells with complex neuronal morphology (Figures 4C 00 -4F 00 ) could be observed in all groups. However, no co-expression of TH and GFP reporter could be detected ( Figures 1C-1F and Figures 1C 0 -1F 0 ). No GFP+ neurons were detected in control animals injected with the reporter vector alone ( Figures 1G-1G 00 ).
A more detailed phenotypic analysis of the reprogrammed neurons using the different factor combinations revealed that 9.03% to 27.01% of the reprogrammed cells expressed GAD65/67, while no VGlut1+ neurons were identified in any condition (data not shown). Similar to ALN, the largest proportion expressed the interneuron marker PV. ChAT+ and NPY+ neurons were found in lower percentages and CTIP2 was found in less than 10% of the reprogrammed neurons ( Figures 4H and S4A-S4E).
DISCUSSION
In vivo reprogramming has emerged as a future possibility for brain repair. However, the phenotype of the reprogrammed cells obtained in vitro often differs from that obtained in vivo (Su et al., 2014). For example, several factors that convert astrocytes into neurons in vitro (Berninger et al., 2007;Heinrich et al., 2010) fail to do so in vivo (Grande et al., 2013). Our group has previously reported successful reprogramming of resident NG2 glial cells into neurons in vivo, using ALN. Despite the fact that these genes give rise to TH-expressing DA neurons when fibroblasts and astrocytes are reprogrammed in vitro (Addis et al., 2011;Caiazzo et al., 2011), no TH-expressing neurons were generated via in vivo reprogramming (Torper et al., 2015). Here, we reprogrammed NG2 glia in the 6-OHDA lesion mouse model, a condition that was also used in a study published during the revision of this manuscript, to reprogram mouse astrocytes using a slightly different combination of genes and miRNAs (Rivetti di Val Cervo et al., 2017). In both these studies, reprogramming was shown to be achievable in the DA-denervated striatum, which supports the use of in vivo reprogramming for brain repair. However, the interpretation of the finding that TH+ cells appear after reprogramming in the lesioned striatum (this study and the Rivetti di Val Cervo et al., 2017) is complicated as non-dopaminergic TH+ neurons appear spontaneously in response to the DA-denervating lesion (Tepper and Koó s, 2010). We include a GFP reporter for identification of reprogrammed neurons and found that GFP did not co-label with TH. We conclude therefore that the neurons we observe are not TH+ DA neurons generated via reprogramming but rather striatal neurons that express TH in response to the lesion. In the Rivetti di Val Cervo et al (2017) study, the origin of the TH+ cells is unclear as no reporter was used, and no significant effect on DA-dependent behavior was observed. Thus, more work is needed in order to obtain functional DA neurons via in vivo reprogramming.
The functional properties of the reprogrammed neurons mature over time, and by 12 weeks the reprogrammed cells have phenotypic and functional properties of IntNs. The striatum is mainly composed of MSNs (95%) and, to a lesser extent, IntNs of different subtypes. Given the endogenous subtype distribution of striatal neurons, the selective conversion into neurons with properties of FSI expressing PV, (C and D) Neurons in both conditions, (C) intact and (D) lesioned, showed repetitive current-induced action potentials (AP; traces on the left) and spontaneous postsynaptic events (traces on the right), in the absence of any drugs or stimulation. (E) Reprogrammed neurons in the intact brain express GFP (E 0 ), but not TH (E 00 ). (F) TH+ neurons were observed in the striatum after 6-OHDA mfb lesion, but these cells do not co-express GFP (arrows). (legend continued on next page) which normally accounts for less than 1% in the striatum, is noteworthy. This raises the question of how cell fate is influenced during in vivo conversion. We tested three different pro-neural genes (Ascl1, Ngn2, and NeuroD1), which have all been used previously for neural conversion (Grande et al., 2013;Guo et al., 2014;Liu et al., 2015), and found that independently of which neurogenic genes or fate determinants are used, the reprogrammed cells still were of an interneuron identity, and the different combinations had only minor impact on the subtype identity.
The presence of GABAergic neurons when Ngn2 was used for conversion may seem counterintuitive. However, during in vivo reprogramming, Ngn2 has previously been shown to drive both GABAergic and glutamatergic phenotypes (Grande et al., 2013;Gascon et al., 2016). This may be explained by differences in starting cell types that can have different transcriptional accessibility of target genes (Wapinski et al., 2013), or differences in the level of expression as high levels of Ngn2 drive glutamatergic differentiation in the developing forebrain but support GABAergic neuron formation when expressed at low levels (Parras et al., 2002).
GABAergic IntNs have previously been generated from resident glia, both in the latent state or after trauma such as stroke (Grealish et al., 2016). Our study shows the direct reprogramming of resident NG2 glia into neurons similar to FS, PV-containing IntNs, a subtype that plays a highly interesting role in striatal function. An initial deprivation of DA input in the striatum does create an imbalance in local circuits that involve IntNs (Martinez-Cerdeno et al., 2010), and GABAergic stimulation in the striatum could support and enhance the effects from intrastriatal DA transplants (Winkler et al., 1999). In line with this, intrastriatal transplantation of FSI precursors has even shown motor improvement in a rat Parkinson's disease (PD) model (Mallet et al., 2006). Moreover, striatal FSI dysfunction may underlie some forms of dystonia in PD (Gittis et al., 2011). This type of IntN has implications also in several diseases affecting the human striatum such as Tourette's syndrome, Huntington's disease, and paroxysmal dystonia (Reiner et al., 2013;Kalanithi et al., 2005;Gernert et al., 2000), which makes these neurons interesting from a therapeutic point of view (Spatazza et al., 2017).
Cloning and Viral Vector Production
Cre-inducible AAV5 vectors were created using a similar approach as in Torper et al. (2015).
Animals and Surgery
All experimental procedures were carried out under the European Union Directive (2010/63/EU) and approved by the ethical committee for the use of laboratory animals at Lund University and the Swedish Department of Agriculture (Jordbruksverket). Surgeries were performed under general anesthesia using 2% isoflurane mixed with air at a 2:1 ratio. For in vivo conversion experiments, 1 mL of vector mix (AAV5) was injected into the striatum of each animal at a rate of 0.4 mL/min with a diffusion time of 2 min.
Electrophysiology
Patch-clamp electrophysiology was performed on striatal brain slices from ALN-injected mice using same methods as in Torper et al. (2015).
RNA Sequencing of Nuclear GFP+ Cells Isolated by Laser Capture Microdissection
Mouse brains were dissected (n = 2-3 animals/group) after decapitation and snap frozen in 2-methylbutane (Sigma-Aldrich) on dry ice and stored at À80 C until further processed. Brains were sectioned in a cryostat at À20 C, and striatal sections with target GFP-positive cells placed onto PEN membrane glass slides (Zeiss) (three replicate slides per animal), which were kept at À20 C during the sectioning and subsequently stored at À80 C. Transcriptomic data were generated using LCM-seq (Nichterwitz et al., 2016).
For more detailed information, see Supplemental Information.
ACCESSION NUMBERS
The accession numbers for the gene sequences (the vectors avaliable via Addgene) are as follows: All data are presented as means ± SEM, and an unpaired t test was performed to evaluate differences between percentages of doublepositive cells found among the four conditions (NgLN, ANgN, NgN D1 , AFLE); *p < 0.005 (p = 0.0018 for differences in NPY expression, and p = 0.0019 for differences in GAD65/67 expression); **p < 0.05 (p = 0.0081 for differences in GAD65/67 expression). Scale bars: all 25 mm. See also Figure S4.
|
2018-04-03T06:06:34.021Z
|
2017-08-24T00:00:00.000
|
{
"year": 2017,
"sha1": "9a5b7be3e4fc5498ccebc8c745cec1d881db1f25",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2213671117303338/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a5b7be3e4fc5498ccebc8c745cec1d881db1f25",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
17146427
|
pes2o/s2orc
|
v3-fos-license
|
MRI findings at image guided adaptive cervix cancer brachytherapy: radiation oncologist’s perspective
Magnetic resonance imaging (MRI) represents the reference imaging modality for image guided adaptive brachytherapy (IGABT) of cervix cancer. Accurate interpretation of pre-treatment MRI is required for proper understanding of the tumor extent and topography at IGABT. Planning and optimal timing of the application begins already before treatment, and may need to be adapted during external beam irradiation (EBRT) according to additional clinical and/or radiological findings. The level of MRI utilization in IGABT depends on the infrastructural capabilities of individual centers, ranging from no use at all to repetitive imaging during EBRT and each IGABT fraction. In this article, we summarize the role of different imaging modalities and practical aspects of MRI interpretation in cervix cancer IGABT, concentrating on the systematic evaluation of post-insertion images. MRI with the applicator in place from the radiation oncologist’s perspective should begin with immediate identification of eventual complications of the application procedure and assessment of the implant adequacy, followed by appropriate corrective measures in case of adverse findings. Finally, the tumor extent, topography, and treatment response should be evaluated in the context of initial clinical and radiological findings to allow for an appropriate selection and delineation of the target volumes.
Purpose
The conventional approach to cervix cancer brachytherapy (BT) is based on acquisition of two orthogonal pelvic radiographs after the intracavitary applicator insertion. The capability for treatment plan optimization with this technique is limited. The optimization process typically aims at achieving an adequate dose at the Manchester point A while keeping the doses at the bladder and rectum ICRU (International Commission on Radiology Units) points below their respective dose constraints. Significant amount of clinical experience has been accumulated with this approach during the past century, and it remains the essential component of locally advanced cervix cancer treatment in the majority of centers worldwide [1,2]. While local control rates of conventional BT for small tumors are encouraging, ranging up to 80-95%, the results for the locally advanced disease remain suboptimal [3]. Efforts to improve results of conventional BT in these patients by local dose escalation and perineal interstitial techniques were impeded by a considerable risk of serious late complications [4][5][6][7]. Sectional imaging has been implemented into BT planning during past decade at a growing number of centers. 3D image guided adaptive brachytherapy (IGABT) enables a comprehensive assessment of the tumor size and topography at diagnosis and at each BT fraction. It allows for an assessment of correlations between the dose-volume parameters and effects for the target volume and the organs at risk (OAR). This is a more valid concept than the conventional assessment of correlations between point-doses and tissue effects. Treatment plan optimization in the context of IGABT refers to the individual tailoring of irradiation, applying high doses to the target volume while respecting the OAR dose constraints [8,9]. The dosimetric benefits of this approach have been shown to translate into improved rates of uncomplicated cure when compared to the conventio nal BT [10,11]. Currently, magnetic resonance imaging (MRI) represents the modality of choice for IGABT. Accurate interpretation of pre-treatment MRI findings is a precondition for proper understanding of the clinical and radiological disease extent and topography at time of IGABT. Planning of the application technique and optimal timing of the procedure begins already before treatment and may need to be adapted during external beam irradiation (EBRT) according to the clinical and eventual addition-al radiological findings. The level of MRI utilization in IGABT differs between individual centers, ranging from no use at all to repetitive imaging during EBRT and at each IGABT fraction. These differences reflect the various infrastructural capabilities, institutional policies, and economic constraints of individual centers. This article summarizes the role of different imaging modalities in cervix cancer IGABT, and outlines the practical aspects of MRI interpretation of the primary tumor extent and topography at BT from the radiation oncologist's perspective. The interpretation of the imaging pathology of the regional nodes is not addressed here.
Imaging modalities
The adaptive concept of cervix cancer IGABT is based on repetitive clinical and imaging interpretation of tumor extent and topography at diagnosis, its regression during chemo-radiotherapy, and the residual pathological tissues at each BT fraction. In IGABT, the dose is adapted to a target which changes in size and shape from application to application, and the ability for an accurate delineation of the regions of interest is a precondition for treatment success and consistent reporting. Various imaging modalities have been employed in gynecological IGABT [8,9,[12][13][14][15][16][17][18][19][20][21][22][23].
The role of ultrasound (US) is currently limited mainly to the real-time or off-line guidance of the applicator insertion. Optimal positioning of the BT catheters in the target volume is a pre-requisite for achieving an optimal dose distribution during treatment planning. US-guided placement of the intracavitary and interstitital applicators is an effective and inexpensive method to prevent organ perforation [12][13][14][15]24], and to achieve an optimal implant geometry [16,17]. While US can add important information to clinical findings with regard to the definition of the extent and topography of the target volume and organs at risk at BT, its utility in treatment planning currently remains limited. Further development of the US-based contouring concepts, sonographic devices, ultrasound probes, applicators, and their set-up and compatibility are required to meet the specific demands of ultrasound imaging with the applicator in place.
Computed tomography (CT) is a relatively inexpensive and accessible modality for IGABT. However, it is characterized by a low ability to discriminate between soft tissues. It demonstrates limited value in differentiation between the tumor and normal cervix, uterus, parametria and vagina, and delineation of the organ walls [25][26][27][28][29][30]. Target volume contours, as outlined on post-insertion CT, can significantly overestimate the tumor width when compared with MRI [25]. The role of CT as the only imaging modality in IGABT of locally advanced cervix cancer is therefore limited. CT is traditionally considered to be adequate for depiction of the outer organ contours, allowing for a reliable estimation of the dose-volume parameters for the OAR [25]. However, in daily clinical experience, the ability of MRI to differentiate the organ walls from organ contents and its high soft tissue depiction quality result in more accurate delineation of the outer organ contours. In spite of the above listed limitations of CT, replacement of the X-ray based approach with the CT-assisted IGABT accompanied by careful incorporation of clinical findings [26], enables the essential qualitative leap from the 2D to 3-4D treatment planning strategy [27,28]. Utilization of a pre-brachytherapy MRI without the applicator in place, followed by a post-insertion CT could be encouraged whenever possible to facilitate the CT-based contouring by incorporation of the MRI-findings on the CT images. In addition, the combination of the MRI at first BT fraction, followed by CT at subsequent fraction(s) has been proposed as a feasible and effective strategy to cost reduction and improved availability of cervix cancer IGABT [29].
Incorporation of 18F-fluorodeoxyglucose positron emission tomography (FDG PET) findings and implementation of new tracers for angiogenesis, hypoxia, etc., represent an exciting area of research to identify target (sub)volumes that may require different dose levels. Functional imaging based IGABT should be currently considered an experimental approach.
Magnetic resonance imaging (MRI) is characterized by superior soft tissue depiction quality when compared with CT, capability of multi-planar imaging and absence of ionizing irradiation. No intravenous contrast is needed for adequate visualization and delineation of the regions of interest at IGABT. The inter-reader agreement and correlation with pathological findings in operable cervix cancer has been shown to be superior for MRI when compared with CT [25][26][27][28][29][30][31][32][33][34][35][36]. Following the early reports on the use of MRI in BT planning in the 1990s [37][38][39][40], we have witnessed a systematic development of the concepts and terms in various domains of the MRI-based IGABT in the past decade. This development has resulted in publication of reports on the optimal imaging sequences and image orientations, and recommendations on the MRI assessment of GTV and CTV, 3D dose volume parameters, radiation physics, radiobiology, applicator reconstruction, and MR imaging protocols at diagnosis and BT [8,9,[41][42][43][44]. Favorable reports on the dosimetric outcome of MRI-assisted IGABT are being reflected in a growing body of evidence, demonstrating encouraging clinical results when compared to the best results of conventional BT [10,11]. Recently, uncertainties related to the various steps of the IGABT process have been quantified, facilitating the critical interpretation of the results of clinical studies and treatment reporting [45][46][47][48][49]. In summary, MRI is currently considered the imaging modality of choice for cervix cancer IGABT. Implementation of MRI into the BT treatment planning poses several challenges to the radiation oncology team, including the requirements related to the infrastructure, equipment, imaging protocols, training, quality assurance, and interpretation of imaging findings.
Practical aspects of the interpretation of MRI findings at time of brachytherapy
Recently published recommendations from the Gynaecological GEC-ESTRO Working Group summarize the expert opinion on the basic principles and parameters for MR imaging in cervix cancer IGABT, and propose dedi-cated imaging protocols for both the "pre-treatment examination" and "BT examination" [42]. Implementation of published protocols, while taking the specific institutional practice and experience into account, is needed to meet the specific demands of cervix cancer IGABT. Once the post-insertion pelvic MRI data set is obtained, a systematic approach to the interpretation of images can be recommended to assure rapid identification of potentially hazardous conditions, evaluate the adequacy of the BT application, assess the degree of treatment response, and facilitate accurate contouring.
General principles
The general principles of post-insertion pelvic MRI interpretation include the need for synchronous viewing of images in multiple planes, availability of the pre-treatment MRI, and full description of clinical findings at diagnosis and BT. The incorporation of clinical judgment when assessing the imaging findings cannot be overemphasized, in particular with regard to interpretation of the anatomical regions which are accessible to inspection, palpation, and endoscopy (vulva, vaginal walls, portio, bladder, and rectum). The imaging pathology at BT should always be explained in the context of findings at diagnosis. Intra-departmental consultation within the designated team of radiation oncologists and cooperation with the diagnostic radiologist and radiophysicist is advised whenever possible to minimize uncertainties through consensus opinions.
Perforation of hollow organs
Uterine perforation should be excluded immediately following the acquisition of the post-insertion sectional images (Fig. 1A). Although most cases of uterine perfo-ration will resolve spontaneously without consequences after conservative treatment, specific attention is required to avoid subsequent complications such as infection, hemorrhage or peritoneal tumor-cell seeding [24,50,51]. Importantly, in an unrecognized uterine perforation, the applicator may come in close contact with the OAR. If this is not accounted for, excessive irradiation of the OAR and under-dosage of the target volume may occur, resulting in serious acute and chronic complications, and reduced probability of local control.
Reported incidence of the uterine perforations during intracavitary BT ranges from 2-14% [14,50,52,53]. Several predisposing factors for this complication have been identified in the literature, including necrotic cervical tu mor, cervical polyp, submucosal myoma, stenosis/distortion of cervical canal, prior conization, retroflexed or extremely anteflexed uterus and age over 60 years [14,24,50,52,53]. In the largest series by Šegedin et al., reporting on 428 gynecological applications, uterine perforation was indentified on post-insertion 3D imaging in 3% of procedures [24]. Most common perforation site was the posterior uterine wall (70%). In all cases, at least one of the above listed predisposing factors was present. The most commonly identified risk factors were age over 60 years and necrosis and distortion of cervical canal. All cases of uterine perforation in this series were treated conservatively by removing the applicator and monitoring the patient for 24 hours. Prophylactic antibiotics were administered in 62% of patients and blood transfusion required in 20%. All patients completed their radiotherapy as planned, without further complications. Re-perforation was avoided by using the information from 3D imaging and with real-time guidance of insertion with abdominal and/or transrectal US. At the last follow up, there were no signs of tumor-cell seeding in the affected patients A B [24]. It has to be pointed out that in the series of Šegedin et al., pulsed dose rate BT with long overall treatment times was used [24]. In selected cases, treated with high dose rate technique, BT could potentially be completed before the applicator removal if the patient's general medical condition allows and the planning aims are met during dose optimization.
With increasing use of combined intracavitary and interstitial application techniques, there is an increasing risk of perforation or injury of pelvic hollow organs and blood vessels with the needle applicators (Fig. 1B). There are scarce reports in the literature on the incidence and clinical significance of such complications. Immediate appropriate action would depend on the specific clinical situation and consultation with specialist. It has to be emphasized that specific adaptation of the interstitial application technique is recommended to minimize the chance of perforation of pelvic hollow organs or vessels [54,55]. The use of sharp interstitial needles should be avoided. Needles with blunt tips have been developed and made available specifically for gynecological IGABT. Their insertion has been proven safe, feasible, accurate, and reproducible [54,55].
Adequacy of the implant
Accurate insertion of the intracavitary applicator and optimal geometric distribution of eventual interstitial needles is a prerequisite for tight control of the dose distri-bution during treatment planning. The ability to achieve the dosimetric planning aim for the target volume while respecting the OAR dose constraints depends on the geometric adequacy of the implanted applicator channels. The inadequate dosimetric consequences of a suboptimal application cannot always be compensated by treatment plan optimization. In practice, the decision on the application technique and geometry of the inserted channels is typically based on clinical and MRI findings at diagnosis and clinical findings at IGABT. MRI for treatment planning is performed only after the application, limiting the ability for corrections in case of suboptimal implant geometry. To overcome these limitations and to increase the likelihood of optimal implant geometry already at the first application, some authors proposed an MRI-assisted pre-planning strategy [56,57]. However, due to the low availability of the MRI, relative complexity of pre-planning and scarcity of published data to support its routine clinical use, this strategy remains limited to certain specialized institutions or clinical studies. In daily clinical practice, we recommend rapid identification of the eventual inadequacies of the implant geometry on post-insertion MRI immediately following their acquisition. Images should be interpreted jointly by the radiation oncologist and radiophysicist, taking into account the spatial inter-relations between the applicator, target volume, and OAR. Such joint assessment by an experienced team will enable identification of the implant deficiencies based on the predicted isodose distribution, even before the actual treatment planning. If a suboptimal implant geometry is identified ( Fig. 2A), the decision will depend on the specific clinical situation; in principle, one of the following scenarios could be proposed: (1) treatment is completed in spite of suboptimal dose distribution, possibly with a lower dose, followed by an optimized implant and application of higher dose during next BT fraction, improving the cumulative dosimetric outcome; (2) the patient is brought back to the operating room and the implant geometry is optimized, followed by application of the planned treatment; (3) the applicator is removed and the geometrically adequate implant and treatment is carried out during subsequent insertion(s) (Fig. 2B).
Tumor response and target volume assessment
Accurate and meaningful interpretation of the imaging findings at BT is a precondition for appropriate selection of the tissues to be included in the target volumes during the delineation process. The GEC-ESTRO target concept at time of BT includes the gross tumor volume (GTV), high-risk clinical target volume (HR-CTV), and intermediate risk clinical target volume (IR-CTV). The target volumes are defined at each BT, according to the changes of tumor size and topography during time and reflecting the adaptive treatment concept [8,58]. Systematic approach to interpretation of imaging and clinical findings is recommended at time of BT in order to minimize the contouring uncertainties and errors. This process should begin already before the actual delineation, and includes careful interpretation of clinical and imaging findings at BT in the context of initial findings. The extent and topography of residual GTV, eventual areas of necrosis, and residual pathological tissues in the parametria, vagina and uterus (the "grey zones") should be systematically evaluated and treatment response quantified. During this evaluation, it should be taken into account that it is not expected to find tumor or residual pathological tissues in the regions that were tumor-free at initial examination (Figs. 3 and 4). In a study by Schmid et al., residual pathological MRI findings in the parametria were identified in 19% of the cases with predominantly expansive initial tumor growth and in 68-90% of initially infiltrative tumors, depending on the degree of infiltration [59]. Therefore, initial MRI characteristics of the parametrial infiltration by the cervical tumor appear to allow prediction of the tumor response during external beam radiotherapy and chemotherapy.
Conclusions
Magnetic resonance imaging is the gold-standard imaging modality for cervix cancer IGABT. Radiation oncologist's perspective of MRI interpretation at time of BT is specific and differs from the radiologist's perspective. It is characterized by the need for a detailed definition of the border between the target volume and the surrounding normal tissues during contouring, since the dose that is delivered to the tissues following treatment plan optimization directly depends on the delineated regions of interest. Intra-departmental consultation within the team of radiation oncologists and cooperation with the diagnostic radiologist and radiophysicist is advised to minimize uncertainties through consensus opinions. Systematic evaluation of the post-insertion MRI is recommended and should begin with immediate identification and treatment of eventual complications of the application procedure, such as perforation of hollow organs and/or vessels. Next, the adequacy of the implant should be rapidly assessed to guide decisions about eventual corrective measures. Finally, systematic evaluation of the tumor extent, topography, and treatment response is performed as a basis for an appropriate selection and delineation of the target volumes. Accurate and reproducible delineation of the GTV and CTV is required to take full advantage of this high precision treatment technique, minimize uncertainties and assure consistent recording and reporting of treatment.
Disclosure
Authors report no conflict of interest.
|
2018-04-03T05:34:56.619Z
|
2014-06-01T00:00:00.000
|
{
"year": 2014,
"sha1": "fd374ec6d20b460bdc28bb5b46ac02bb5a085b0f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-54/pdf-22904-10?filename=MRI%20findings.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fd374ec6d20b460bdc28bb5b46ac02bb5a085b0f",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
88521789
|
pes2o/s2orc
|
v3-fos-license
|
Isolation in the construction of natural experiments
A natural experiment is a type of observational study in which treatment assignment, though not randomized by the investigator, is plausibly close to random. A process that assigns treatments in a highly nonrandom, inequitable manner may, in rare and brief moments, assign aspects of treatments at random or nearly so. Isolating those moments and aspects may extract a natural experiment from a setting in which treatment assignment is otherwise quite biased, far from random. Isolation is a tool that focuses on those rare, brief instances, extracting a small natural experiment from otherwise useless data. We discuss the theory behind isolation and illustrate its use in a reanalysis of a well-known study of the effects of fertility on workforce participation. Whether a woman becomes pregnant at a certain moment in her life and whether she brings that pregnancy to term may reflect her aspirations for family, education and career, the degree of control she exerts over her fertility, and the quality of her relationship with the father; moreover, these aspirations and relationships are unlikely to be recorded with precision in surveys and censuses, and they may confound studies of workforce participation. However, given that a women is pregnant and will bring the pregnancy to term, whether she will have twins or a single child is, to a large extent, simply luck. Given that a woman is pregnant at a certain moment, the differential comparison of two types of pregnancies on workforce participation, twins or a single child, may be close to randomized, not biased by unmeasured aspirations. In this comparison, we find in our case study that mothers of twins had more children but only slightly reduced workforce participation, approximately 5% less time at work for an additional child.
Constructing natural experiments.
1.1. Natural experiments. Natural experiments are a type of observational study, that is, a study of the effects caused by treatments when random assignment is infeasible or unethical. What distinguishes a natural experiment from other observational studies is the emphasis placed upon finding unusual circumstances in which treatment assignment, though not randomized, seems to resemble randomized assignment in that it is haphazard, not the result of deliberation or considered judgement, not confounded by the typical attributes that determine treatment assignment in a particular empirical field. The literature on natural experiments spans the health and social sciences; see, for instance, Arpino and Aassve (2013), Imai et al. (2011), Meyer (1995, Rutter (2007), Sekhon and Titiunik (2012), Susser (1981) and Vandenbroucke (2004).
Traditionally, natural experiments were found, not built. In one sense, this seemed inevitable: one needs to find haphazard treatment assignment in a world that typically assigns treatments in a biased fashion, often assigning treatments with a view to achieving an objective. There is, however, substantial scope for constructing natural experiments. When treatment assignment is biased, there may be aspects of treatment assignment, present only briefly, that are haphazard, close to random. The key to constructing natural experiments is to isolate these transient, haphazard aspects from typical treatment assignments that are biased. If brief haphazard aspects of treatment assignment can be isolated from the rest, in the isolated portion it is these haphazard elements that are decisive. This is analogous to a laboratory in which a treatment is studied in isolation from disruptions that would obscure the treatment's effects. Laboratories are built, not found.
1.2. A natural experiment studying effects of fertility on workforce participation. Does having a child reduce a mother's participation in the workforce? If it does, what is the magnitude of the reduction? The question is relevant to individuals planning families and careers and to legislators and managers who determine policies related to fertility, such as family leaves. A major barrier to answering this question is that, for many if not most women, decisions about fertility, education and career are highly interconnected, and each decision has consequences for the others. Here we follow Angrist and Evans (1998) and seek to determine if there is some source of variation in fertility that does not reflect career plans and is just luck. Although a woman has the ability to influence the timing of her pregnancies, given that she is pregnant at a particular age, she has much less influence about whether she will have a boy or a girl, whether she will have a single child or twins-to a large extent, that is just luck. More precisely, that a woman is pregnant at a certain moment in her life may be indicative of her unrecorded plans and aspirations for education, family and career, but conditionally given that she is pregnant at that moment, the birth outcome, a boy or a girl or twins, is unlikely to indicate much about her plans and aspirations.
We focus here on the haphazard contrast most likely to shift the total number of children, namely, a comparison of similar women, one with a twin at her kth birth, the other with children of mixed sex at her kth birth since, as Angrist and Evans (1998) noted, many women or families in the US prefer to have children of both sexes, rather than just boys or just girls, that is, a third child is seen in data to be more common if the first two children have the same sex. While we could compare women having twins with women having a single child whose sex is the same as her first child, we focus on comparing women having twins with women having a single child whose sex is different from her first since the first woman may end up with one more child than she intended, whereas the other woman will, at least, not have additional children simply to have one of each sex.
What question does such a natural experiment answer? Conditionally given that a woman with a certain prior history of fertility is currently pregnant, having a girl or a boy or twins does not pick out a particular type of woman. So the study is accepting whatever process led a particular woman to be pregnant at a certain moment in her life, and it is asking: What would happen if she unexpectedly had two children at that pregnancy rather than one? How would that event alter her subsequent workforce participation? We use the idea from Angrist and Evans (1998) to illustrate and discuss tools to extract natural experiments from larger biased data sets, in particular, risk set matching [Li, Propert and Rosenbaum (2001)], differential effects [Rosenbaum (2006[Rosenbaum ( , 2013a] and strengthening an instrumental variable [Baiocchi et al. (2010), Zubizarreta et al. (2013)].
1.3. Informal review of two key concepts: Differential effects; risk-set matching. Because differential effects and risk set matching may be unfamiliar, we now review the motivation for these techniques. Consider, first, differential effects and generic biases acting at a single point in time [Rosenbaum (2006[Rosenbaum ( , 2013a]. Treatment assignment may be biased by certain unmeasured covariates that promote several treatments in a similar way. When this is true, receiving a treatment s may be very biased by these covariates, while receiving one treatment s in lieu of another s ′ may be unbiased or less biased or biased in a different way. Here, attention shifts from whether or not a person received treatment s (i.e., the main effect of s) to whether a person received treatment s rather than treatment s ′ conditionally given that the person received either treatment s or treatment s ′ (i.e., the differential effect of s in lieu of s ′ ). Consider an example discussed in detail by Anthony et al. (2000). There is a theory that nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen (e.g., brand Advil), may reduce the risk of Alzheimer disease. There is an obvious bias in comparing people who regularly take ibuprofen and people who do not. In all likelihood, a person who regularly takes ibuprofen is experiencing chronic pain, perhaps arthritis or back pain, is aware of that pain, and is capable of acting deliberately on the basis of that awareness. It has been suggested that people in the early undiagnosed stages of Alzheimer disease are less aware of pain and less able to act on what awareness they have, so that fact alone might produce a spurious association between use of ibuprofen and lower risk of later diagnosed Alzheimer disease. There are, however, pain relievers that are not NSAIDs, for example, acetaminophen (e.g., brand Tylenol). While limited awareness of pain or limited ability to act on awareness might reduce use of pain relievers of all kinds, it seems far less plausible that it shifts people away from ibuprofen and toward acetaminophen. That is, the differential effect of acetaminophen-versus-ibuprofen-of one treatment in lieu of the othermay not be biased by unmeasured covariates that would bias straightforward estimates of the main effect of either drug. Differential effects are not main effects, but when differential effects are interesting, they may be immune to certain biases that distort main effects. See also Gibbons et al. (2010) for differential effects in the study of medications.
Consider, second, risk-set matching, a device for respecting the temporal structure of treatment assignment in observational studies [Li, Propert and Rosenbaum (2001)]. For each subject in a randomized experiment, there is a specific moment at which this subject is assigned to treatment or to control. In some observational studies, there is no corresponding moment. Some people receive treatment at a specific time, others receive it later or never receive it, but anyone who does not receive treatment today might receive it tomorrow. Risk-set matching pairs two individuals at a specific time, two individuals who looked similar in observed covariates prior to that specific time, a time at which one individual was just treated and the other was not-yet-treated. The not-yet-treated individual may be treated tomorrow, next year or never. We compare two individuals who looked similar prior to the moment that one of them was treated, avoiding matching or adjustment for events subsequent to that moment [cf. Rosenbaum (1984)]. That is, in the language of Cox's proportional hazards model, risk-set matching pairs two individuals who were both at risk of receiving the treatment a moment before one of them actually received it, two individuals who looked similar in time-dependent covariates prior to that moment. Taken alone, without differential comparisons, risk-set matching is a method for controlling measured time-dependent covariates respecting the temporal structure of treatment assignment; see van der Laan and Robins (2003) for other methods for this task. 1.4. Outline of the paper. Section 2 discusses new relevant theory, specifically theory linking risk-set matching for time-dependent measured covariates with differential comparisons unaffected by certain unmeasured time dependent covariates. Fertility is commonly modeled in terms of "event history" or point process models determining the timing of events together with "marks" or random variables describing these randomly timed events. The mark may record the occurrence of twins. Temporal order is key and must be respected. Sections 3 and 4 complete the case study of twin births with the construction of the matched sample using combinatorial optimization for risk-set matching discussed in Section 3 and a detailed analysis presented in Section 4. Section 5 includes a discussion of related work.
2. Risk-set matching to control generic unmeasured biases.
2.1. Notation for treatments over time. The population before matching contains statistically independent individuals. At time t, individual ℓ has a history of events prior to t, the observed history being recorded in x ℓt and the unobserved history being recorded in u ℓt . To avoid a formal notation that we would rarely use, we write histories as variables, x ℓt or u ℓt , but we intend to convey a little more than this. Both the quantity and types of information in x ℓt or in u ℓt or in (x ℓt , u ℓt ) increase as time passes, that is, as t increases [or, formally, the sigma algebra generated by (x ℓt , u ℓt ) is contained within the sigma algebra generated by (x ℓt ′ , u ℓt ′ ) for t < t ′ ].
In our case study, x ℓt records such things as the ages at which mother ℓ gave birth to the children she had prior to time t, her years of education attained at the times of those births before time t, and unchanging characteristics such as her place of birth, race or ethnicity. In parallel, u ℓt might be an unmeasured quantity reflecting the entire history of a woman's inclination to work full time in the year subsequent to time t. Obviously, a birth at time t might, often would, alter x ℓt ′ or u ℓt ′ for t ′ > t.
There is also a treatment process Z ℓt that is in one of K + 1 states, s 0 , s 1 , . . . , s K . That is, at any time t, individual ℓ is in exactly one of these states, Z ℓt = s k for some k ∈ {0, 1, . . . , K}. Also, write Z ℓt for the history of the Z ℓt process strictly prior to time t, so Z ℓt records Z ℓt ′ for t ′ < t but it does not record Z ℓt . In our case study, state s 0 is the interval state of not currently giving birth to a child, state s 1 is the point state of giving birth to a single female child, state s 2 is the point state of giving birth to a single male child, state s 3 is the point state of giving birth to a pair of female twins and so on. Most women are in state Z ℓt = s 0 at most times t. The history Z ℓt records mother ℓ's births up to time t, their timing, the sex of the child, twins, etc.
Consider a specific individual ℓ at a specific time t. At this moment, the individual has a treatment history Z ℓt prior to t and is about to receive a current treatment Z ℓt . Given the past, Z ℓt , we are interested in the effect of the current treatment Z ℓt on some future (i.e., after t) outcome R ℓ . Write F ℓt = (Z ℓt , x ℓt , u ℓt ) for the past at time t. In parallel with Neyman (1923) and Rubin (1974), this individual ℓ at this time t has K + 1 possible values for R ℓ depending upon the treatment Z ℓt assigned at time t, that is, R ℓ = r kℓ if Z ℓt = s k , where only one R ℓ is observed from an individual, and the effect of giving treatment k rather than k ′ at time t, namely, r kℓ − r k ′ ℓ is not observed for any person at any time. This structure is for individual ℓ at a specific time t with treatment history Z ℓt ; typically, everything about this structure would change if the history Z ℓt to time t had been different.
The question is what effect treatment at time t has on an individual with a specific treatment and covariate history prior to t. It is entirely possibleindeed, in typical applications, it is likely-that the treatments Z ℓt ′ at times t ′ < t alter the value of observed or unobserved subsequent history (x ℓt , u ℓt ), but the history at t, namely, (x ℓt , u ℓt ), records the situation just prior to t and hence is unaffected by the treatment assignment Z ℓt at t. Quite often, the outcome R ℓ is a future value of a quantity that is analogous to a past quantity recorded in the history (x ℓt , u ℓt ). In our case study, R ℓ might measure an aspect of future workforce participation beyond time t where (x ℓt , u ℓt ) records workforce participation prior to time t, or R ℓ might measure educational attainment at some time after t where (x ℓt , u ℓt ) records educational attainment prior to time t.
In our case study, aspects of the record of a woman's fertility, Z ℓt , are likely to be strongly predicted by aspects of her observed and unobserved histories (x ℓt , u ℓt ). A woman ℓ aged t ′ = 18 years whose private aspiration u ℓt is to earn a Ph.D. in molecular biology and an MBA and to start her own biotechnology company is likely to take active steps to ensure Z ℓt = s 0 for t ∈ (18, 22] or longer, that is, she is likely to postpone having children for at least several years. In contrast, another woman ℓ ′ whose private aspiration u ℓ ′ t at age t ′ = 18 is to stay at home as the mother of many children may take active steps to ensure Z ℓt = s 0 for several t ∈ (18, 22], that is, she may actively pursue her goal of a large family. A comparison of the workforce participation of woman ℓ and woman ℓ ′ will be severely biased as an estimate of the effects of having a child before age 22 on workforce participation, because ℓ tried to shape her fertility to fit her work plans and ℓ ′ tried to shape her fertility to fit her family plans-even if, by some accident, they had the same pattern of fertility over t ∈ (18, 22], we would not be surprised to learn that ℓ subsequently worked more for pay than did ℓ ′ . What is an investigator to do when unmeasured aspirations, intentions and goals are strongly associated with treatment assignment? 2.2. What is risk-set matching? Risk-set matching compares people, say, ℓ and ℓ ′ , who received different treatments at time t, Z ℓt = Z ℓ ′ t , but who
ISOLATION IN THE CONSTRUCTION OF NATURAL EXPERIMENTS
7 looked similar in their observed histories prior to t, x ℓt = x ℓ ′ t and Z ℓt = Z ℓ ′ t ; see Li, Propert and Rosenbaum (2001), Lu (2005) and Rosenbaum [(2010), Section 12]. Importantly, ℓ and ℓ ′ are similar prior to t in terms of observable quantities that may be controlled by matching, but they may not be similar in terms of unmeasured histories, u ℓt = u ℓ ′ t , and of course they may differ in the future, after time t, not least because they received different treatments at time t. Risk-set matching does not solve the problem of unmeasured histories. Risk-set matching does respect the temporal structure of the data, avoiding adjustment for variables affected by the treatment [Rosenbaum (1984)]. Risk-set matching also "simplifies the conditions of observation," to use Mervyn Susser's [(1973), Section 7] well-chosen phrase, ensuring that comparisons are of people with histories that look comparable, even though those histories may be of different lengths, and hence may contain qualitatively different information. Although individuals have histories of different lengths containing qualitatively different information, matched individuals have histories of the same length. For instance, a woman giving birth to her 3rd child has in her history ages of birth of her first three children, where a mother giving birth to her second child does not have in her history her age at the birth of her third child, if indeed she had a third child.
In implementing risk-set matching in Section 3, we match women of the same age, with the same history of fertility-the same numbers of prior children born at the same ages in the same patterns. We also control for temporally fixed quantities associated with fertility, such as ethnicity. A delicate issue that risk-set matching would straightforwardly address with adequate data is "education." On the one hand, education is strongly related to wage income and is related to employment, so it may strongly predict certain workforce outcomes R ℓ . On the other hand, education may itself be affected by fertility: a mother who has her first child at age 16 may as a consequence have difficulty completing high school. In principle, the issue is straightforward with risk-set matching: in studying the effects of fertility Z ℓt at time t, one compares two people who had the same education prior to t, without equating their educations subsequent to time t. Again, this avoids adjustment for variables affected by the treatment [Rosenbaum (1984)]. If the adjustment for education at time t controlled for subsequent education at time t ′ > t, it might-probably would-remove a substantial part of the actual effect on workforce participation of having a child at age 16. Not finishing high school is a good way to have trouble in the labor market, and having a child at age 16 is a good way to have trouble finishing high school; everyone remembers this until they start running regressions, but then, too often, part of an actual effect is removed by adjusting for a posttreatment variable that was also affected by the treatment.
2.3. Removing generic unmeasured biases by differential comparisons in risk sets. The model for biased treatment assignment in risk-set matching is intended to express the thought that matching for the observed past, (Z ℓt , x ℓt ), has controlled for the observed past but typically did not control for the unobserved past u ℓt . The model is a slight generalization to multiple states of the model for two states in Li, Propert and Rosenbaum [(2001), Section 4], and that model is itself closely patterned after Cox's (1972) proportional hazards model for outcomes rather than treatments. People are in state s 0 almost all the time, and are in states s 1 , . . . , s K only at points in time. Let λ k (F ℓt ) = λ k (Z ℓt , x ℓt , u ℓt ) be the hazard, assumed to exist, of entering state k ≥ 1 at time t given past F ℓt . The hazard is assumed to be of the form Because x ℓt may include as one of its coordinates the time t, this model permits the hazards to vary with time t. For state s 0 , it is notationally convenient to define λ 0 (·, ·, ·) = 1 and φ 0 = 0.
In Section 2.1, u ℓt was described as a possibly multivariate history of a possibly continuous process in time, whereas in the hazard model, exp{κ k (Z ℓt , x ℓt ) + φ k u ℓt }, the unobserved element has become a scalar. This seems at first to be an enormous and disappointing loss of generality, but upon reflection one sees that the loss is not great. Suppose u ℓt did record a multivariate history over time, and consider the hazard model exp{κ k (Z ℓt , is some unknown real-valued functional of that multivariate, temporal history. Although this appears at first to be a more general model, writing u iℓ = f (u ℓt ), the model becomes exp{κ k (Z ℓt , x ℓt ) + φ k u iℓ }, a scalar model essentially as before. In words, in exp{κ k (Z ℓt , x ℓt ) + φ k f (u ℓt )}, not knowing u ℓt and not knowing f (·) is no better and no worse than not knowing the scalar u iℓ = f (u ℓt ). It is the impact of unmeasured history on the hazard-a scalar-that matters, not the particulars of that history. See Li, Propert and Rosenbaum (2001) and Lu (2005) for related discussion.
Let s ∈ {s 1 , . . . , s K } be one of the point states or birth outcomes (single girl, etc.), and let s ′ = s be any one of the other states, s ′ ∈ {s 0 , s 1 , . . . , s K }.
Here, s ′ may be either the state s 0 of not giving birth or a point state. Suppose that we form a risk-set match of one individual with Z ℓt = s and J − 1 ≥ 1 other individuals ℓ ′ in state s ′ at t, where all J individuals have the same observed history to time t, Z ℓt = Z ℓ ′ t and x ℓt = x ℓ ′ t . For instance, this might be a match of J women with the same observed history to time t, one of whom gave birth to her first child at t, a single girl s 1 , where the other J − 1 women had had no child up to and including time t. Despite looking similar prior to time t, it is possible, perhaps likely, that these J women differed in their ambitions u ℓt for school or work. After all, one had a child at time t while the others did not. Alternatively, the matching might compare a woman who had her first child, a girl or point state s 1 , at time t to J − 1 women with the same observable past who had a first child, a boy or point state s 2 , at time t. Perhaps this second comparison is closer to random than the previous comparison of women with and without children at time t, because now all J women had their first child at time t, and it was only the sex of the child that varied. Obviously, there are many analogous possibilities, but we suppose the investigator will focus on one such comparison at a time, for now, s and s ′ with s = s ′ and s, s ′ ∈ {s 0 , . . . , s K }.
The risk-set match is built rolling forward in time t, matching women with states s or s ′ at t and with identical observable pasts, (Z ℓt , x ℓt ), possibly different unobservable pasts u ℓt , removing individuals once matched; however, events subsequent to time t are not used in matching at time t. In the end, there are I nonoverlapping matched sets, each containing J individuals. It is notationally convenient to replace the label ℓ, where ℓ does not indicate who is matched to whom, by noninformative labels for sets, i = 1, . . . , I, and for individuals within sets, j = 1, . . . , J , for instance, random labels could be used. We then have Z ijt = Z ij ′ t and x ijt = x ij ′ t for all i, j, j ′ , but pos- Let Z be the event that for each i, exactly one individual j has Z ijt = s and the remaining J − 1 individuals j ′ have Z ij ′ t = s ′ , so the risk-set matched design ensures that Z occurs. Given Z, the time t is fixed, and the two states, s and s ′ , are fixed, so it is convenient to write Z ij = 1 if Z ijt = s and Z ij = 0 if Z ijt = s ′ , so that 1 = J j=1 Z ij for each i. The next step is key. Although there are K+1 2 possible choices of two states s, s ′ ∈ {s 0 , . . . , s K } to compare by risk-set matching, the same unobserved covariate u ijt can severely bias some choices of two states, while others may be nearly random or only slightly biased. Consider the conditional probability that, in set i of this risk-set matched design, it is individual j who received treatment s, with Z ijt = s, the remaining J − 1 individuals receiving treatment s ′ .
where the last expression (1) is the same as the sensitivity analysis model in Rosenbaum (2007Rosenbaum ( , 2013b for comparing treatment and control in I matched sets. The key point is that there may be reason to believe that |φ s − φ s ′ | is small for some choices of s, s ′ , and large for other choices. Refraining from having a child, s = 0, is often a carefully planned event, but whether a child is a boy or a girl, twins or a single birth, is a much more haphazard event. Some comparisons are expected to be less biased by unmeasured intentions and preferences than other comparisons. If a careful choice of s, s ′ implies that |γ| = |φ s − φ s ′ | is small, then the inference about treatment effects may be convincing if it is insensitive to small biases |γ| even if it is sensitive to moderate biases. If φ s − φ s ′ = 0, then (1) is the randomization distribution, Pr(Z ijt = s|F it , Z) = 1/J for each ijt; moreover, this is true even if φ s and φ s ′ are large, so that comparing mothers who had children at different times would be severely biased by u ijt .
2.4. Sensitivity analysis for any remaining differential biases. If φ s = φ s ′ , but |γ| = |φ s − φ s ′ | is small in (1), then the differential comparison of treatments s and s ′ in (1) may still be biased by u ijt , and the sensitivity analysis examines the possible consequences of biases of various magnitudes γ. In the current paper, the sensitivity analyses use (1) with a test statistic that is either the mean difference in workforce participation or a corresponding M -estimate with Huber's weights. Of course, the mean difference is one particular form of M -estimate. The sensitivity analysis was implemented as described in Rosenbaum (2007) with the restriction that u ijt ∈ [0, 1], so that under (1) matched mothers may differ in their hazards of birth outcome s versus s ′ by at most a factor of Γ = exp(γ). In the comparison in Section 4, this means that two mothers with the same pattern of fertility and observed covariates to time t, both of whom gave birth at time t, may differ in their odds of having a twin, s, rather than a single child of a different sex than her earlier children, s ′ , by at most a factor of Γ because of differences in the unmeasured u ij . Although biases of this sort are not inconceivable, perhaps as a consequence of differential use of abortion or fertility treatments, presumably such a bias Γ is not very large, much smaller than the biases associated with efforts to control the timing of births. The one parameter Γ may be reinterpreted in terms of two parameters describing treatment-control pairs, one ∆ relating u ij to the outcome (r T ij , r Cij ), the other Λ relating u ij to the treatment Z ij , such that a single value of Γ corresponds to a curve of values of (∆, Λ) defined by Γ = (∆Λ + 1)/(∆ + Λ), so a brief unidimensional analysis in terms of Γ may be interpreted in terms of infinitely many twodimensional analyses in terms of (∆, Λ); see Rosenbaum and Silber (2009). For instance, the curve for Γ = (∆Λ + 1)/(∆ + Λ) = 1.25 includes the point (∆, Λ) = (2, 2) for a doubling of the odds of treatment and a doubling of the odds of a positive pair difference in outcomes. Hsu and Small (2013) show how to calibrate a sensitivity analysis about an unobserved covariate using the observed covariates.
What is the role of the restriction u ijt ∈ [0, 1]? The restriction u ijt ∈ [0, 1] gives a simple numerical meaning to γ and Γ by fixing the scale of the unobserved covariate: in (1), two subjects may differ in their hazard of treatment s rather that treatment s ′ at time t by at most a factor of Γ because they differ in terms of u ijt . It is possible to replace the restriction that u ijt ∈ [0, 1] for all ijt by the restriction that u ijt ∈ [0, 1] for, say, 99% of the ijt with the remainder unrestricted [Rosenbaum (1987), Section 4]; however, when using robust methods, small parts of the data make small contributions to the inference, so this replacement has limited impact. Permitting 1% of the u ijt to be unrestricted should count as a larger bias, in some sense a larger γ, and Wang and Krieger (2006) explore this possibility in a special case, concluding that binary u ijt do the most damage among all u ijt with a fixed standard deviation.
For discussion of a variety of methods of sensitivity analysis in observational studies, see Baiocchi et al. (2010), Cornfield et al. (1959) (2000), Rosenbaum (2007Rosenbaum ( , 2013b, Small (2007), Small and Rosenbaum (2008) and Yu and Gastwirth (2005). (1) and is motivated by the possibility that |φ s − φ s ′ | may be small or zero when neither φ s nor φ s ′ is small or zero. If φ s is not small, receipt of treatment s rather than no treatment will be biased by the unmeasured time-dependent covariate u ijt . In parallel, if φ s ′ is not small, receipt of treatment s ′ rather than no treatment will be biased by u ijt . However, if φ s = φ s ′ , then the differential comparison of treatments s and s ′ , conditionally given one of them, will not be biased by u ijt , even though φ s and φ s ′ may both be large. If unmeasured aspirations and plans (u ijt ) influence the timing of fertility but not whether twins (s) or a single child (s ′ ) is born, then a comparison of two mothers with the same timing, one with twins, the other with a single child, is not biased by the unmeasured aspirations and plans. Equation (1) isolates biased timing from possibly unbiased birth outcomes given timing. The sensitivity analysis considers the possibility that |φ s − φ s ′ | is small but not zero, so there is some differential bias.
What is isolation? Isolation refers to equation
In the case study, it seems likely that the timing of births is affected by unmeasured covariates u ijt but, conditionally given a birth, specific birth outcomes are close to random; that is, each φ s is not small but each |φ s − φ s ′ | is small. In some other context, it might be that |φ s − φ s ′ | is thought to be small for some pairs s, s ′ ∈ {1, . . . , K} and not for others, and, in this case, attention might be restricted to a few comparisons for which |φ s − φ s ′ | is thought to be small.
No matter how deliberate and purposeful a life may be, there are brief moments when some consequential aspect of that life is determined by something haphazard. Isolation narrows the focus in two ways: the moment and the aspect. One compares people who appeared similar a moment before luck played its consequential role. Among such people, one considers only a consequential aspect controlled by luck. Isolation refers to the joint use of risk-set matching to focus on a moment and differential effects to focus on an aspect. 2.6. Selecting strong but haphazard comparisons. To emulate a randomized experiment, a natural experiment should have a consequential difference in treatments determined by something haphazard. The strongest contrast is twins at birth k versus mixed sex children at birth k, because this comparison is expected to do the most to shift the number of children. The population of pregnant women would not be distorted by limiting attention to these two groups, providing that the unobserved u ijt affects the timing but not the outcome of pregnancies (i.e., providing φ s = φ s ′ for s, s ′ ∈ {1, . . . , K}).
Natural experiments may yield instrumental variables where "strong" refers to the strength of the instrument. An instrument is a haphazard nudge to accept a higher dose of treatment, where the nudge affects the outcome only if it alters the dose of treatment, the so-called "exclusion restriction"; see Holland (1988) and Angrist, Imbens and Rubin (1996). In Section 2.3, some patterns of births (e.g., twins) may induce women to have more children than they would have had with a different pattern of births, so perhaps certain patterns are instruments for family size (the dose). An instrument is weak if most nudges are ignored, rarely altering the dose. An instrument is strong if it typically materially alters the dose. Weak instruments create inferential problems with limited identification [Bound, Jaeger and Baker (1995), Imbens and Rosenbaum (2005), Small (2007)] and, more importantly, inferences based on weak instruments are invariably sensitive to tiny departures from randomized assignment [Small and Rosenbaum (2008)]. Therefore, it is often advantageous to strengthen an instrument [Baiocchi et al. (2010), Zubizarreta et al. (2013)].
Is the exclusion restriction plausible here? Perhaps not. The exclusion restriction would mean that having twins affects workforce participation only by altering the total number of children. If a mother wanted three children but had twins at her second pregnancy, the occurrence of twins might have altered the timing of her children's births rather than the total number of children. A mother who wished to stay at home until her three children had entered kindergarten might return to work sooner because of twins at the second birth without altering her total number of children, and in this case the exclusion restriction would not be satisfied.
Even if the exclusion restriction does not hold, so the natural experiment does not yield an instrument, it is nonetheless advantageous to have a consequential difference in treatments determined by something that is haphazard. In particular, the Wald estimator commonly used with instrumental variables estimates a ratio of treatment effects-a so-called effect ratio-when the exclusion restriction does not hold. The effect ratio is a local-average treatment effect when the exclusion restriction holds, but it is interpretable without that condition; see Section 4 and Baiocchi et al. (2010) for further discussion.
A distinction is sometimes made between internal and external validity, a distinction introduced by Donald T. Campbell and colleagues, a distinction that Campbell (1986) later attempted to revise. In revised form, internal validity became "local causal validity," meaning correct estimation of the effects of the treatments actually studied in the populations actually studied. What had been external validity separated into several concepts, each referring to some generalization, perhaps from the treatments under study to other related treatments, from the populations under study to other related populations, or from the outcome measures under study to other related measures. Because it uses Census data from 1980, Angrist and Evans' (1998) study concerns of a well-defined population at a particular era in history, and results about women's workforce participation might easily be different in the US in earlier and later eras. It would be comparatively straightforward to replicate their study using Census data from other eras or using similar data in other countries. Their study is reasonably compelling as a study of the effects of having twins rather than a single child but, as the discussion of the exclusion restriction above makes clear, it is not certain that having twins has the same effect on workforce participation as having two children at different times. Moreover, the study provides no information about women who have no children at all. In brief, twinning is typically an unintended and somewhat random event, whereas many women attempt to carefully, thoughtfully and deliberately control the timing of fertility, so Angrist and Evan's study has unusual strengths in local causal validity, but one needs to avoid extrapolating their findings to other eras or types of fertility that they did not study.
3.1. One matched risk set. We created nonoverlapping matched sets of 6 women who were similar prior to the birth of their kth child, for k = 2, 3, 4, one of whom had a twin on this kth birth, whereas the others had children of both sexes as of the kth child. For instance, matched set #836 consisted of six women. All six women had their first child at age 18 and their second child at age 22, and all were white. After the birth of the second child, five of the mothers had one boy and one girl, and one of the mothers had twins at the second pregnancy. A mother's plans for education, career and family may easily influence the timing of her pregnancies, but these six women gave birth at the same ages. A mother's plans for education, career and family are much less likely to determine which of the six pregnancies will end with twins and which will end with two children of different sexes-for most mothers, that's just luck. All six mothers had 12 years of education at the time of their first and second births at ages 18 and 22, respectively; see Section 3.2 for technical details about this statement.
Matched sampling controls, or should control, for the past, not the future [Rosenbaum (1984)]. The six women were similar prior to their second pregnancy. They had different outcomes at their second pregnancy. What happened subsequently? The woman with twins ended up with 3 children in total, the other five woman ended up with two children each-that is, none of these women went on to have additional children beyond their second pregnancy. The pattern is different in other matched sets. In this one matched set, all six women had no additional education beyond the 12 years they had at age 18, the age of their first birth. In this particular matched set, the mother of twins ranked third in workforce participation. In the year prior to the 1980 Census, two of the women with two children had worked at least 40 hours in the previous week and 52 weeks in the previous year, while the remaining three women with two children had not worked at all in the previous year. The woman with twins, with three children, had worked 40 hours in the previous week and 20 weeks in the previous year.
Matched sets varied, but set #836 was typical in one respect. In the matched comparison, it was uncommon for women who had children by age 18 to ultimately complete a BA degree-only 5.5% did so-whereas it was much more common for women who did not have a child by age 18 to complete a BA degree-28.2% did so. Total lifetime education is the sum of two variables, a covariate describing education prior to the kth birth and an outcome describing additional education subsequent to the kth birth. Riskset matching entails matching for the covariate-the past-but not for the outcome-the future.
3.2. Technical detail: How the matching was done. Matches were constructed in temporal order, beginning with the second pregnancy. Mothers not matched at the second pregnancy might be matched later. The matching was exact for three variables-age category at the second pregnancy, race/ethnicity and region of the US; see Table 1. Within each of these 64 = 4 3 cells, the match solved a combinatorial optimization problem to make the mother of twins similar to the five control mothers in the same matched set. Similarity was judged by a robust Mahalanobis distance [Rosenbaum (2010), Section 8.3] using observed covariates x it prior to this pregnancy. Forming nonoverlapping matched sets to minimize the sum of the treated-versus-control distances within sets is a version of the optimal assignment problem, and it may be solved using the pairmatch function of Hansen's (2007) optmatch package in R. [We used mipmatch in R available at http://www-stat.wharton.upenn.edu/˜josezubi/; see Zubizarreta (2012).] From the Census data, we can know the education of the mother prior to the Census, her age at the Census and the ages of her children, and from this we can deduce her ages at the births of her children. Ideally, we would know exactly her years of education at the birth of each of her children, but the Census provides slightly less information. The norm in the US is to complete high school with 12 years of education at age 18. If a woman had a total of E years of education at the time of the census and if she was age A at her kth pregnancy, we credited her with min(E, A − 6) years of education at her kth pregnancy. For instance, a woman who had a BA degree with 16 years of education and a first child at age 26 was credited with 16 years of education at the birth of her first child. This is a reasonable approximation but will err in some cases. The exact timing of education is available in some longitudinal data sets.
3.3. Covariate balance prior to the kth birth in the risk-set match. Figures 1 and 2 show the balance on age at each pregnancy and education at each pregnancy. The match at the second pregnancy should balance age and education at the first two pregnancies, viewing subsequent events as outcomes. The match at the third pregnancy should balance age and education at the first three pregnancies, viewing subsequent events as outcomes. The match at the fourth pregnancy is analogous. Figures 1 and 2 show the desired balance was achieved.
Tables 1 and 2 show the comparability of the matched groups separately for the matches at the second, third and fourth pregnancy. Table 1 exhibits perfect balance for categories of race/ethnicity, region of the US and age at the second pregnancy. Moreover, the interactions of these three variables are also exactly balanced. Fig. 1. Age at births in 5040 1-5 nonoverlapping matched sets containing 30,240 mothers, specifically 5040 mothers who gave birth to a twin at the indicated pregnancy and 25,200 mothers who had at least one child of each sex by the end of the indicated pregnancy. For 3380 sets matched at the second pregnancy, matching controlled the past, namely age at the first and second births. For 1358 sets matched at the third pregnancy, matching controlled the past, namely age at the first, second and third births. For 302 sets matched at the fourth pregnancy, matching controlled the past, namely age at the first, second, third and fourth births. 4. Inference: Tobit effects, proportional effects, sensitivity analysis. Figure 3 depicts two outcomes recorded on Census day for the 30,240 mothers in 5040 matched sets, each set containing one mother who had a twin at the indicated pregnancy and 5 mothers who had at least one child of each sex at the indicated pregnancy. One outcome is the total number of children recorded on Census day. The other outcome is the work fraction where 0 indicates no work for pay and 1 indicates full time work (≥ 40 hours per week). The work fraction is the number of weeks worked in the last year multiplied by the minimum of 40 and the number of hours worked in the last week, and then this product is divided by 40 × 52 to produce a number between 0 and 1. (A small fraction of mothers worked substantially more than 40 hours in the previous week.) Table 1 In each matched risk set containing J = 6 mothers, a mother of a twin at birth k is matched to J − 1 = 5 control mothers whose kth birth was a single child whose sex was different from one of her previous children. The matching was exact for four age categories, for four race/ethnicity categories and for four regions of the US, and because it was exact, it controlled their interactions. The In the top half of Figure 3, at the second pregnancy, a twin birth shifted upward by about 1 child the boxplot of number of children. The shift is smaller at the third and fourth pregnancies, where the lower quartile and median increase by 1 child, but the upper quartile is unchanged. Presumably, some mothers pregnant for the third or fourth time intend to have large families and twins did not alter their plans. In the bottom half of Figure 3, mothers of twins worked somewhat less, but the difference in work fraction is not extremely large. Figure 4 displays the information about work fraction in a different format, as a quantile-quantile plot.
We consider two models for the effect on the fraction worked, R ij . One model is a so-called Tobit effect, named for James Tobin, of twin versus different-sex-single-child, Z ij . The Tobit effect has r T ij = max(0, r Cij − τ ) and it reflects the fact that a woman's workforce participation may decline to zero but not further. For instance, if τ = 0.1 = 10%, then a mother who would have worked at least r Cij = 10% of full-time without twins would work 10% less with twins, r T ij = r Cij − 10%, but a mother who would have worked r Cij = 5% or r Cij = 0% of full-time without twins would not work with twins, Table 2 Baseline comparison of 30,240 distinct mothers in I = 5040 = 3380 + 1358 + 302 nonoverlapping matched sets of J = 6 mothers, each set containing one mother who gave birth to a twin and J − 1 control mothers who gave birth to a single child whose sex differed from that of one of her previous children. The r T ij = 0%. For the Tobit effect, we draw inferences about τ . If H 0 : τ = τ 0 were true, then max{0, R ij − (1 − Z ij )τ 0 } = r T ij does not vary with Z ij and satisfies the null hypothesis of no treatment. Therefore, H 0 : τ = τ 0 is the hypothesis of no treatment effect on max{0, R ij − (1 − Z ij )τ 0 } and the confidence interval is obtained in the usual way by inverting the test. In the usual way, the point estimate solves for τ an estimating equation that equates the test statistic to its null expectation. We use the treated-minuscontrol mean as the test statistic, but very similar results were obtained using an M -estimate with Huber's weight function trimming at twice the median absolute deviation. See Rosenbaum (2007) and the senmwCI function in the sensitivitymw package in R for computations. Table 3 displays inferences about τ , the effect of a twin on hours worked or, more precisely, on the work fraction. For Γ = 1, Table 3 displays ran- Fig. 4. Quantile-quantile plots of work fraction for twins (vertical) and controls (horizontal) with the line of equality. The plot shows that women with twins were more likely to not work, as seen in the horizontal start to the plot, and they worked fewer hours in total, as quantiles fall below the line of equality.
For Γ = 1, Table 3 displays randomization inferences assuming the differential comparison of twins versus different-sex-single-child is free of bias from unmeasured covariates. For Γ > 1, sensitivity to unmeasured bias is displayed. The point estimate of τ in the absence of bias is 0.0793, or about an 8% reduction in work hours (0.08 × 40 = 3.2 hours per week) for a mother with twins. More precisely, this is an 8% reduction in work fraction, or a reduction of 3.2 hours per week for any mother who would work at least 3.2 hours if she did not have twins. The results are insensitive to small biases, say Γ ≤ 1.2, but are sensitive to moderate bias, Γ = 1.25; however, we do not expect much bias in the differential comparison. By the amplification of Rosenbaum and Silber (2009), in a matched pair, treatment-versus-control comparison, a bias of Γ = 1.25 is produced by an unobserved covariate that doubles the odds of treatment and doubles the odds of a positive treatment-minus-control pair difference in outcomes.

Table 3. Inference about the Tobit effect τ. For each Γ, the sensitivity analysis gives the maximum possible P-value testing the null hypothesis of no treatment effect, $H_0: \tau = 0$, the minimum one-sided 95% confidence interval and the minimum possible point estimate. Inferences use the mean, but M-estimates with Huber weights produced similar results.

Figure 5 looks at residuals. With $\tau_0 = 0.0793$, Figure 5 plots $\max\{0, R_{ij} - (1 - Z_{ij})\tau_0\}$. In an infinite sample without bias, this plot would have identical pairs of boxplots if the Tobit effect were correct. Though not identical in pairs, the boxplots are similar, except perhaps at the 4th pregnancy, where the sample size is not large. Arguably, the data do not sharply contradict a Tobit effect.
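To make the test-inversion recipe concrete, the following minimal Python sketch runs Tobit-effect inference on simulated 1-to-5 matched sets. It is not the senmwCI computation cited above (there is no sensitivity analysis here, and the data and function names are illustrative); it only shows how the point estimate solves the estimating equation by a monotone search over τ0.

import numpy as np

def tobit_adjusted(R, Z, tau0):
    # Under H0: tau = tau0, max{0, R - (1 - Z) * tau0} is free of the
    # treatment effect, so testing H0 reduces to testing no effect on it.
    return np.maximum(0.0, R - (1.0 - Z) * tau0)

def treated_minus_control(Y, Z, sets):
    # Average over matched sets of (treated mother - mean of her controls).
    diffs = [Y[sets == s][Z[sets == s] == 1].mean()
             - Y[sets == s][Z[sets == s] == 0].mean()
             for s in np.unique(sets)]
    return float(np.mean(diffs))

def tobit_estimate(R, Z, sets, lo=0.0, hi=1.0, iters=60):
    # Point estimate: the tau0 at which the statistic on the adjusted
    # outcome equals its null expectation (zero); the statistic is
    # increasing in tau0, so bisection applies.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if treated_minus_control(tobit_adjusted(R, Z, mid), Z, sets) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy data: I = 500 sets of J = 6 mothers, true Tobit effect tau = 0.08.
rng = np.random.default_rng(0)
I, J = 500, 6
sets = np.repeat(np.arange(I), J)
Z = np.tile(np.array([1, 0, 0, 0, 0, 0]), I)
rC = rng.uniform(0.0, 1.0, I * J)                      # control work fraction
R = np.where(Z == 1, np.maximum(0.0, rC - 0.08), rC)   # observed outcome
print(round(tobit_estimate(R, Z, sets), 3))            # recovers ~0.08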
The second model relates the effect on workforce participation to the effect on the number of children, that is, the two outcomes in Figure 3. Write $D_{ij}$ for the number of children, with $D_{ij} = d_{Tij}$ if $Z_{ij} = 1$ and $D_{ij} = d_{Cij}$ if $Z_{ij} = 0$. The second model says the effect of twin-versus-different-sex-single-child on the workforce outcome is proportional to the effect on the number of children, $r_{Tij} - r_{Cij} = \beta(d_{Tij} - d_{Cij})$. Under this model, $R_{ij} - \beta D_{ij} = r_{Tij} - \beta d_{Tij} = r_{Cij} - \beta d_{Cij}$ does not change with $Z_{ij}$, so (i) the null hypothesis $H_0: \beta = \beta_0$ is tested by testing the hypothesis of no effect of the treatment $Z_{ij}$ on $R_{ij} - \beta_0 D_{ij}$, (ii) a confidence interval for β is obtained in the usual way by inverting the test, and (iii) a sensitivity analysis for biased $Z_{ij}$ is conducted in the usual way; see Rosenbaum (1996) and Imbens and Rosenbaum (2005). This model embodies the exclusion restriction in saying that if the twin did not alter the total number of children for mother ij, so $d_{Tij} = d_{Cij}$, then it did not alter her workforce participation, $r_{Tij} = r_{Cij}$. For instance, if mother ij had a twin on her second birth, $Z_{ij} = 1$, she might have three children, $d_{Tij} = 3$, where perhaps she would have had two children if she had had a different-sex single child at the second birth, $d_{Cij} = 2$, so for this mother the twin causes a 1-child increase in her number of children, $d_{Tij} - d_{Cij} = 1$, and hence a change in workforce participation of $r_{Tij} - r_{Cij} = \beta(d_{Tij} - d_{Cij}) = \beta$. Some other mother, $i'j'$, might have had three children regardless, $d_{Ti'j'} = d_{Ci'j'} = 3$, in which case the twin caused no increase in her number of children, $d_{Ti'j'} - d_{Ci'j'} = 0$, so $r_{Ti'j'} - r_{Ci'j'} = 0$. Baiocchi et al. (2010) show that randomization inferences (i.e., inferences with $\gamma = \phi_s - \phi_{s'} = 0$) for β under the model $r_{Tij} - r_{Cij} = \beta(d_{Tij} - d_{Cij})$ are identical to randomization inferences for the effect ratio, $\left(\sum_{i=1}^{I}\sum_{j=1}^{J} r_{Tij} - r_{Cij}\right) / \left(\sum_{i=1}^{I}\sum_{j=1}^{J} d_{Tij} - d_{Cij}\right)$, which is the effect on workforce participation per added child, and this is true whether or not the exclusion restriction holds. For instance, $\beta = -0.1$ would be a 0.1 reduction in the average work fraction per additional child, whether or not $r_{Tij} - r_{Cij} = \beta(d_{Tij} - d_{Cij})$ for each individual ij. Without the model $r_{Tij} - r_{Cij} = \beta(d_{Tij} - d_{Cij})$, but with the exclusion restriction, the effect ratio can be interpreted as the average effect on workforce participation per child among mothers who had additional children because of the twin; see Angrist, Imbens and Rubin (1996).

Table 4. Inference about the proportional effect, β. For each Γ, the sensitivity analysis gives the maximum possible P-value testing the null hypothesis of no treatment effect, $H_0: \beta = 0$, the minimum one-sided 95% confidence interval and the minimum possible point estimate. Inferences use the mean, but M-estimates with Huber weights produced similar results.

Table 4 draws inferences about the proportional effect, β. The test of no treatment effect is the same as in Table 3, so the P-values in the two analyses are equally sensitive to unmeasured biases. In the absence of unmeasured bias, Γ = 1, the point estimate of β suggests a 5% reduction in the work fraction per additional child. We have been looking at the effects of twins versus the popular mix of children of both sexes. The effects appear to be small.
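Continuing the sketch above (and under the same caveats), the proportional-effect analysis needs no search at all: setting the treated-minus-control mean of the adjusted outcome R − β0D to zero and solving for β0 gives the Wald-type effect ratio directly.

def test_beta(R, D, Z, sets, beta0):
    # Test statistic for H0: beta = beta0: the treated-minus-control mean
    # of the adjusted outcome R - beta0 * D, which is free of the
    # treatment effect under the proportional model.
    return treated_minus_control(R - beta0 * D, Z, sets)

def effect_ratio(R, D, Z, sets):
    # The statistic above is linear and decreasing in beta0 (twins raise
    # D), so equating it to zero has a closed-form solution: the effect
    # on work fraction divided by the effect on number of children.
    return (treated_minus_control(R, Z, sets)
            / treated_minus_control(D, Z, sets))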
5. Discussion. Isolation, as we have defined it, is used in the following situation. One of several treatments may be inflicted upon individuals (or self-inflicted) at certain moments in time. The timing t of treatment may be severely biased by both measured and unmeasured time-varying covariates, but there may be two treatments, s and s′, such that conditionally given some treatment at t, the occurrence of treatment s in lieu of treatment s′ is close to random. Isolation focuses attention on that brief moment and random aspect by controlling for measured time-dependent covariates using risk-set matching and by removing a generic bias using a differential comparison. Stated precisely, isolation refers to the radical simplification of the conditional probability in (1) that occurs when $\phi_s = \phi_{s'}$; then, the unobserved time-dependent covariate $u_{ijt}$ that would bias most comparisons does not bias a risk-set match of treatment s in lieu of s′. This radical simplification, when it occurs, justifies one very specific analysis: the comparison of matched sets with similar observed histories to time t where some individual received treatment s and the rest received treatment s′. In the case study, the timing of births is biased by a woman's plans and aspirations for education, career and family, but conditionally given a birth at time t, the occurrence of twins rather than a single birth is largely unaffected by her plans.
In a different study that employed similar reasoning, Nagin and Snodgrass (2013) examined the effects of incarceration on subsequent criminal activity. The substantial difficulty is that judges decide in a thoughtful manner whether to imprison an individual convicted of a crime. When two people are convicted of the same crime, it is far from a random event when one is sent to prison and the other is punished in a different way. Nagin and Snodgrass looked at counties in Pennsylvania in which some judges were much harsher than others, sending many more convicts to prison. Committing a crime is not haphazard, nor is a judge's decision, but having your case come to trial when judge A rather than judge B is next available is, in most instances, a haphazard event. Nagin and Snodgrass contrasted the subsequent criminal activity of individuals with similar pasts who were tried before harsh judges and those tried before lenient judges in the same county at about the same time, so each convict might have received either judge. They found little or no evidence in support of the widespread belief that harsher judges and harsher sentences reduce the frequency of subsequent rearrest.
A similar strategy is sometimes used in studies of differential effects of biologically different drugs used to treat the same disease. The differential effect may be less confounded than the absolute effect of either drug, particularly if the choice of drug is determined by something haphazard. For example, Brookhart et al. (2006) compared the gastrointestinal toxicity caused by COX-II inhibitors versus NSAIDs by comparing the patients of physicians who usually prescribe one versus those who usually prescribe the other. See also Gibbons et al. (2010) and Ryan et al. (2012).
Study on Natural Passivation Behavior of Corrosion-resistant Steel Rebars in the Mortar
The passivation behavior of HRB400 carbon steel rebar, 304 austenitic stainless steel rebar and 2304 austenitic-ferritic duplex stainless steel rebar during the curing stage of mortar was studied by electrochemical testing techniques such as open-circuit potential, electrochemical impedance spectroscopy and linear polarization curves, and the corrosion resistance of the passivation films of the three rebars was compared to provide a reference for the practical engineering application of corrosion-resistant steel rebars. The study shows that all three types of steel rebars can be passivated during the 28-day curing stage of mortar; 2304 duplex stainless steel rebar has the best passivation film corrosion resistance, followed by 304 stainless steel rebar, with HRB400 steel rebar the worst.
Introduction
Reinforced concrete is so widely used that it has become the most basic material for transport infrastructure such as cross-sea bridges, operating platforms, port terminals, immersed tube tunnels, etc. However, due to the harsh marine service environment, reinforced concrete structures suffer serious corrosion under the long-term penetration of chloride, causing premature structural failure, which brings a series of economic, safety and environmental problems (Melchers and Li 2009). It is well known that the main factor causing the failure of marine concrete structures is reinforcement corrosion (Ahmad 2003, Berke et al. 1992, Moser et al. 2012). During the early stage of cement hydration, the steel reinforcement generates a relatively dense passivation film at the concrete/steel interface under the highly alkaline conditions, which can prevent corrosion to a certain extent. However, the presence of a large amount of Cl⁻ both in the concrete materials and in the external environment can cause serious damage to the passivation film (Angst et al. 2009, Schueremans et al. 1999). Owing to the denser passivation film on the surface of corrosion-resistant steel rebars and their excellent resistance to chloride attack, using corrosion-resistant steel rebars to replace carbon steel rebars is one of the important ways to reduce the corrosion risk of steel rebars and extend their service life (Bertolini et al. 1996, Castro et al. 2003, Baddoo 2008, Perez-Quiroz et al. 2008). 304 austenitic stainless steel rebar and 2304 austenitic-ferritic duplex stainless steel rebar have been applied in some marine infrastructure due to their high resistance to corrosive media and low whole-life cost (Aldaca et al. 2020, Padhy et al. 2011). A large amount of work has been performed to investigate the passivation behavior of these stainless steel rebars in concrete-simulating solutions (Barbosa et al. 1991, Ogunsanya and Hansson 2019, Xya et al. 2020, Yuan et al. 2020), but the concrete pore simulation solution is not the same as the actual environment in which the reinforcement is placed, which has hindered the practical application of corrosion-resistant steel rebars. Therefore, it is necessary to study the passivation behavior of different corrosion-resistant steel rebars in solid structures to provide better guidance for engineering applications.
In this work, the natural passivation behaviors of 304 stainless steel rebar and 2304 duplex stainless steel rebar were studied, with HRB400 carbon steel rebar cast in mortar as a reference, and the passivation behavior during the curing stage was investigated using open circuit potential, electrochemical impedance spectroscopy and linear polarization curves. The results will provide guidance for the application of corrosion-resistant rebar in marine structures.
Experimental
The experimental materials were HRB400 carbon steel rebar, 304 stainless steel rebar and 2304 duplex stainless steel rebar, whose compositions are shown in Table 1. The rebars were cut into Φ2 cm × 2 cm pieces (exposed area of 3.14 cm²), the end was welded to a copper wire and sealed with epoxy resin, and after curing the samples were ground step by step with 40#, 100#, 360#, 600#, 1000# and 2000# sandpaper, then rinsed with ethanol and deionized water and blown dry. The prepared rebar samples were cast in mortar in a 100 mm × 100 mm × 100 mm mold with a cover thickness of 2 cm; the mix proportions of the mortar are shown in Table 2. Ordinary Portland cement and standard sand were used. The mortar was placed in calcium hydroxide solution for curing and electrochemical testing. The electrochemical testing facility was a Princeton 2273. The open circuit potential, electrochemical impedance spectroscopy and linear polarization curves were tested daily during the 28-day curing period of the reinforced mortar system. The frequency range of the electrochemical impedance spectroscopy test was 100 kHz to 10 mHz with an amplitude of 10 mV, and the data were analyzed using ZSimpWin software to fit a reasonable equivalent circuit. The linear polarization curve test polarized the rebar within ±20 mV of the corrosion potential Ecorr at a scan rate of 3 mV/s. After fitting, the polarization resistance Rp of the rebar was obtained, and the corrosion current density icorr was calculated from the Stern–Geary formula (icorr = B/Rp), where B is the Tafel constant, generally taken as 52 mV when the rebar is in the passive state (Law et al. 2000).

Results and discussion

The electrochemical impedance spectra of the three rebar–mortar systems were tested at different ages, and the results are shown in Figure 2. It can be seen from the figure that the radius of the capacitive arc, the low-frequency impedance modulus and the phase angle peak width of the three rebar systems gradually increased with age, and the maximum phase angle moved toward lower frequency. After 28 d of curing, the capacitive arc radius and low-frequency impedance modulus of the 2304 duplex stainless steel rebar were larger than those of the 304 stainless steel rebar, and those of the HRB400 plain rebar were the smallest. The Bode plots of the three types of rebar show that the phase angles in the low-frequency range exhibit asymmetry, indicating the existence of two overlapping time constants (Xya et al. 2020); moreover, the maximum phase angle in the Bode plots is less than 90°. Based on these results, the data were fitted using the equivalent circuit of Figure 3 (Zheng et al. 2020), considering the redox reaction on the surface of the rebar (Shi et al. 2020), where Ro is the mortar resistance, Rct is the charge transfer resistance, Qdl is the double layer capacitance, Rf is the passivation film resistance, and Qf is the passivation film capacitance. The fitted parameters of the electrochemical impedance spectra of the three rebar–mortar systems at different ages are shown in Table 3, and the variation of the charge transfer resistance Rct and of the passivation film resistance Rf with age is shown in Figure 4.
The Rct of both the HRB400 plain rebar and the 304 stainless steel rebar grew slowly during the 28 d curing process, while that of the 2304 duplex stainless steel rebar grew rapidly during 1 d to 7 d and slowly at later ages. The Rf of all three types of rebar increased with age, and the passivation film resistance Rf of the HRB400 plain rebar, 304 stainless steel rebar and 2304 duplex stainless steel rebar reached 0.26 MΩ, 1.51 MΩ and 2.69 MΩ after 28 d of curing, respectively. Thus the film of the 2304 duplex stainless steel rebar is better than that of the 304 stainless steel rebar, which in turn is better than that of the HRB400 plain rebar.

The polarization curves of the three rebar–mortar systems were tested at different ages, and the results are shown in Figure 5. With increasing age, the linear polarization curves shifted negatively along the X-axis and positively along the Y-axis, i.e., the corrosion potential kept increasing and the corrosion current density kept decreasing, indicating that the corrosion probability and corrosion rate of the rebar gradually decreased; that is, the three types of rebar gradually passivated during the curing stage of the mortar and the surface passivation film gradually stabilized. The fitted results of the linear polarization curves are shown in Figure 6. With increasing age, the corrosion current densities of the three types of rebar in mortar all decreased to different degrees, and after 28 d of curing, the corrosion current densities of the HRB400 rebar, 304 stainless steel rebar and 2304 duplex stainless steel rebar were 0.492 μA/cm², 0.237 μA/cm² and 0.171 μA/cm², respectively, indicating that during the 28 d of curing, the 2304 duplex stainless steel rebar passivated best, followed by the 304 stainless steel rebar, with the HRB400 rebar the worst.
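As a worked example of the Stern–Geary conversion described in the Experimental section, the short Python snippet below turns a fitted polarization resistance into a corrosion current density; the Rp value used here is hypothetical, chosen only to land near the reported HRB400 value.

B_mV = 52.0        # Tafel (Stern-Geary) constant for the passive state, mV
AREA_CM2 = 3.14    # exposed rebar area, cm^2

def icorr_uA_per_cm2(Rp_ohm):
    # icorr = B / Rp gives a current (A) when Rp is in ohms;
    # normalize by the exposed area and convert to uA/cm^2.
    return (B_mV / 1000.0) / Rp_ohm / AREA_CM2 * 1e6

print(round(icorr_uA_per_cm2(33600.0), 3))  # ~0.49 uA/cm^2, HRB400-like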
Conclusion
- The electrochemical tests indicated that the HRB400 rebar, 304 stainless steel rebar and 2304 duplex stainless steel rebar all passivated during the 28 d curing of the mortar. The HRB400 rebar passivated faster during 1-7 d of mortar curing and more slowly in the later period. The surface passivation films of the 304 and 2304 stainless steel rebars grew rapidly during 1-14 d and continued to passivate in the later period of mortar curing.
- The 2304 duplex stainless steel rebar has the best corrosion resistance of the surface passivation film after cement hydration; its film resistance is 1.5 times that of the 304 stainless steel rebar and 10.2 times that of the HRB400 plain steel rebar.
Figure 1 shows the open circuit potential versus time for the HRB400 plain rebar, 304 stainless steel rebar and 2304 duplex stainless steel rebar mortar systems at different ages. The open-circuit potential of all three types of rebar increased with age and grew more rapidly before 7 d. The open-circuit potential of the HRB400 plain rebar grew slowly to −210.85 mV during 7 d to 28 d, while the open-circuit potentials of the 304 stainless steel rebar and 2304 duplex stainless steel rebar in mortar still grew faster, to −110.58 mV and −116.61 mV. This indicates that the surfaces of the three rebar types stabilized during the curing of the mortar and were in a passivated state within 28 d.

Figure 3. Equivalent circuit diagram used for rebars during the curing stage of mortar.

Figure 4. Fitting results of the electrochemical impedance spectroscopy of the three types of rebar during the curing stage of mortar: (a) variation of the charge transfer resistance Rct with age; (b) variation of the passivation film resistance Rf with age.

Figure 5. Linear polarization curves of the three types of rebar at different ages during the curing stage of mortar.

Figure 6. Fitting results of the linear polarization curves.

Table 1. The main components of the three types of steel rebar.

Table 3. Fitting parameters of the electrochemical impedance spectroscopy of the three rebar–mortar systems at different ages.
Optimal operation of integrated electrical, district heating and natural gas system in wind dominated power system
Nowadays, the installed capacity of renewable energy sources is increasing at a high rate, and an even higher increase is expected in the future. In order to accommodate renewable energy sources, the integration of gas, electricity and district heating networks is a promising solution. This paper provides a coordinated operation and analysis of an electricity, district heating and natural gas system with an integrated wind farm. The interactions among different energy sectors will provide the additional flexibility required by the future renewable energy system. A nonlinear optimization problem is presented with a focus on decreasing the operational cost and improving the efficiency of the integrated system while meeting the demands. Optimal operation is performed for a test case system, including the constraints of the individual systems and the linkages between them. The test system includes electrical, district heating and natural gas subsystems with thermal and gas storages, combined heat and power and power-to-gas units, and a wind turbine. Simulation results show that the integration of the electricity, heating and natural gas systems decreases the operational cost and provides higher flexibility to the system. Moreover, wind curtailment is reduced with the integration of P2G.
Introduction
Long-term targets for 2020, 2030 and 2050 for European countries have been introduced by the European Commission, with the main target of having a sustainable and secure energy system by 2050. Taking Denmark as an example, in 2015 wind power penetration was 50%, and in 2014 wind power generation exceeded demand for almost 1000 h. Denmark's main goal is a 100% renewable energy source (RES) based system by 2050 [1]. Nowadays, the installed capacity of RES is increasing at a high rate and replacing conventional generators. Consequently, a mismatch between generation and demand can occur, leading to an unstable system. In order to realize the long-term goal of RES-based systems operating in a secure, reliable and economic manner, the synergy between electricity, heat and gas has to be explored [2].
Extensive studies have been carried out on the integration of energy storages and flexible loads to accommodate RES, including [3]. Furthermore, electric water heaters, combined heat and power (CHP) and heat pumps are widely used, and an extensive literature exists on the linkage between the electrical power system (EPS) and the district heating system (DHS). In [4], the optimal dispatch of the EPS and DHS of an urban area with increased RES is performed with the aim of decreasing the operational costs of both systems. The authors of [7] introduced and compared two methods, decomposed and integrated, for combined analysis of EPS and DHS, where only the optimal dispatch of the EPS is considered. Optimization of the operation of a DHS with thermal storage in pipelines is presented in [8], and optimization of the supply temperature and mass flow rate in a DHS with minimization of operational cost and losses is presented in [9]. When it comes to the natural gas system (NGS) and EPS, the basic principles of modelling the NGS and the optimal power flow of EPS and NGS are presented in [10]. Moreover, the link between the NGS and EPS used to accommodate RES is the power-to-gas (P2G) unit, which has a fast response to changes in generation and demand according to [5,6]. Furthermore, it can convert excess electricity from RES to hydrogen or methane, which can then be stored in gas pipelines or storage for longer periods [5]. The interaction of NGS and EPS through P2G units is studied in [11], and a steady-state analysis of an NGS and EPS with P2G and CHP units is presented in [5]. Based on the work done in the mentioned papers, the interactions among different energy sectors should provide the higher flexibility required by the future RES system while minimizing operational costs and losses. One of the prominent solutions to balance renewable production and increase flexibility is to integrate EPS, DHS and NGS with energy storages and coupling components. In this paper, a multi-period optimal operation focused on minimizing the operational costs of all systems and detailed constraints of the integrated network are presented. Firstly, mathematical models are used to describe the DHS, including demand, thermal storage and heating sources such as CHP units. In the DHS, the water temperature is an important parameter that must be maintained within permissible operational limits, and the effect of water temperature on the DHS is explained further in Section 2.2. Secondly, a mathematical model of the NGS including the gas source, P2G unit and gas storage is developed. Finally, the integration of EPS, DHS and NGS, including their components and links, and an analysis of the interactions between electricity, gas and heat are conducted. An optimal operation strategy for the integrated energy system is then developed. The optimal operation, including the mathematical models, is performed in the General Algebraic Modeling System (GAMS) software, and the results are additionally verified through the MATLAB YALMIP toolbox and the MATPOWER toolbox.
This paper is organized as follows. The second section introduces the mathematical modelling of the EPS, DHS and NGS. Accordingly, the linking components of the integrated system are presented with the corresponding constraints. In section three, the test case of the integrated system is presented and analysed. Section four provides the results for the test case system. Section five gives the conclusions of this work.
Modeling of electric power system
Optimal power flow studies have been performed for a long time, where the main objective is to supply the load reliably and as economically as possible. The power flow study in this work is an optimal AC power flow. The goal of this study is to obtain the generators' active and reactive power and the voltage angle and magnitude at each bus. Active and reactive power flow balance equations for every bus in the EPS, in the form of equality constraints, are shown in (1)-(2), with the operational limits of the variables in (3)-(4) [12]. To further expand the power flow balance equation with respect to CHP, P2G and the wind turbine, (5) is presented with its operational constraint in (6).
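Equations (1)-(2) themselves are not reproduced in this excerpt; for reference, the standard polar-form AC nodal balance that such formulations use can be written as follows (a generic textbook form, not necessarily the paper's exact notation):

P_{Gi} - P_{Di} = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right),
Q_{Gi} - Q_{Di} = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right),

where P_{Gi}, Q_{Gi} and P_{Di}, Q_{Di} are the active/reactive generation and demand at bus i, G_{ij} + jB_{ij} is the corresponding bus admittance matrix entry, and \theta_{ij} = \theta_i - \theta_j is the voltage angle difference.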
Modeling of district heating system
The DHS consists of supply pipelines delivering heat from the heat source to the heat demand and return pipelines coming back to the heat source. As mentioned in Section 1, the water temperature is an important parameter. The supply temperature varies from 70°C to 120°C, while the return temperature is lower, depending on the heat extracted by the heat load [7]. The DHS model consists of a hydraulic and a thermal model. The variable in the hydraulic model is the mass flow, while the thermal model includes the supply and return temperatures and the heat power delivered by the heat source. The hydraulic continuity-of-flow equation states that the mass flow entering a node equals the mass flow leaving the node plus the mass flow consumed at that node, as shown in (7). The thermal model consists of the nodal heat balance equation and the temperature drop and temperature mix equations. The heat balance of a DHS consisting of a heat source, heat load and storage is presented in (8); positive heat power means releasing power to the system, while negative indicates heat loss or heat being consumed by the load. The temperature drop equation is provided in (9), where λ depends on the diameter and heat transfer coefficient of the pipeline; in this work a constant of 0.4 is taken for λ. The heat loss of each pipeline is due to heat transfer along the pipe from the high water temperature to the ambient temperature, taken here as 10°C, and the total heat loss of the DHS is the sum of the heat losses in each pipeline. The temperature drop equation is applicable to both the supply and the return network. Furthermore, the temperature mix equation shown in (10) is used when there is more than one incoming mass flow at a node: the temperature of the mass flow leaving the node is calculated from the temperatures of the incoming mass flows [7]. The operational limits for the DHS and the heat storage balance equation are presented in (11)-(16). The lower and upper temperature limits for the DHS in this work are 30°C and 80°C, respectively.
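The exact forms of (9) and (10) are likewise not shown in this excerpt; below is a minimal Python sketch of the two thermal-model building blocks, using a common exponential form of the temperature drop and flow-weighted mixing (illustrative values only, not the paper's network data):

import math

def pipe_outlet_temp(T_in, T_amb, lam, length, m_flow, cp=4182.0):
    # Exponential temperature drop along a pipe: heat is lost to the
    # ambient in proportion to the temperature difference; lam is a
    # per-metre heat loss coefficient and cp the specific heat of water.
    return T_amb + (T_in - T_amb) * math.exp(-lam * length / (cp * m_flow))

def node_mixing_temp(m_flows, temps):
    # Temperature mix at a node: flow-weighted average of the incoming
    # streams, from conservation of energy.
    return sum(m * T for m, T in zip(m_flows, temps)) / sum(m_flows)

T_mix = node_mixing_temp([2.0, 1.0], [80.0, 55.0])
T_out = pipe_outlet_temp(T_mix, T_amb=10.0, lam=0.4, length=500.0, m_flow=3.0)
print(round(T_mix, 2), round(T_out, 2))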
Modeling of natural gas system
The NGS consists of loads, pipes, gas compressors (GC), storages and P2G plants. While natural gas is being transported, there is a gas pressure loss due to pipeline and gas friction, and an energy loss due to heat transfer between the gas and the environment along the pipelines. Therefore, compressors are integrated in the system to compensate for the losses and to keep the gas flowing; a part of the transported gas is used to power the compressors. Besides the gas source supply, P2G units are integrated, which use surplus electricity to produce hydrogen. For modelling the NGS, the following equations are used. Firstly, (17) is the pipeline flow equation, with the gas pressure at the reference node fixed at 1 MPa each hour. Secondly, the nodal gas flow balance is shown in (18). The energy consumption of the GC is given in (19). As mentioned, the system includes storages to optimize electricity and heat production: in cases of overproduction, the CHP decreases its generation, and while the heating demand is high, the heat is provided by the heat storage. The balance of the gas storage is shown in (20), and the operational limits of the NGS variables are presented in (21).
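Since (17) and (20) are not reproduced here, the sketch below illustrates two common modelling choices consistent with the description: a Weymouth-type steady-state pipeline flow and an hourly storage balance. The constant C and all numbers are assumptions for illustration, not the paper's parameters.

import math

def pipeline_flow(p_from, p_to, C):
    # Weymouth-type flow: directed from high to low pressure and scaling
    # with sqrt(|p_from^2 - p_to^2|); C lumps pipe diameter, length,
    # friction factor and gas properties.
    dp2 = p_from ** 2 - p_to ** 2
    return math.copysign(C * math.sqrt(abs(dp2)), dp2)

def storage_level(prev_level, inflow, outflow, dt=1.0):
    # Hourly storage balance, to be kept within capacity limits:
    # level_t = level_{t-1} + (inflow - outflow) * dt.
    return prev_level + (inflow - outflow) * dt

print(round(pipeline_flow(1.0, 0.95, C=0.8), 4))  # flow out of the 1 MPa node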
Integrated electricity, district heating and natural gas test case system
The test case of the integrated 4-bus EPS, 6-node DHS and 5-node NGS is presented in Fig. 1. Electricity, heat and gas demand and wind power generation data are provided by the Danish transmission system operator, Energinet. The time interval of the obtained data is 1 h, and a simulation period of 1 day in January is chosen. The data are converted to the per-unit system, where the base values of power and gas pressure are 1 MW and 1 MPa, respectively. Additional data for the test system are presented in Table 1 and Table 2, according to [4,5,6]. The operation of the integrated test case model is solved by the IPOPT solver in GAMS.
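To indicate the shape of the optimization without reproducing the paper's GAMS model, here is a deliberately tiny single-bus, single-hour Pyomo sketch of the same kind of coupled dispatch; all efficiencies, demands, bounds and costs are invented for illustration, and the real model is multi-period and networked.

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, value)

m = ConcreteModel()
m.wind  = Var(within=NonNegativeReals, bounds=(0, 0.6))  # usable wind, pu
m.chp_g = Var(within=NonNegativeReals, bounds=(0, 1.0))  # gas into CHP, pu
m.p2g   = Var(within=NonNegativeReals, bounds=(0, 0.1))  # electricity into P2G
m.src   = Var(within=NonNegativeReals, bounds=(0, 2.0))  # gas source, pu

eta_e, eta_h, eta_p2g = 0.35, 0.45, 0.6   # assumed conversion efficiencies
e_dem, h_dem, g_dem = 0.5, 0.3, 0.4       # assumed demands, pu

# CHP couples gas to electricity and heat; P2G couples electricity to gas.
m.elec = Constraint(expr=m.wind + eta_e * m.chp_g == e_dem + m.p2g)
m.heat = Constraint(expr=eta_h * m.chp_g >= h_dem)
m.gas  = Constraint(expr=m.src + eta_p2g * m.p2g == g_dem + m.chp_g)

# Fuel cost plus a small penalty on curtailed wind (0.6 pu is available).
m.cost = Objective(expr=10.0 * m.src + 0.1 * (0.6 - m.wind))

SolverFactory('ipopt').solve(m)
print(value(m.src), value(m.chp_g), value(m.p2g), value(m.wind))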
Results of optimal operation of integrated system
Fig. 2.a) shows the optimal generation of the EPS based on the lowest operational cost. Generation includes the active output power of the wind turbine and the CHP, while demand includes the electricity demand of the EPS and the electricity consumed by the P2G for the electrolysis and methanation processes. As can be noticed, at moments of surplus wind power, the P2G uses the excess electricity and converts it to gas. Wind power production is highest in the morning and evening, and accordingly the overall demand is higher in those periods due to the electricity consumption of the P2G. There is wind spillage of 0.025 pu to 0.1 pu during the last four hours of the day; with a higher P2G gas supply limit of 0.2 pu, the wind spillage would be zero. It can therefore be observed that the optimal operation of the EPS is obtained. The optimal DHS generation and thermal demand are shown in Fig. 2.b). The heat demand includes only the heat load (red curve), while the blue curve shows the heat demand plus losses; the thermal loss over 24 hours is 10.3%. In this work, the heat is provided by the CHP and the heat storage. The heat provided by the heat storage is almost constant during the entire day, so the storage level drops as seen in Fig. 3.b): the initial heat storage state is 0.5 and decreases to 0.167 by the end of the day, without reaching the minimum permitted limit of the storage capacity. The results for the optimal generation and demand of the NGS are shown in Fig. 3.a). The gas supply consists of the gas source, P2G and gas storage supply, while the total demand (black curve) is the gas load plus the gas consumption of the CHP. It can be seen that the P2G supplies a larger amount of gas at the moments of higher excess electricity from the wind turbine; accordingly, the gas storage also supplies gas. The initial value of the gas storage capacity is 0.5, reaching 0.1 at the end of the day, as seen in Fig. 3.b). Both the heat and gas storage are affected by the EPS. The heat provided by the CHP is high during the entire day, resulting in an almost constant heat supply by the heat storage. As for the NGS, due to the high wind production in the last four hours of the day, the P2G reaches its maximum available gas supply of 0.1 pu, as seen in Fig. 3.a). In that period, the gas storage increases its output, reaching its maximum permitted output in order to maintain a cost-effective system; this behavior can be observed in Fig. 3.b), where the increase of gas storage output appears after the intersection of the two curves. The water temperature at each node of the DHS is presented in Fig. 4.a). The temperatures of node 1 and node 6 are the highest, since the heat source and heat storage are connected to those nodes and heat is supplied to the grid there. The temperature after the heat is consumed by the load is lower than the temperature at node 5; the temperature of node 5 therefore corresponds to the mix of the incoming temperature from the load and the incoming temperature of 80°C from node 6. Accordingly, a decrease in temperature at node 4 is seen due to the losses caused by the difference between the water temperature in the pipes and the ambient temperature; the same can be observed for node 2 and node 3. Looking at the heat demand profile in Fig. 2.b), it can be noticed that the temperature at node 5 decreases at moments of heat load increase. The gas pressure in the NGS is maintained within secure limits, and its behavior is similar to the temperature behavior in the DHS.
With higher gas demand, the gas pressure drops. Fig. 4.b) shows the gas flow rate in each pipe of the NGS. Pipes 12, 23 and 35 have positive gas flows for some hours, while the flow in pipe 34 is reversed during the entire day. This means that the loads at node 5 and node 2 are additionally supplied by the gas storage and P2G at moments of high wind turbine power output; the reversed flow along pipe 34 is due to the gas provided by the gas storage to the system.
Conclusion
This paper investigates the coordination between EPS, DHS and NGS and provides an optimal operation of the integrated system with the objective of minimizing the total operational cost of all three systems. For simulation purposes, a test case model consisting of a CHP providing heat and electricity, a gas source, P2G, wind turbines and heat and gas storages is used. Mathematical modelling of the EPS, DHS and NGS with their coupling components is performed, and the integrated model with its constraints is solved in GAMS. Simulation results show that the integration of EPS, DHS and NGS with linking components provides flexibility to the system in a way that reduces fuel consumption and the fluctuations due to the variable wind output power. Accordingly, the outcome of this work shows that excess electricity can be converted to gas through the P2G unit, and wind curtailment can therefore be reduced. To conclude, the integration of EPS, DHS and NGS is a promising solution for providing flexibility to the power system.
Immunoregulation of asthma by type 2 cytokine therapies: Treatments for all ages?
Asthma is classically considered to be a disease of type 2 immune dysfunction, since many patients exhibit the consequences of excess secretion of cytokines such as IL‐4, IL‐5, and IL‐13 concomitant with inflammation typified by eosinophils. Mouse and human disease models have determined that many of the canonical pathophysiologic features of asthma may be caused by these disordered type 2 immune pathways. As such considerable efforts have been made to develop specific drugs targeting key cytokines. There are currently available multiple biologic agents that successfully reduce the functions of IL‐4, IL‐5, and IL‐13 in patients, and many improve the course of severe asthma. However, none are curative and do not always minimize the key features of disease, such as airway hyperresponsiveness. Here, we review the current therapeutic landscape targeting type 2 immune cytokines and discuss evidence of efficacy and limitations of their use in adults and children with asthma.
Introduction
Asthma is typically characterized by eosinophilic inflammation and airway hyperresponsiveness coupled with a variety of structural airway changes that are collectively termed airway remodeling. It has traditionally been thought of as a disease of type 2 immunity due to increased levels of interleukin (IL)-13, IL-4, and IL-5. Disordered T-cell immunity and hypersecretion of these canonical type 2 cytokines are central to the immune pathophysiology associated with multiple allergic disorders, including asthma.
Therefore, these cytokine pathways have been the focus of interest as potential targets for novel asthma therapies. Data from mouse models indicate that these cytokines are responsible for many of the classical features of asthma; thus, efforts from pharmaceutical companies have been directed toward blocking their function in vivo, via either neutralizing antibodies or receptor blocking molecules. A range of clinical trials with these agents serve as in vivo human "experiments" and have revealed important findings indicating the function of type 2 cytokines in humans that do not always reflect the data previously collected from mouse models. Here, we review the role of type 2 cytokines in driving cellular pathophysiology in asthma patients and discuss how clinical trials have added to our body of knowledge regarding the function of type 2 cytokines in human biology. Importantly, we also review the efficacy of targeting type 2 cytokines in children with asthma, how responses may vary from those in adults, and the importance of designing clinical trials in age-appropriate individuals to directly compare the efficacy and mechanism of action of type 2 biologics.

Type 2 cytokines

The genes encoding the type 2 cytokines are clustered on chromosome 5 in humans and chromosome 11 in mice. Although they share some common features, each also has a distinct functional profile (Fig. 1). IL-4 is produced by activated T cells, mast cells, basophils, innate lymphoid cells (ILCs), and eosinophils. It is a critical cytokine for driving type 2 immunity as it facilitates differentiation of naïve Th cells into Th2 cells. This is a property unique to IL-4, since other type 2 cytokines do not drive T-cell differentiation. IL-4 is also important for dendritic cell differentiation, required to maintain immune tolerance via stimulation of regulatory T cells (Tregs). Elevated IL-4 expression can enhance the type 2 response via repression and impairment of Treg tolerogenic functions [1,2]. IL-4 also promotes class switching of immunoglobulin to IgE in B cells; it increases expression of IgE receptors on the cell surface and, specifically on mast cells and basophils, enhances expression of the high-affinity IgE receptor [3,4]. IL-4 increases expression of VCAM on pulmonary vascular endothelial cells, thereby facilitating the transmigration of eosinophils across the vascular endothelium, and increases expression of eosinophilic chemokines such as the eotaxin family, as well as inhibiting eosinophil apoptosis [5]. IL-4 also exerts effects on nonimmune cells: enhancing mucin gene expression, thus promoting mucus hypersecretion, and increasing secretion of cytokines from fibroblasts, potentially enhancing tissue remodeling [6].
IL-13 is primarily involved in the effector phase of type 2 immune reactions, and thus influences the development of many of the classical pathophysiological features of asthma due to its effects on lung structural cells such as epithelium, fibroblasts, smooth muscle cells, and endothelium, as well as immune cells [7]. IL-13 inhibits macrophage production of pro-inflammatory cytokines, TNF, IL-1β, and chemokines, but increases IL-12 secretion by macrophages and dendritic cells (DCs) [8]. Activation of B cells in the presence of IL-13 facilitates their proliferation and upregulates expression of CD23, MHCII, and IgM, and also promotes class switching to IgE and IgG1 [9]. Unlike IL-4, IL-13 plays no role in T-cell differentiation, since naïve T cells do not express IL-13R. IL-13 can influence the recruitment of eosinophils and basophils via generation of chemokines such as eotaxin. IL-13 is closely related to IL-4, but the responses evoked are generally smaller in magnitude [10]. IL-13 increases inducible nitric oxide synthase (iNOS) expression in airway epithelial cells, the predominant contributor to exhaled nitric oxide (FeNO), a measurement used as a clinical biomarker to indicate type 2 inflammation in the airways [11]. IL-13 also induces a variety of effects on stromal cells in the lungs that likely promote the induction of tissue remodeling: regulation of epithelial barrier function, differentiation and proliferation of mucus-secreting goblet cells, induction of smooth muscle hypertrophy and enhanced contractility, production of ECM, and myofibroblast differentiation [12].
IL-5 is a pro-inflammatory cytokine that is responsible for eosinophil maturation, differentiation, activation, and migration [13]. A range of animal models have demonstrated a close association between IL-5 and eosinophil function, particularly following exposure to allergens or during helminth infections. The effects of IL-5 on eosinophilopoiesis extend from the BM to the site of local inflammation, which in the case of allergen driven inflammation is the bronchial mucosa. Raised eosinophil numbers in the blood, sputum, and airway tissue and lavage are indicative of asthma in adult patients and correlate with elevated levels of IL-5 [14][15][16]. IL-5 promotes the differentiation and maturation of CD34 + eosinophil progenitors both in BM but also in the bronchial mucosa [14,17]. In addition, IL-5 synergizes with the eotaxin family of chemokines to promote the migration of eosinophils to the asthmatic airway [18,19]. Other effects that IL-5 exerts on eosinophils specifically include apoptosis, adhesion to endothelial cells as well as to the ECM, which may reflect a potential role in development of airway remodeling.
The predominant cellular sources of IL-5 include Th2 cells and ILC2, but natural killer T cells, mast cells, and eosinophils themselves are also capable of producing IL-5. IL-5 release from Th2 cells is triggered by inhaled allergen reacting with DCs, while secretion from ILC2 is dependent on GATA3 activation induced by epithelial derived alarmins such as IL-25, IL-33, and thymic stromal lymphopoietin (TSLP) [20].
Receptor binding
Type 2 cytokines bind to a series of receptors composed of distinct but also shared chains (Fig. 2). The IL-4R is a cell surface heterodimeric complex composed of a specific high-affinity alpha chain combined with a second chain that can be either the common gamma chain or the alpha chain of the IL-13R, in order to facilitate the action of either IL-4 or IL-13 within different tissues [21]. IL-4Rα coupled with the common gamma chain forms the type I IL-4 receptor and binds solely IL-4. In contrast, the type II heterodimeric complex comprises IL-4Rα coupled with IL-13Rα1; although IL-13Rα1 alone is a low-affinity binding receptor for IL-13, the complex binds IL-13 and IL-4 with high affinity. Both chains of the receptor are required for cellular activation, and only the alpha chain is needed for IL-4 binding (Fig. 2). The type I receptor is mainly expressed by hematopoietic cells, primarily lymphocytes, with minimal expression by nonhematopoietic cells such as epithelial or stromal cells. Conversely, the type II receptors are mainly expressed by epithelial cells, while myeloid cells express both types [21].
A soluble form of IL-4Rα is also capable of binding to IL-4 but lacks both a transmembrane and cytoplasmic domain and so cannot signal, thus can act as a decoy. This antagonistic receptor serves as an anti-inflammatory mechanism that counteracts the effects of IL-4 and might represent an autoregulatory or homeostatic mechanism [22]. Thus, sIL-4Rα has therapeutic potential as a cytokine antagonist, and clinical trials indicate it is safe and has some efficacy in moderate asthma; however, it is not currently being investigated as a therapeutic target [22,23].
IL-13 also has two receptors, but in contrast to IL-4, these comprise two separate binding chains: IL-13Rα1 and IL-13Rα2. The main high-affinity IL-13R complex comprises IL-13Rα1 in conjunction with IL-4Rα. IL-13Rα1 is expressed by a range of immune cells that include B cells, monocytes/macrophages, mast cells, and basophils, but not T cells, as well as endothelial cells and airway epithelial cells [24]. Although IL-13Rα2 binds IL-13 with much higher affinity than IL-13Rα1, it does not signal and is thus considered a decoy receptor. It potentially serves to downregulate IL-13 functions, although its function is not really known. Given that IL-4 can also signal through the IL-13R complex, it operates as a second receptor for IL-4 and is referred to as the type II IL-4R [21]. In contrast, the type I IL-4R complex, which consists of the IL-4Rα chain and the common γ chain, is specific for IL-4 (Fig. 2).

Figure 1. Functional profile of type 2 cytokines in asthma. In response to inhaled environmental triggers, hypersecretion of the type 2 cytokines IL-4, IL-5, and IL-13 drives the pathological features of asthma: IgE production, airway hyper-responsiveness, eosinophilia, goblet cell hyperplasia, and mucus hypersecretion. IL-4 production by Th2, ILC2s, mast cells, DCs, and eosinophils promotes IgE class switching, eosinophil transmigration, mucus hypersecretion, and fibroblast activation. Uniquely, IL-4 is key for the differentiation of naïve Th cells into Th2 cells. IL-5 is produced by various immune cells; however, Th2 and ILC2s are the predominant cellular sources. Eosinophil maturation, differentiation, and activation are all driven by IL-5. IL-13 has a similar functional profile to IL-4, is predominantly produced by Th2 and ILC2 cells, and is a key contributor to airway remodeling. Novel biologics developed to inhibit the function of IL-4/IL-13 (dupilumab, lebrikizumab) and IL-5 (benralizumab and mepolizumab), as well as the anti-IgE omalizumab, are depicted.

Figure 2. Receptor binding by type 2 cytokines. To exert their biologic effects, the type 2 cytokines bind to a series of receptors. IL-4 can signal via a type I or type II receptor. Upon IL-4 ligand binding to the IL-4Rα receptor, the recruitment of either the common gamma chain (γc, type I) or the alpha chain of the IL-13R (IL-13Rα1, type II) forms a heterodimeric complex. Type I receptors are expressed on all hematopoietic cells and are exclusive to IL-4 ligand binding. Nonhematopoietic cells such as the structural airway cells express IL-13Rα1, whereas myeloid cells express both the γc and IL-13Rα1 receptors. IL-4 type I receptor binding activates STAT6 or IRS2 to promote a type 2 (T2) response and macrophage and fibroblast activation. IL-13 ligand binding to the type II receptor IL-13Rα1 forms a heterodimeric complex with the IL-4Rα chain. Type II receptor binding by IL-4 or IL-13 leads to the activation of STAT3 or STAT6 to induce airway hyper-responsiveness (AHR) and mucus hypersecretion. IL-13Rα1 is expressed on a range of immune and structural airway cells. IL-13 binds with higher affinity to IL-13Rα2, known as a "decoy receptor", currently thought to attenuate IL-13/IL-4/STAT6 signaling. IL-5 ligand binding to the type I IL-5Rα, which is expressed only on eosinophils and basophils, induces the recruitment of the common βc chain to promote STAT1/2/5 signaling and modulate eosinophil biology.
Alveolar macrophages are the most predominant immune cell in the lung and capable of differentiating into distinct subtypes in response to different stimuli. Resting macrophages are activated by IL-4 and IL-13, developing into a macrophage phenotype that promotes repair and remodeling [25]. Thus, both protective and pathogenic roles have been identified for these key effector cells. Ongoing studies will ascertain the benefit of targeting alveolar macrophages subtypes for the treatment of asthma [26].
The biological effects of IL-5 are mediated following binding with the specific IL-5R. This is a type I cytokine receptor consisting of a heterodimer of the IL-5Rα subunit combined with a nonspecific common βc subunit, which is also found in the receptor complexes for IL-3 and GMCSF [27]. The IL-5Rα is exclusively expressed by eosinophils and some basophils, and as a monomer is a low affinity receptor, while dimerization with the β-chain produces a high affinity receptor. The alpha subunit binds IL-5 while the β subunit is nonligand binding but facilitates signal transduction and is expressed on virtually all leukocytes.
Intracellular signaling pathways initiated by IL-4, IL-5, and IL-13
Engagement of either IL-4 or IL-13 with their receptor promotes activation of the signal transducer and activator of transcription (STAT6). Type I IL-4R can also activate alternative transcription factors from the insulin response substrate (IRS)-2 family, thereby linking the IL-4R with other downstream signaling pathways that include PI3K/mTORC2, AKT, AHC/MAPK, and Shp-1 [28]. Other pathways implicated in receptor signaling include STAT3 activation via IL-13Rα1 and IRS2 upregulation by Socs1/ubiquitin [29].
Cytokine binding triggers trans-phosphorylation and activation of the IL-13R cytoplasmic tyrosine kinases from the Janus family protein kinases (JAKs), which are recruited to the complex. IL-13 can recruit different combinations of JAKs depending on the tissue location of the receptor. JAK1, JAK2, and JAK3 are associated with IL-4Rα, γc, and IL-13Rα1 chains, respectively. These JAKs are then able to phosphorylate STAT6 via its SH2 domain, the principal transcription factor activated by both IL-4 and IL-13. While STAT6 induces distinct gene expression profiles in different cell types (reviewed in [28]), it activates the majority of the genes induced by IL-13 and is thus responsible for the majority of the pathophysiologic features typical of asthma [29].
Triggering of the IL-4R/STAT6 axis via IL-4 and IL-13 promotes allergic inflammation, and mice with targeted deletion of either IL-4Rα or IL-13Rα1 have provided insight into the functions of the individual receptors [30,31]. IL-4R type I is critical for the generation of type 2 responses, development of alternatively activated macrophages and fibroblast activation, while the type II receptor is essential for development of allergen induced airway hyperreactivity and mucus hypersecretion [29].
Binding of IL-5 initiates the generation of a functional IL-5Rα/βc receptor complex, which in turn initiates a network of transcription factors that consist of JAK1/2-STAT1/3/5 modules, p38, and ERK MAPK, as well as NFκB [32]. The ensuing activation of specific target genes leads to eosinophil maturation, enhanced survival/reduced apoptosis, as well as activation.
Biologics currently available: The challenge of individualized management
Given the observed effects of type 2 cytokines in driving key features of pathology, efforts to develop novel therapies for asthma have been directed toward mitigating the effects of these cytokines [33,34].
Pharmaceutical companies have developed a range of drugs designed to reduce type 2 cytokines by either neutralizing the cytokine itself or binding to the receptor. At present, the licensed indication for all available targeted treatments is only for severe asthma [35], which cannot be controlled despite high-dose inhaled and/or oral corticosteroids. Since most cytokine directed treatments available at present target either IL-5, IL-4, or IL-13, and thus type 2 immunity, predicting which specific therapeutic will be best for which individual patient can be difficult. The choice of treatment for each patient will be determined by clinical and biological biomarker results (blood eosinophils, exhaled nitric oxide, number of exacerbations in the previous 12 months), but if a patient is eligible for more than one cytokine blocking approach, then in the absence of head-to-head trials of efficacy, the pragmatic approach is often determined by physician experience, or the order in which licensing was approved.
Blocking IL-5 in severe asthma
Mepolizumab and reslizumab are both anti-IL-5 antibodies and are indicated as add-on maintenance treatments for severe eosinophilic asthma. Mepolizumab, administered subcutaneously, is licensed in patients aged ≥6 years, while reslizumab, which can only be given intravenously, is only licensed for adults aged ≥18 years. The route of administration means mepolizumab is often the preferred choice clinically.
Phase 3 clinical trials of add-on treatment with mepolizumab or reslizumab have shown reduced exacerbation rates by approximately 50% and improved health-related quality of life in adult patients with severe eosinophilic asthma with recurrent exacerbations. These outcomes were irrespective of the presence or absence of allergen sensitization [36][37][38]. Although different cut-off values for blood eosinophil counts were used in these studies (≥150 cells/mcL at screening or ≥300 cells/mcL in the previous 12 months for mepolizumab, and ≥400 cells/mcL for reslizumab), blood eosinophilia was a better predictor of a therapeutic response to anti-interleukin-5 antibody than exhaled nitric oxide (FeNO) [33,39]. Mepolizumab efficacy was assessed at different doses, but the lack of significant differences in exacerbations at the lowest dose of 75 mg compared to 750 mg resulted in the lowest dose being approved for clinical use. However, impact on reducing blood eosinophils was dose dependent, and a reduction in airway eosinophils was only seen at the highest dose [33].
Benralizumab is a monoclonal antibody that targets the alpha subunit of IL-5R. In a bronchoscopic study, benralizumab reduced eosinophil counts in the airway mucosa and in the airway lumen (sputum) by at least 90% and completely depleted blood eosinophils [40]. Phase 3 clinical trials involving predominantly adults with exacerbation-prone, severe eosinophilic asthma (baseline blood eosinophil counts of ≥300 cells/mcL), have shown addon treatment with benralizumab (administered subcutaneously every 4 or 8 weeks) significantly reduced asthma exacerbation rate and improved prebronchodilator forced expiratory volume in 1 s (FEV 1 ) compared with placebo [41,42]. The attraction of benralizumab is the efficacy of the 8-week dosing regimen, which has been recommended for its licensed use. Further realworld and open-label extension studies have confirmed the efficacy and long-term safety of benralizumab in patients with severe eosinophilic asthma [43,44].
Despite the overall efficacy in large trials, all current therapies that target IL-5 or its receptor rely on the measurement of a single elevated blood eosinophil count to determine eligibility. However, variation of blood eosinophils over time has been documented in patients who were in the placebo arms of the phase 3 clinical trials [45]. Moreover, in children there is little relationship between airway and blood eosinophils, or among the various biomarkers used to define eosinophilic asthma, since no single biomarker truly reflects airway eosinophilia, which is the target of biological therapies [46,47]. It is therefore hard to predict efficacy in the individual patient. A limitation of the therapies that target IL-5 is their relatively low impact on lung function. Other type 2 cytokines have therefore been targeted to try to exert maximal effect on airway hyperresponsiveness and lung function.
Blocking IL-13 in severe asthma
Murine studies of anti-IL-13 therapies very consistently showed an improvement in airway hyperresponsiveness; IL-13 was therefore an attractive target for type 2 asthma. Lebrikizumab, a neutralizing antibody against IL-13 that blocks its interaction with IL-4Rα, was first tested with a primary endpoint of relative change in prebronchodilator FEV1. Prespecified subgroups of patients were defined to try to identify the phenotype that might respond best, according to serum IgE, blood eosinophils, and serum periostin. The primary outcome was positive in the first phase 2 trial, showing greater efficacy in those with higher serum periostin levels [34]. This approach of identifying type 2 high patients using serum periostin or blood eosinophils to determine efficacy was subsequently adopted in larger phase 3 trials. The primary endpoint was the rate of exacerbations, but the final outcome was not positive, even in those with elevated periostin or blood eosinophil levels [48]. The intervention was thus abandoned by the company as a therapy for severe asthma.
Tralokinumab is another IL-13-neutralizing antibody that has been tested in severe adult asthma. An early phase 2b trial of tralokinumab given every 2 weeks improved pre-bronchodilator FEV1, but it did not reduce asthma exacerbation rates in patients with uncontrolled asthma [49]. Tralokinumab was subsequently evaluated in two large multicenter phase 3 clinical trials in patients with moderate-to-severe asthma [50]. In the first, there was no significant decrease in the annual exacerbation rate compared to placebo, although there was a significant exacerbation rate reduction in patients with high FeNO. However, in the second large clinical trial, which prespecified the treatment effect in patients with high FeNO (>37 ppb), there was no longer an improvement in exacerbation rate compared to placebo [50]. Tralokinumab has also failed to show an oral corticosteroid-sparing effect in patients with very severe asthma [43].
The "failed" efficacy of anti-IL-13 antibody therapy in severe asthma has been a huge lesson about the potential discrepancy between molecules that show promise from mechanistic preclinical studies and pathobiology to efficacy in clinical trials [51]. There are several explanations for lack of clinical efficacy, including the heterogeneity of the biomarkers used for severe asthma, need for increased awareness about their longitudinal variation and care before choosing a biomarker measurement at a single time-point to determine eligibility for a particular treatment. In addition, failure to block eosinophils in parallel with IL-13 may not be sufficient to reduce exacerbations, but importantly, preclinical studies show maximal impact of blocking IL-13 on reducing sputum production and goblet cell hyperplasia, so efficacy may be greatest in the clinical phenotype that includes increased mucus (sputum) production [52]. Finally, findings from pre-clinical studies that block molecules before established disease (preventive regimens), rather than after disease manifestation (therapeutic regimen), should not be extrapolated to equate to likely clinical efficacy and such models must be challenged.
Combined blocking of IL-4 and IL-13 via the IL-4Rα
Dupilumab is a monoclonal antibody that blocks both IL-4 and IL-13 signaling by binding the alpha subunit of the IL-4/13R. In the first phase 3 clinical trial of dupilumab involving patients with uncontrolled moderate-to-severe asthma, there was a significant reduction in severe asthma exacerbations, including those leading to emergency-department visits or hospitalization, as compared with placebo [53]. The added attraction of this antibody over those blocking IL-5 is the repeated demonstration of a beneficial effect on symptom control and prebronchodilator FEV1 [54]. Reductions in exacerbations and improvements in lung function were seen among patients with blood eosinophil counts of ≥150 cells/mcL or FeNO ≥ 25 ppb at baseline [53,55]. However, the greatest efficacy was seen in patients with blood eosinophil counts ≥300 cells/mcL, with a 47.7% reduction in exacerbations and a 0.32 L increase in FEV1.
The main adverse effect of dupilumab has been blood hypereosinophilia; the mechanism is uncertain but is likely the IL-4-/IL-13-mediated inhibition of eosinophil migration from blood into tissues. This adverse effect is seen in 2-25% of patients; however, a significant rise of >5000 cells/mcL is seen in <2% of patients, and in the majority the finding is transient, asymptomatic, and does not require discontinuation of treatment [56].
The clinical attraction of dupilumab over IL-5-blocking agents is its beneficial effect on lung function as well as on asthma exacerbations. However, in the UK it can currently only be used after a failed trial of mepolizumab, and its disadvantage is administration every 2 weeks, compared with every 4 weeks for mepolizumab and every 8 weeks for benralizumab.
Blocking TSLP in severe asthma
Tezepelumab, a human monoclonal antibody specific to TSLP, is the first biologic targeting an innate epithelial cytokine that has been approved for the treatment of severe asthma. Tezepelumab is approved for patients aged 12 years and above who fail to achieve asthma control despite treatment with high-dose inhaled corticosteroids. As an epithelial alarmin, TSLP propagates type 2 airway inflammation in response to inhaled harmful environmental factors. More recently, a role for TSLP in mediating the interactions between airway smooth muscle cells and immune cells during inflammation has been proposed [57]. Patients with asthma have higher levels of TSLP than healthy controls, with TSLP levels correlating with disease severity. In the phase 2b (PATHWAY) [58] and phase 3 (NAVIGATOR) [59] trials, tezepelumab, compared to placebo, resulted in significantly fewer exacerbations over a 52-week period (reductions of 71% and 44%, respectively). The NAVIGATOR trial also showed that tezepelumab improved FEV 1 and asthma control, and patients reported improved quality of life compared with those on the placebo arm [59].
Challenges ahead
The challenge is now to define stable and reliable clinical phenotypes and biomarkers that will enable targeted choice of the optimal biologic for the individual patient. This may only be possible with head-to-head comparison trials, but these are unlikely to be funded by the pharmaceutical industry, so there is a need for investment from independent, academic funders. This is the only way to achieve optimal cost-effectiveness and the best outcomes for patients. The other critical question is whether any of these interventions will enable disease modification. There is good evidence that currently available biologics, specifically mepolizumab [60] and benralizumab, enable reduction in steroid doses and significant steroid sparing [61]. However, studies to date have shown that discontinuation of biologics results in loss of asthma control, suggesting continuous treatment is necessary to maintain clinical benefit [60]. Indeed, data from the randomized placebo-controlled study COMET showed that stopping long-term mepolizumab treatment worsened asthma control and increased exacerbations [62]. Unfortunately, pharmaceutical companies will also not want to undertake trials that may show there is no long-term need for their novel therapies, again necessitating independently funded academic trial designs. It is likely that disease modification may only occur by intervening earlier during disease development, making therapies that block IL-4/IL-13 very attractive specifically in childhood asthma, which is predominantly driven by allergen sensitization, eosinophilia, and type 2 immunity. It is important to note that the impact of biologics on features of airway remodeling and the pathological features of asthma has not been investigated. This is relevant to patients of all ages, as airway remodeling, specifically thickening of the reticular basement membrane, is observed in very young children and can impact lung function [63,64].
The difference between treating children and adults
Although the biologics that target IL-5 and the IL-4Rα have been approved for use in children with severe asthma, there has been limited evidence of efficacy in children compared to adults, and licensing has been approved by extrapolation of adult data to children. However, generation of evidence in children is important because the immune system develops postnatally, and it is acknowledged that different immune mechanisms may be at play during early life compared with adulthood [65][66][67]. It is clear that the timing of allergen exposure during this "window of opportunity" influences the nature and magnitude of the immune response to the allergen and impacts the development of pathology. Thus, age is an important consideration when using biologic reagents that influence key immune pathways.
Efficacy of anti-IL-5 biologics in children
Until recently, only 37 adolescents aged 12-18 years had been included in placebo-controlled trials of mepolizumab, which overall had included >1800 subjects. Despite this very small number of adolescents, and no convincing evidence of efficacy in a post hoc analysis of these subjects [68], approval for use in children aged 6-16 years with severe asthma was licensed by both the FDA and EMA. The same biomarkers were approved as used in the adult studies. The need to consider the impact of maturing immunity and the very different ranges of blood eosinophils in healthy children compared with adults [69], and the evidence of a marked airway eosinophilia that is relatively steroid-resistant in children, were highlighted as significant reasons for urgent age-appropriate trials before licensing [70]. Three years after the license was approved, the first pediatric efficacy trial of mepolizumab was published. However, the inclusion criteria were very different from those used in adult studies, and the population was urban, inner city, and predominantly African American or Hispanic. The MUPPITS-2 trial included children aged 6-16 years, and for inclusion they needed only two exacerbations in the previous year with blood eosinophils of ≥150 cells/mcL. In contrast, current prescribing guidance requires at least four exacerbations in the previous year with blood eosinophils of at least 300 cells/mcL, or three exacerbations with eosinophils of ≥400 cells/mcL. Despite the differences in biomarkers and severity of disease, in this exacerbation-prone urban population, mepolizumab resulted in a 27% reduction in exacerbation rate over 52 weeks; the mean exacerbation rate in the mepolizumab group was 0.96 compared with 1.3 in the placebo group [71]. Although this was a significant reduction, it is hard to argue that it was clinically meaningful, and the authors acknowledge the effect was much lower than seen in adult studies. It is hard to know whether the effect was lower because of the urban and minority population, because a lower blood eosinophil cut-off was used, because the number of exacerbations before entry was lower than in the adult studies, or because the role of eosinophils differs between children and adults. However, this highlights the need for specific clinical trials in children. In contrast to adult studies, there was no benefit of mepolizumab on any secondary clinical endpoint, including symptom scores, lung function, or exhaled nitric oxide.
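As a quick arithmetic check (a sketch using the crude rates quoted above; the published 27% is a model-based estimate):

\[ 1 - \frac{0.96}{1.30} \approx 0.26, \]

i.e., roughly a 26% reduction, consistent with the reported figure.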
An interesting finding was that mepolizumab showed the greatest effect during autumn, when viral exacerbations are at their peak. This is consistent with the finding that omalizumab also has a specific pre-seasonal effect [72], with improved antiviral immunity [73] to rhinovirus infection during autumn. Given that all current biologics primarily target type 2 immunity, and that the seasonal peak in viral admissions for children continues, the mechanisms of action of the biologics and the interactions between antiviral immunity and type 2 immunity need specific investigation; pre-seasonal prescribing may be an attractive approach, especially as the long-term effects of blocking type 2 immunity remain unknown.
Efficacy of IL-4 receptor blockade in childhood severe asthma
In contrast to mepolizumab, dupilumab was licensed for children with severe asthma only after data from a pediatric clinical trial were generated.
In children with severe asthma and recurrent asthma exacerbations, a post hoc analysis of the LIBERTY ASTHMA QUEST study suggested that dupilumab, assessed in adults and adolescents, may also be efficacious in children [74]. LIBERTY ASTHMA QUEST was a phase 3, randomized, double-blind, placebo-controlled, parallel-group trial that recruited 1902 patients aged ≥12 years with persistent asthma. Patients on continuous ICS therapy who required one to two additional asthma controller therapies were randomized, and add-on subcutaneous dupilumab every 2 weeks was compared with a matched placebo. The primary endpoints were the rate of severe exacerbations over a 52-week period and the change in FEV 1 [75]. With these data, a placebo-controlled trial was conducted in children aged 6-11 years with uncontrolled moderate-to-severe asthma, who received subcutaneous dupilumab or placebo every 2 weeks for 52 weeks [76]. Two primary efficacy cohorts were included: a group with type 2 inflammation (blood eosinophil count ≥150 cells/μL or exhaled nitric oxide ≥20 ppb), and a group with a blood eosinophil count of >300 cells/μL. In the former efficacy population, the primary endpoint, the annualized asthma exacerbation rate, was significantly reduced, with associated improvements in asthma control (symptom scores) and lung function in children treated with dupilumab compared with placebo. Of note, similar significant improvements with dupilumab were observed in children with an eosinophil count of >300 cells/μL. The safety profile of dupilumab was similar to that of placebo in the study. In contrast to the mepolizumab effect in children, the overall effect of dupilumab in children was much larger.
The annualized rate of severe asthma exacerbations was 0.31 with dupilumab and 0.75 with placebo (relative risk reduction in the dupilumab group, 59.3%) [76]. The effect size on exacerbations, together with an improvement in lung function, and equal effects in those with high or low blood eosinophils, suggests blocking IL-4 and IL-13 via the IL-4 receptor is a more favorable mechanism for children with severe asthma.
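Again as a quick arithmetic check (using the annualized rates quoted above; the published 59.3% is a model-based estimate):

\[ 1 - \frac{0.31}{0.75} \approx 0.59, \]

i.e., roughly a 59% relative risk reduction, in line with the reported figure.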
A post hoc analysis has compared the efficacy of dupilumab in subjects aged 12 years and older with "allergic asthma" to those without "allergic asthma". The allergic asthma phenotype was defined as serum total IgE ≥ 30 IU/mL and sensitization to one or more perennial aeroallergens. Of 1902 patients included in the analysis, 83.3% had eosinophils ≥150 cells/mcL and/or FeNO ≥ 25 ppb, and 56.9% had evidence of allergic asthma. Dupilumab significantly reduced the rate of severe asthma exacerbations in patients with (48.8%) and without (64.0%) evidence of allergic asthma and improved lung function, irrespective of whether they showed evidence of an allergic asthma phenotype.
Given that dupilumab shows efficacy in children with eosinophils >150 cells/mcL, with equal efficacy regardless of allergen sensitization, its use is not as limited as is currently the case for the anti-IgE monoclonal antibody omalizumab; this suggests that dupilumab may be the biologic with potentially the greatest efficacy in children. However, there is now an urgent need for head-to-head trials that directly compare the efficacy of the different biologics currently licensed, both to establish the optimal biomarkers and predictors of response and to understand the mechanisms of action, thereby enabling the correct choice for the individual child. There remains a group of nonresponders to all current biologics, so identifying markers of response and nonresponse is key if we are to progress. A summary of the efficacy of the currently available biologics for treating adult and pediatric severe asthma is provided in Table 1.
Summary
The range of treatments for severe asthma has increased over the last few years with the introduction of biological agents that suppress the activity of type 2 cytokines. While clinical trials and real-world evidence show that they effectively reduce exacerbations and, in the case of anti-IL-5/R agents, suppress eosinophilia, key questions remain unanswered. Given the data from multiple animal models of allergic disease showing that suppression of type 2 cytokine activity ameliorates allergen-driven lung dysfunction, it is not clear why this does not seem to be the case in patients. These differences likely reflect the fact that mouse models exhibit dominant type 2 immunopathology and do not capture the real-world complexity of the disease in humans. IL-13 in particular has been shown to promote airway hyperreactivity in mice, yet its suppression in patients via a variety of different biologics does not affect lung function [51]. Similarly, the effect on remodeling pathways has not yet been described, although there is some indication that benralizumab reduces airway smooth muscle mass [77]. It may be necessary to use agents that target more than one cytokine. This strategy has been employed with a novel IL-4Rα/IL-5-bispecific antibody that targeted multiple type 2 cytokines. Simultaneous blockade of IL-4, IL-5, and IL-13 ameliorated goblet cell hyperplasia and reduced lung dysfunction [78]. Similarly, vaccination with kinoids dually targeting IL-4 and IL-13 reduced type 2 inflammation in HDM-exposed mice [79]. To date, the longer-term consequences of suppressing type 2 immunity have not been determined. It is accepted that many of the cells and molecules involved in type 2 immune reactions are also involved in key homeostatic pathways such as energy regulation, thermoregulation, and tissue repair [80], so it is important to understand how type 2 biologics affect these functions. This is particularly important when considering these agents as treatments for childhood asthma. Going forward, it will be important to determine how type 2 biologics influence the full range of asthma characteristics and how suppression of type 2 immunity affects immune function over longer time frames.
Conflict of interest:
The authors declare no commercial or financial conflict of interest.
Data availability statement: Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Improving contractor social networking on IBS infrastructure maintenance projects: a review
Purpose – A key factor adversely affecting contractor social networking performance is the improper handling and information management of the contractor's service delivery on websites. Contractor social networking is particularly problematic on industrialised building system (IBS) infrastructure maintenance projects, where contractors' certified quality products and firms are not matched with maintenance specialisation services. The paper aims to discuss this issue. Design/methodology/approach – This paper reports on the early stages of research that is developing a new information and communications technology (ICT)-based approach to managing contractor social networking on IBS infrastructure maintenance schemes. As a precursor to this work, the paper reviews current contractor social networking website practices on IBS infrastructure maintenance projects and explores the ICT tools and techniques currently being employed on such projects. Findings – The findings reveal the need for more sophisticated contractor social networking website solutions which accord with the needs of IBS infrastructure maintenance schemes. Originality/value – The paper concludes by presenting a research framework for developing such a system in the future.
1. Introduction
"The age of technology and the communication revolution have provided the construction industry with new opportunities to advance contracting and grow business. It's not surprising that companies of any size present their new services on the social media today" (Clark, 2014). Many construction companies, such as industrialised building system (IBS) infrastructure maintenance contractors, aim to present their certified quality products and firms as valuably as possible in order to meet a client's needs for the services offered. IBS infrastructure maintenance involves critical and unique prefabricated components and on-site installation using techniques, products, components or infrastructure systems from the structural classification (e.g. bridges, tunnels and dams), and therefore demands careful contractor networking attention to specialisation service delivery information on websites. There are many factors affecting contractor social networking performance on such projects. Kostora (2015), Ruan et al. (2012), Larsen (2011) and Chowdhury et al. (2011) suggested that the main reasons for poor contractor social networking performance on websites were faulty information exchange, weak working relationships, overly general descriptions of maintenance knowledge exchanged in a tweet, neglect of contractual relationships, and limited use of task dependencies in maintenance activities to manage the project process, all of which easily lead to client or participant misinterpretation of an online engagement. Online engagement is commonly defined as the interaction between construction team members across a range of contractor social networks. Creative (2015) also suggested that inaccurate content production, including content strategy, was a major cause of miscommunication. Thus, it would seem that poor communication content is a major cause of poor contractor social networking performance in IBS infrastructure maintenance projects. Conflicts of interest can emerge from miscommunication of service-based specifications and affect client satisfaction (Glick and Guggemos, 2009). In addition, clients' difficulty in identifying a specific contractor from generic communication contributes to ineffective marketing approaches, such as trust and relationship development, when the contractor's social network service delivery has not been refined accordingly (Blismas and Wakefield, 2009).
In order to make contractor social networking on websites effective for IBS infrastructure maintenance projects, there should be complementarity between the contractor and the client in implementing an integrated approach to social engagement (e.g. social network analysis (SNA)) to facilitate new communication platforms that best serve IBS infrastructure maintenance projects in the future. This paper reviews current contractor social networking website practices on IBS infrastructure maintenance projects and explores the information and communications technology (ICT) tools and techniques implemented. It starts with a review of current contractor social networking practices in infrastructure maintenance projects and their common problems. The ways in which contractor social networking is managed on IBS infrastructure maintenance projects are then discussed and areas for improvement are highlighted. The paper concludes with a discussion of the findings, outlining the features of a research framework for more sophisticated contractor social networking website solutions which accord with the needs of IBS infrastructure maintenance schemes.
2. Research methodology
This research aims to develop a system to improve the contractor social networking websites used in social networking processes on IBS infrastructure maintenance projects. The contractor social networking processes are to identify avenues for team structure optimisation and to assess and plan IBS infrastructure maintenance implementation. The new system is expected to integrate the contractor social networking processes and an information database to provide efficient mechanisms for contractor social networking websites on IBS infrastructure maintenance projects. A literature review was conducted at the beginning of the research to examine and analyse contractor social networking practices and the integration of ICT into the contractor's service delivery on websites. Second, multiple case studies were undertaken to review the current practice of contractor social networking and ICT implementation in real situations. The causal explanation of the problems was also investigated in order to establish the requirements for particular social networking processes to be transformed into system approaches for managing maintenance of IBS infrastructure. The next stage involved building a process prototype model for in-depth understanding of the conventional processes and to propose the particular process flow to be transformed into the ICT-based system.
The main contribution of this paper is to analyse the contents of existing bridge resilience databases on contractor social networking and to provide an approach for future courses of action to optimise their potential applications by contractors. First, an analysis of the contractor social networking databases in infrastructure maintenance projects was completed based on the reference literature. A revision of the scope and characteristics of existing databases in different processes, presenting some of the challenges in this field, is presented in the sections "Identification", "Assessing" and "Planning". Next, a critical and comparative analysis of them was undertaken, going from simple databases to diagnosis support platforms. The corresponding results are included in the section "Improving contractor social networking in IBS infrastructure maintenance projects". Finally, a proposal for the development of a knowledge-based identification, assessing and planning app tool for IBS infrastructure maintenance projects was designed and is presented in the section "Need for improvement" of this paper.
3. Contractor social networking in infrastructure maintenance projects
SNA is an important analytical tool in contractor social networking research, providing indications of knowledge integration, collaborative working and effective communication in infrastructure maintenance projects (Loosemore, 1998; Chinowsky et al., 2008; El-Sheikha and Pryke, 2010; Larsen, 2011). Cross and Prusak (2002) and Cohen et al. (2001) defined SNA functions as including providing a method to understand informal networks within and between organisations, managing the informal networks systematically, supporting collaboration, commitment, ready access to knowledge and talent, and coherent organisational behaviour. Less effective SNA, such as improper handling and information management of the contractor's service delivery on websites during the organisation and coordination of infrastructure maintenance projects, will undermine trust, project progress and service quality. Kostora (2015) stated that posts for the document and knowledge sections of information management may range from 15 to 85 per cent of a project's total organisation and coordination posts. In addition, Ireland (2010) indicated that almost 91 per cent of the total project planning posts of any industrial organisation consist of project organisation and coordination posts. Therefore, there is a need for efficient contractor social networking in order to control information technology post productivity in infrastructure maintenance projects.
There are many issues which contribute to poor contractor social networking in infrastructure maintenance projects. Styhre and Gluch (2010) suggested that inefficient use of information in the decision process, lack of proper work coordination, inappropriate service delivery and poor integration of inspection and maintenance data all adversely affect contractor social networking. In addition, the common issues in relation to contractor social networking are as follows:
• improper detail working of ties and interactions (Ruan et al., 2013);
• deficient quality control (Ruan et al., 2012);
• lack of consistency and completeness of infrastructure data (Wambeke et al., 2011);
• lack of experienced workers (Wambeke et al., 2011);
• insufficient information about maintenance, repair and renewal planning (Styhre and Gluch, 2010);
• insufficient information among clients and contractors about IBSs on infrastructure (Chinowsky et al., 2010);
• insufficient coordination between client and contractor organisations (Park et al., 2011); and
• communication issues among team players (El-Sheikha and Pryke, 2010).
The processes involved in contractor social networking on infrastructure maintenance projects consist of three stages: identification, assessing and planning. A good contractor social networking environment enables appropriate monitoring of infrastructure components on on-site or off-site maintenance projects. Each stage is clarified in detail below in order to better understand the actual contractor social networking practices in most infrastructure maintenance projects in Malaysia.
Identification
According to the definition of Ehrlich and Chang (2006), the process of identifying social network approaches, or the social relationships survey, is "the informal networks within and between organisations of the construction and services of an infrastructure in sufficient collaboration to enable a contractor to advise what impact the condition and the circumstances of that project infrastructure performance will have upon the client based on trust and mutual support". This is an essential scheme and requirement for successful communication and interdisciplinary interaction, as well as for making appropriate recommendations for optimising team structure. Malisiovas and Song (2014) described social network identification as a measurement analysis that collects information about the density, centrality, betweenness, geodesic distance, average shortest path and modularity of the network structures for a defined purpose, providing detailed information comprising information diffusion effects and lists of communication and information-sharing problems that may be required for a social relationships survey. Hu and Racherla (2008) and De Nooy et al. (2005) also mentioned that the identification process includes investigating the conditions of social structures by analysing the interactions and interrelationships of a set of actors to support collaboration (e.g. designer and contractor) or problem solving for the completion and quality of social networks. The correct identification of industrial network issues at the interpersonal level in specific conditions covers identifying the principal SNA methods related to individuals, organisations and knowledge diffusion in the infrastructure maintenance research domain (Park et al., 2011). In addition, the social relationships survey is undertaken with the help of project managers and construction managers to improve the quality of international project planning and the capabilities of a firm (e.g. interfirm relationships), including establishing the extent to which this requirement for proper collaborative work is being met (Park and Han, 2012). Therefore, a sound identification process for the condition of a social network, as the measurement marker for high-performance teams, will guide all the subsequent quantitative information on the complex and interactive processes placed onto the social network, keeping the network performing well, especially for infrastructure maintenance projects. It also enables the impact of an identified social network on the absence of knowledge sharing across an organisation to be addressed for any signs of abnormal collaboration in the team structure, such as recurrent communication problems affecting decision-making teams, organisational disintegration and high fragmentation of project networks. Vechan et al. (2014) mentioned that the main purpose of social network identification is to inspect and prevent disorganisation as part of the communication process, as well as to share information and knowledge from drawings or documents, such as checklists and punch lists, to facilitate infrastructure maintenance measures and rectification actions in future projects. In addition, the consideration of social network identification methods in the IBS maintenance industry has important implications for the development of an effective project management strategy at the firm level, particularly in terms of refining negative interactions in robust project network designs between domestic and global projects, and hence lengthening project effectiveness and collaborative ventures between various overseas companies (Park et al., 2011). According to Wood (2012), IBS infrastructure could permit the reduction or elimination of information-sharing problems, ineffective communication and poor decision making for maintenance projects to meet long-term networking service needs when the social network identification method is put into operation. The requirement for the social relationships survey is to reduce defective mutual interactions and increase the profit and quality of infrastructure maintenance projects. Therefore, it is essential for the contractor social networking process to consider efficiency in social network identification to ensure acceptable quality and productivity, resulting in better service that responds to project network or communication issues without delay.
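To make the SNA measures named above concrete, the following is a minimal sketch in Python using the networkx library; the actors and ties are hypothetical illustrative data, not drawn from any real maintenance project.

```python
# Minimal sketch of the SNA measures discussed above (density, degree
# centrality, betweenness, average geodesic distance, modularity) using
# the networkx library. The actors and ties are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical communication ties in a small maintenance project team
G = nx.Graph([
    ("client", "contractor"), ("contractor", "site_manager"),
    ("contractor", "inspector"), ("site_manager", "inspector"),
    ("site_manager", "subcontractor"), ("inspector", "subcontractor"),
])

print("density:", nx.density(G))                       # tie saturation
print("degree centrality:", nx.degree_centrality(G))   # direct ties per actor
print("betweenness:", nx.betweenness_centrality(G))    # information brokers
print("avg geodesic distance:", nx.average_shortest_path_length(G))

communities = greedy_modularity_communities(G)
print("modularity:", modularity(G, communities))       # community structure strength
```

In practice, the edge list would be built from the relationships captured in the social relationships survey (e.g. who communicates with whom on the project), and the resulting measures would flag brokers and fragmentation in the team structure.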
Assessing
Social network assessment is concerned with analysing the likely future characteristics and performance of an existing participant relationship (e.g. possible project design, architecture team selection, costing options) (Malisiovas and Song, 2014). It is also known as social network diagnosis or special characteristic diagnosis, and is a tool to cope with the problems of ambiguity and uncertainty in decisions in collaboration networks, and hence to support the firm's performance on maintenance projects. The primary assessment encompasses a wide range of mechanics and dynamics categories, including communication, information exchange, knowledge exchange, experience, reliance, trust and values, which are used as part of an overall model of social network assessment for organisations in determining recommended actions for enhancing high performance and future collaboration management. The objective of social network assessment in infrastructure maintenance projects is to provide quality business network strategies and to forecast the collaboration function for future planning in the infrastructure maintenance programme. Besides this, the assessment category can be used as an innovative and transformative tool for the appropriate network relationships required in regular project teams (Chinowsky et al., 2008; Vechan et al., 2014). Mutis and Issa (2011) stated that assessment is about the systematic coordination of the project (the priority CSN method) on infrastructure maintenance, considering data content on the critical state of infrastructure maintenance to decide a priority ranking for communication or coordination of features and service elements in terms of collaborative and decision performance (e.g. the effect of the social human network and the interaction level of multiple actors).
Poor social network assessment of infrastructure maintenance projects can lead to many difficulties that end in poor project performance and waste, such as project delays, unacceptable quality and higher cost. A failure in the social network assessment process, as listed by various researchers, can result in:
• lack of integration between project teams (Lin, 2015);
• poor quality of orders and information transmission (Lin, 2015);
• lack of information and knowledge integration (Chinowsky et al., 2008; Liu et al., 2015);
• improper order management of the network;
• defects in the completed building;
• insufficient networking about the site organisation and its management organisation (Niknam and Karshenas, 2014); and
• no standard format or guidelines for the social network assessment (Niknam and Karshenas, 2014).
In order to avoid failure, a typical assessment procedure is required so that it is possible to predict the social network strategies needed for existing building structures and to assist decision making in future maintenance planning (Niknam and Karshenas, 2014). Keung and Shen (2013) stated that there are four main processes involved in social network assessment, as illustrated in Figure 1. The social network assessment begins with categorising the parameters in the formation of the measurement model to determine the level of information exchange between project members, followed by the selection of the project communication system needed for team communication to engage in network activities and to work together effectively to accomplish tasks. The next step requires knowledge sharing for collaboration and a corporate culture promoting networking, in order to develop a joint competitive advantage and form effective relationships between a company and its constituents, respectively, and ends with measuring the competence of human resources in terms of learning capability in intra- and inter-organisational settings based on training and experience.
Achieving contractor social networking on infrastructure maintenance projects from an accurate assessment of interfirm networking conditions and the causes of impaired social interaction, at the level of expertise demanded by a hypercompetitive business environment and with specialised approaches and methods, is a challenge for many contractor companies (Keung and Shen, 2013). Therefore, social network assessment is needed to develop a set of criteria in order to implement a more intricate strategy capable of assessing organisation and coordination failures on infrastructure maintenance projects, using efficiency ratings of the need for contractor social networking based on weights of indicators. In determining strategic social network assessment alternatives, as aforementioned, the overall performance of the organisation and coordination assessments must be used by considering information exchange between project members, the project communication system, knowledge sharing for collaboration, a corporate culture promoting networking, and learning capability in intra- and inter-organisational settings for the evaluation. An extended implementation of social network assessment must also follow the standard assessment process requirements when selecting effective contractor social networking for infrastructure maintenance projects. A minimal sketch of the weights-of-indicators rating idea is shown below.
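In the sketch, the five indicator names follow the assessment categories summarised from Keung and Shen (2013) above, but the weights and scores are illustrative assumptions, not published values.

```python
# Hypothetical weighted-indicator rating for social network assessment.
# Indicator names follow the five categories above; the weights and the
# 1-5 scores are illustrative assumptions, not values from the literature.
weights = {
    "information_exchange": 0.25,
    "communication_system": 0.20,
    "knowledge_sharing":    0.20,
    "corporate_culture":    0.15,
    "learning_capability":  0.20,
}

def network_rating(scores):
    """Weighted average of indicator scores, each on a 1-5 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * scores[k] for k in weights)

# Example assessment of one contractor organisation
scores = {
    "information_exchange": 4.0,
    "communication_system": 3.0,
    "knowledge_sharing":    3.5,
    "corporate_culture":    4.5,
    "learning_capability":  3.0,
}
print(f"overall rating: {network_rating(scores):.2f} / 5")  # 3.58 / 5 here
```

Such a score could then be banded into the efficiency-rating levels an assessor uses to flag organisation and coordination failures, with the weights calibrated to the project context.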
Planning
Ricardo et al. (2014) define effective social network planning as using the appropriate method to identify critical nodes (e.g. individuals, groups, teams or entire organisations) and relationships (e.g. their nature or direction) in the project stages of infrastructure maintenance management and the production process, including design, manufacturing, transportation and installation. This involves all decisions, at all levels of team member network centrality, related to maintaining a high level of availability, reliability and value of the infrastructure and its components, and its ability to perform to a standard level of coordination (Dogan et al., 2012). Therefore, social network planning provides measurement to ensure that infrastructure performance is addressed in advance, and various parts of the organisation are required to guide the process of developing comprehensive contractor social networking programmes and recommendations (Sepasgozar and Bernold, 2012). Planning of the social network comprises predetermined tasks at two different decision levels: strategic or tactical. Decisions at the strategic level concern the long-term vision of the organisation as corporate business planning, while tactical decisions set the way of doing business within a present strategy, achieving long-, medium- and short-term goals and targets (Ozorhon et al., 2007; Bakht and El-Diraby, 2015). The importance of social network planning on infrastructure maintenance projects is highlighted by the fact that such decisions are strategic, expensive and critical. Estimated costs for repair and maintenance may range from 16 to 50 per cent (Lee, 1996; CIDB, 2007) or from 5 to 10 per cent of total construction costs, depending on the type of infrastructure (Ali, 2009). Because complex and critical infrastructure construction frequencies are rising rapidly, planned social network strategy considerations are required to enhance the efficiency of the characteristics that influence maintenance project performance. The selection of a planned social network strategy to optimise infrastructure maintenance performance becomes an important decision in the infrastructure industry, as it can enhance the cost and risk management process, provide highly efficient utilisation of project team resources, increase collaboration value and organisational communication, and improve quality and durability (Son et al., 2010; Park and Han, 2012).
A social network plan requires micro-management of the costs and risks of inspections, repairs and replacements of components to determine strategically optimal contractor social networking actions for an infrastructure system. Micro-management is the organisation and coordination style in which managers are heavily engaged in the daily affairs and specific tasks of contractors, whereas the opposite gives a degree of autonomy to contractors. Owing to the frequency of repair and maintenance programmes, there are quality considerations when designing a social network planning system (Du and El-Gafy, 2015). The strategic performance of social network planning selection is an important function at the technical level in the design of a social network planning system, in order to enhance production control, provide social impact and build relationships among construction teams that can help maximise the infrastructure system's reliability-related functions (e.g. availability) and improve the accuracy and reliability of maintenance planning for deteriorating infrastructures (Lin et al., 2015). In addition, maintenance strategy decision making is becoming an extremely important part of social network planning for constructed infrastructures, with several benefits (Priven and Sacks, 2015), such as:
• assessing contractor performance and working relationships, and determining whether a correlation between the social network and communication and project workflow quality needs to be established;
• determining the key social network planning measures (e.g. probability of failure of a network and intensity of communication); and
• organising social network planning in the maintenance scheme (e.g. communication channel classification and network importance).
Social network planning must be managed strategically, both in terms of performance and of maintenance conflicts of infrastructures, during the whole life cycle (design, construction, use and maintenance). Most studies have found that treating social network planning as irrelevant leads to downtime and loss of infrastructure and market opportunities, which account for a high proportion of costs and performance deterioration (Ekwo, 2013; Albert and Hallowell, 2014). Therefore, planning and implementing social network strategies that reduce infrastructure conflict and performance degradation, for continuous improvement in maintenance performance, can often deliver social network planning system reliability and maintainability.
4. IBS infrastructure maintenance projects
In most IBS infrastructures in Malaysia, there is a lack of measurement tools available to contractors for the social networking process to enable them to understand the level of assessment of contractor social networking website practices. The social networking processes involve the identification, assessing and planning of management for infrastructure structures and facilities. The existing literature has revealed crucial problems in bridge resilience for IBS infrastructures under the Maintenance and Development Unit (UPS) (Venkittaraman and Banerjee, 2014). The maintenance management issues have evolved to the point that they affect maintenance efficiency, including inadequate technical knowledge, shoddy workmanship, improper assembly of components and poor-quality IBS products (Zhou et al., 2010). However, staff approaches to addressing the problems include proper prioritisation of bridge maintenance, improving standard inspection with detailed specification data on building defects, and allocating applicable financial contingency for the maintenance control of problems such as leaking concrete and structural cracks.
Nevertheless, the tools are not being used to overcome the lack of comprehensiveness of the app-integrated process, as they are mostly used to record and exchange information related to the maintenance components within IBS infrastructure. There has been inadequate use of modern ICT tools, such as a framework defining metrics or indicators that satisfactorily characterise Malaysia's bridge infrastructure, which could provide real-time diagnostic information for IBS infrastructure maintenance (Karamlou and Bocchini, 2014). The apps mostly used on IBS infrastructure provide no proper framework that can serve as an integrated-process app for a collaborative data environment covering bridge resilience, such as detailed specification, structural risk and condition, that could help maintenance management staff to execute defect rectification effectively (Deco et al., 2013). In addition, a transformation to a new app framework, involving an initial determination of the context of the resilience assessment followed by a detailed assessment of resilience measures that combine to generate a resilience score ranging from 4 (very high resilience) to 1 (low resilience), is widely suggested to help users at the IBS infrastructure assess defects efficiently, including defect control using ICT.
5. Contractor social networking practices in IBS infrastructure maintenance projects
With the maturing of contractor social networking tools and platforms, an increasing number of smart mobile device application (app) services are being adopted in various industry functions, including real estate, waste management, transportation, supply chain and maintenance management (Sattineni and Schmidt, 2015). Nevertheless, some industry practitioners still hesitate to adopt this innovative tool. According to the survey conducted by Ekow and Kofi (2016), maintenance participants did not identify much value in using apps until the last few years. However, according to Zheng et al. (2016) and Azhar and Cox (2015), the actual benefits of apps, along with their team networking capabilities, are now understood and utilised to the benefit of all stakeholders of an IBS infrastructure maintenance project, such that contractor social networking can be integrated into the project management and working process and used by most of the AEC sector for maintenance quality control and efficient information utilisation. Currently, in IBS infrastructure maintenance projects worldwide, work has been carried out on the use of apps in manufacturing, component tracking, waste reduction and supply chain management (Son et al., 2012; Davies and Harty, 2013; Kelm et al., 2013; Kim et al., 2013). Apps contain the information needed for particular phases of a building's life cycle (scheduling, analyses, cost evaluation, etc.) and should offer construction new opportunities to improve communication and collaboration between participants through higher interoperability of data. They can provide potential savings (cost and efficiency), and can also be suitable for maintenance and support in the following ways (Joyce, 2011a; Colonna, 2012; Razmerita et al., 2014):
• BIM 360 Field – to plan maintenance work and to perform repairs in facilities using a decision-making process.
• PunchIt/iSite Monitor – to assess the condition of a building carefully and in its entirety through visual inspection, scoring, photos and layout plan tags.
• Tradies App – to facilitate decision making for the maintenance of buildings based on the knowledge base from the database collection.
• Procore for iPhone – for identifying any maintenance or repair items, as well as any imminent hazards, using three separate levels: hands-on, visual and testing.
• OnSite Photo/Onsite Punchlist – to define the different parameters (e.g. main defects and severity of defects) in carrying out defect mapping for building condition.
• Plan Grid – to investigate and classify latent defects, or defects at levels that are difficult to capture, by grid location.
• OnSite Files – for examining both the building's underlying structure and its external shell.
• Aconex Mobile – to minimise total maintenance time and cost in scheduling planned maintenance activities.
• Foreman's Mate – to schedule maintenance activities according to defect performance based on severity ranking.
• Drawvis – to manage spare parts and schedule maintenance services.
Table I shows ten current app systems selected on the basis of their major functions; a comparison across current technologies reveals the same gap among these systems, namely the lack of a defect diagnosis and analysis support function.
6. Improving contractor social networking in IBS infrastructure maintenance projects
Currently, in IBS infrastructure maintenance projects worldwide, work has been carried out on the use of apps in contractor social networking, such as BIM 360 Field, iSite Monitor, Plan Grid and Aconex Mobile. App tools contain the information needed for particular phases of a building's life cycle (scheduling, analyses, cost evaluation, etc.) and should offer construction new opportunities to improve communication and collaboration between participants through higher interoperability of data (Joyce, 2011a, b). They can provide potential savings (cost and efficiency) and can also be suitable for faster information sharing with a larger network of people and organisations (Chen and Kamara, 2011). However, Sattineni and Schmidt (2015) state that the app is yet to become a mainstream communication tool throughout the maintenance industry, and they attribute this to a lack of understanding caused by technical difficulties with the parameters and indicators for measuring resilience. The weakness of apps in the IBS bridge infrastructure maintenance industry is that they are limited by the level of awareness and the utilisation of contractor social networking applications on app devices concerning sustainable development for vulnerable communities. In particular, improvements in the perceived usefulness and perceived ease of use of maintenance-related applications, and their effect on maintenance professionals' intentions to use such applications, offer fruitful avenues for future research. Researchers have stated that proper resolution strategies for contractor social networking are essential during the maintenance period in order to improve its acceptance and utilisation by maintenance professionals (Silvius, 2016). Azhar and Abeln (2014) investigated the criticisms regarding the requirements for effective use of the app device. There were a few strategy options of the app for managing IBS infrastructure maintenance projects related to contractor social networking, including (see Figure 2 and Table II):
• Reasonable level of social media resource literacy on various scales (asset/network/region): the app device requires the contractor to be familiar with the basics of social media platforms. Contractors are required to have basic social media resource skills in order to understand the wide range of land transport systems (bridge, road and rail) for effective use of the app device.
• Sufficient links to broader criticality and risk management approaches: administration is necessary to allow prioritisation of improvements and interventions for the new contractor social networking applications on app devices on the IBS infrastructure, to support daily operations, maintenance and the value of social media resources.
7. Need for improvement
An initial assessment of the tools and techniques currently in use on contractor social networking websites suggests that most of them are under development, with a few being used on a commercial basis. Based on the literature, this research is intended to improve the information management of contractor social networking on IBS infrastructure maintenance projects through the use of the app device. The relevance and capability of the app device to improve the contractor's certified quality products and firms, as well as maintenance specialisation services, have been confirmed and verified through the review in the previous section. The app clearly covers a wide realm of information categories, such as company branding, disseminating project news, information on job hiring and client networking. This will create an environment that facilitates contractor social networking, making it easier and more effective (Anumba and Wang, 2012). However, the more sophisticated contractor social networking website solutions that accord with the needs of future IBS infrastructure maintenance schemes are anticipated to use multiple aspects of communications and collaboration technologies (e.g. marketing, connections, support, education and recruiting), such as the integration of social media platforms that address vulnerabilities and resilience characteristics (Kudos BIM 360 Field and Plan Grid Publons) (refer to Figure 3 and Table II).
There is substantial scope for further research, which can include the following:
(1) Further research on social networking systems is required to enable the app device to be better integrated into IBS infrastructure maintenance projects in the future. The parameters that relate to infrastructure disasters in the adoption of contractor social networking applications on app devices need to be studied.
(2) The applicability of the app device in IBS infrastructure depends on the type of impact of bridge resilience projects. It is developed to support the requirement for improving the overall process of small- or large-scale contractor social networking. App performance should also be maximised, specifically during maintenance, in order to avoid loss of profit for construction companies.
(3) IBS infrastructure maintenance industry contractors could develop a resilience app framework that defines metrics or indicators. Such a framework would satisfactorily characterise Malaysia's infrastructure to achieve the best contractor social networking website solutions in terms of time, budget (cost), quality and productivity.
Lessons and enlightenments
(1) The information management and application of contractor social networking databases should receive high attention. Because of the inadequacy of app tools and the limited understanding of the disaster risk management function mechanism, the design of contractor social networking structures is very important. The application of modern app tools, such as BIM integration, can avoid or alleviate poor service delivery for main infrastructure.
(2) Appropriate contractor social networking should be selected in IBS infrastructure areas for dams, bridges and other important IBS infrastructures. Complex bridge infrastructure should be given priority among these types of resilient IBS infrastructures.
(3) The evaluation of existing apps should be undertaken, especially for contractor social networking applications involving large collaborating construction parties.
(4) The quality of contractor social networking for IBS maintenance infrastructures should be strictly guaranteed.
(5) Deficiencies in communication and collaboration between participants have affected contractor social networking capability in bridge resilience. Future work should be strengthened to guarantee the information flow in case of interruptions.
Conclusion
A brief overview of contractor social networking practices on IBS infrastructure maintenance projects was presented in this paper. The findings from the literature show that researchers felt the functionality of the app device was inappropriate for addressing social networking problems, and that there are some limitations which would need to be addressed in the future. It was identified that the main limitation to the use of the app device on IBS infrastructure maintenance projects was primarily the lack of a measurement tool for bridge resilience. The full potential of social media resources should be realised during communication and collaboration in order to avoid loss of profit for most maintenance contractors. There is also a need for proper training for the effective implementation of the app device on such projects. The development of current social media resources forms part of the basis for an effective framework, which will then be used to support improvements in contractor social networking practices. The next stages of this research will investigate further innovation of app devices for contractor social networking, including the extent and nature of mechanisation of the IBS infrastructure maintenance processes (framework development for disaster-resilient infrastructure), in developing such a device in the future.
Effect of network size on comparing different stock networks
We analyzed complex networks generated by the threshold method in the Korean and Indian stock markets during the non-crisis period of 2004 and the crisis period of 2008, while varying the size of the system. To create the stock network, we randomly selected N stock indices from the market and constructed the network based on cross-correlation among the time series of stock prices. We computed the average shortest path length L and average clustering coefficient C for several ensembles of generated stock networks and found that both metrics are influenced by network size. Since L and C are affected by network size N, a direct comparison of graph measures between stock networks with different numbers of nodes could lead to erroneous conclusions. However, we observed that the dependency of network measures on N is significantly reduced when comparing larger networks with normalized shortest path lengths. Additionally, we discovered that the effect of network size on network measures during the crisis period is almost negligible compared to the non-crisis periods.
Introduction
The complexity of the stock market has been a subject of extensive research in recent decades [1-5]. Graph theory has proven to be a valuable tool for gaining insights into the structural and functional characteristics of stock market networks. However, when comparing the topologies of different networks using graph theory, several significant limitations arise.
One such limitation is that stock market network measures depend on the number of nodes, which means that networks of different sizes may exhibit different measure values. Furthermore, experimental network topologies are often unknown, which can further complicate direct comparisons of network properties. When comparing stock networks of different sizes, the bias caused by network size differences can result in incorrect conclusions.
Wu et al employed graph theory and the vector autoregressive method to study the stock markets of ASEAN5, China, Japan, and South Korea [6]. Their findings indicated that despite government efforts to promote financial market coordination and integration in East and Southeast Asia, these markets were not as robust as they appeared. Nobi et al generated threshold networks, hierarchical networks, and minimum spanning trees from a correlation matrix constructed using 185 individual stock prices [7]. They demonstrated that during crisis periods, the degree distribution of the largest cluster in threshold networks was thicker than during non-crisis times. In the Korean and US stock markets, non-financial companies served as central nodes of the minimal spanning tree during the crisis period, while financial firms occupied the center nodes during non-crisis periods [8]. Nobi et al also investigated changes in correlation and network structure between 2000 and 2012 using 145 stock prices from the Korean stock market and 30 indices from international stocks [9]. Their findings revealed that average correlations of international indices increased over time, while average correlations of local indices decreased, except for significant fluctuations during crises when they surpassed average correlations of global indices.
Several studies have employed graph theory and neural networks to predict stock market prices, demonstrating superior performance to traditional approaches by incorporating structural information into the prediction models [10,11]. Dimitrios et al investigated the linkages among companies on the Greek Stock Exchange from 2007 to 2012 and found that the market was "shallow," meaning that a small number of powerful investors had a significant impact on the volatility of numerous company values [12]. Wen et al constructed a tail dependence network using stock market data from 73 countries, revealing that European markets were more influential than Asian and African markets during both booms and recessions, and that geographically neighboring economies were susceptible to financial risks [13]. Hu et al analyzed Shanghai and Shenzhen A-share stocks using threshold networks, and determined that Chinese stock networks exhibited the properties of a small-world network and followed a power-law distribution [14]. Van Wijk et al employed graph theory to compare brain networks of varying sizes and connectivity densities [15]. Their study aimed to identify network measures that are not influenced by changes in network size and connectivity density. Although there are currently no effective methods to fully account for the size- and connectivity-density-dependent impacts, the authors found that certain measures, such as the normalized path length and non-normalized clustering coefficient, are less sensitive to changes in network size and connectivity density. Some articles have conducted comparative analyses of stock markets of varying sizes [16-18]. Eom et al constructed minimum spanning trees (MSTs) for 463 stock indices from KOSPI and 400 stocks from S&P 500, and then compared the network properties of both MSTs [16]. They concluded that network degree is the most valuable quantity because it can describe network topology and has a close relationship with the market index. Alam et al investigated the stock correlations and their index effects for developed, developing, and emerging markets, using 377 stocks from S&P 500, 165 stocks from KOSPI, and 220 stocks from the Dhaka Stock Exchange [17]. They found that during the global financial crisis, stocks in developed and emerging markets were more strongly correlated than those in developing markets. Balash et al used 194 stocks from IMOEX, 163 stocks from CSI163, and 468 stocks from S&P 500 to conduct network analyses of the Russian, Chinese, and US stock markets [18]. They compared the structural and topological properties of these networks and reported that the topological properties of Russian stocks differ from those of Chinese and US stocks. However, in all of these studies [8,9,14,16-18], the bias induced by the size difference of the networks was overlooked, as most network properties are subject to change with network size. Hence, directly comparing network properties of differently sized networks can lead to inconsistent outcomes.
In this study, we investigate how network measures are affected by network size and propose various methods to overcome the inconsistency associated with comparing stock market networks of different sizes. We also examine how network properties respond to changes in network size during both crisis and non-crisis periods. Specifically, we analyze the network properties of 500 companies on the Korean stock market and 500 companies on the Indian stock market and observe how these properties change with variable network size and market stability. To reduce the dependence of network properties on changing network size, we apply normalization techniques. Our results indicate that normalizing with the range of possible values significantly reduces the dependence of the average shortest path length on network size, while variable network size has little influence on the average clustering coefficient. Therefore, the normalized average shortest path length and the non-normalized average clustering coefficient can be used to compare stock networks of different sizes with minimal bias. We also find that during crises, the impact of network size on network metrics is relatively low compared to non-crisis periods, which can be helpful in identifying critical market timelines. Furthermore, we discover that the network properties of larger networks are not greatly influenced by their size. The primary objective of this research is to investigate various strategies for correcting network measures for their dependence on the number of nodes, enabling comparison of stock networks of different sizes with minimal bias, which has not been explored previously.
The remaining sections of the paper are structured as follows: Section 2 discusses the data sources used in this study to construct networks. In Section 3, we outline the methods used to generate threshold networks from stock correlations, define various network measures, and describe normalization techniques. We then present and discuss our findings in Section 4. Finally, we summarize our study and draw conclusions in Section 5.
Data description
We consider stock market data from two different countries to conduct our study: the Korean Composite Stock Price Indexes (KOSPI) and the National Stock Exchange of India Limited (NSE). The data was collected from Yahoo Finance and Investing (https://finance.yahoo.com/world-indices) [19,20]. The KOSPI is a set of indexes that reflect the overall Korean Stock Exchange and its constituents. The NSE, on the other hand, is the largest financial market in India and a reflection of the Indian Stock Exchange as a whole. We chose 500 indices from KOSPI and 500 indices from NSE during the years 2004 and 2008, with 2008 being the year of the global financial crisis. The purpose behind selecting these years is to examine how well our assessment holds up during times of financial crisis as well as times when there is no crisis.
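As an illustration of this data-collection step, the following sketch pulls daily closing prices for a handful of tickers from Yahoo Finance. The package (yfinance) and the ticker symbols are our own assumptions for demonstration; the authors do not list their actual download pipeline or the 500 constituents used.

```python
import yfinance as yf

# Hypothetical example tickers; three Korean large caps stand in for the
# paper's (unlisted) 500 KOSPI/NSE constituents.
tickers = ["005930.KS", "000660.KS", "005380.KS"]
prices = yf.download(tickers, start="2004-01-01", end="2004-12-31")["Close"]
prices = prices.dropna(how="any")  # keep only days quoted for every stock
print(prices.head())
```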
Network construction and normalization methods
We calculate the log returns of daily closing prices by first taking the natural logarithm of the daily closing prices and then subtracting the previous day's value from the current day's value. Returns essentially represent the profit or loss resulting from an investment within a specific timeframe, and they measure the amount of money gained or lost as a result of investing. If we denote the daily closing price of stock i at time t as p_i(t) and the daily return of stock i at time t as r_i(t), then the log return r_i(t) over period Δt can be written as

$$ r_i(t) = \ln p_i(t) - \ln p_i(t - \Delta t), $$

where i = 1, ..., N and t = 1, ..., T, with N the total number of stocks and T the number of days. Since we are only considering daily returns, we set Δt to one day. We use a one-year time window to divide the daily returns before constructing the network. Next, we calculate the cross-correlation matrix for all the logarithmic returns r_i(t) within a one-year segment using the following formula:

$$ c_{ij} = \frac{\langle r_i r_j \rangle - \langle r_i \rangle \langle r_j \rangle}{\sigma_i \sigma_j}, $$

where c_ij is an element of the cross-correlation matrix, ⟨·⟩ represents the mean, and σ_i represents the standard deviation of stock i.
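A minimal sketch of these two computations, assuming a T × N array of daily closing prices (one column per stock):

```python
import numpy as np

def correlation_matrix(prices: np.ndarray) -> np.ndarray:
    """prices: (T, N) array of daily closing prices, one column per stock."""
    returns = np.diff(np.log(prices), axis=0)  # r_i(t) = ln p_i(t) - ln p_i(t-1)
    # c_ij = (<r_i r_j> - <r_i><r_j>) / (sigma_i sigma_j), i.e. Pearson correlation
    return np.corrcoef(returns, rowvar=False)
```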
After the correlation matrix is formed, a stock network is typically created using a fixed threshold technique, which is a commonly used method in stock market research [9,21-23]. However, there are other approaches to constructing stock networks. For instance, some studies have employed Minimum Spanning Trees (MST) using correlation matrices [24-27], while others have used Planar Maximally Filtered Graphs (PMFG) [28,29]. In a limited number of cases, researchers have utilized machine learning techniques to discover network relationships and then applied thresholds to construct networks [30].
Each element in the correlation matrix corresponds to an edge in the network. For example, the edge e_ij connecting nodes i and j corresponds to the correlation matrix element c_ij. The fixed threshold technique utilizes a threshold correlation value θ, where |θ| ≤ 1, to determine which edges to include in the final network. If the absolute correlation value between two stocks exceeds the threshold, for example, if |c_ij| > θ, an edge is added between those stocks, and the corresponding entry in the adjacency matrix is set to one. We set a threshold value of 0.3 for constructing our networks, which was chosen without any specific reasoning or prior knowledge to ensure that it did not influence our analysis. We also tested other threshold values to determine their impact on our results, but the findings remained stable across all tested threshold values, and we did not observe any significant variation in our outcomes (not shown in the paper). Accordingly, we used a threshold value of 0.3 to generate our results, as it offered a suitable balance between capturing the network's structure and keeping it relatively simple. To eliminate self-connections, we set all diagonal elements of the adjacency matrix to zero.
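The thresholding step itself reduces to a couple of array operations; the sketch below assumes the correlation matrix produced in the previous step:

```python
import numpy as np

def threshold_network(corr: np.ndarray, theta: float = 0.3) -> np.ndarray:
    adj = (np.abs(corr) > theta).astype(int)  # edge wherever |c_ij| > theta
    np.fill_diagonal(adj, 0)                  # zero the diagonal: no self-connections
    return adj
```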
Various network measures that provide insights into the network topology can be determined. We used the small-world index to identify the small-world structure of Korean and Indian stock networks. A small-world network is characterized by the property that the average shortest path lengths between nodes increase slowly as the network size grows. The small-world index can be utilized to determine the small-world property of a network. Small-world networks are distinguished by their path lengths and clustering coefficients [31]. The clustering coefficient indicates the level of interconnectivity among nodes in the immediate neighborhood. A clustering coefficient of C = 1 indicates a fully connected neighborhood, whereas a value close to C = 0 indicates a sparse neighborhood. The average clustering coefficient of a network of size N is calculated as [32]

$$ C = \frac{1}{N} \sum_{i=1}^{N} \frac{\sum_{j,h} a_{ij}\, a_{ih}\, a_{jh}}{k_i (k_i - 1)}, $$

where k_i denotes the degree of node i, and a_ij is the element of the network's adjacency matrix. The average shortest path length is the average number of steps along the shortest pathways between all possible pairs of nodes in a network. It is defined as [33]

$$ L = \frac{1}{N(N-1)} \sum_{i \neq j} d(V_i, V_j), $$

where d(V_i, V_j) denotes the shortest path length between nodes V_i and V_j.
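Assuming the networkx package, both measures can be computed directly from the adjacency matrix. Because threshold networks can be disconnected, the sketch computes L on the largest connected component; this convention is our assumption, as the paper does not state how detached nodes are handled:

```python
import networkx as nx
import numpy as np

def network_measures(adj: np.ndarray):
    g = nx.from_numpy_array(adj)
    c = nx.average_clustering(g)                # average clustering coefficient C
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    l = nx.average_shortest_path_length(giant)  # average shortest path length L
    return l, c
```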
To reduce the sensitivity of network properties to the size of the network (N), different normalization techniques are employed. In some studies, graph measurements are normalized by comparing them to those of random networks with the same number of nodes and edges [34,35]. In this approach, the average path length (L) and the average clustering coefficient (C) of the stock network are normalized by dividing them by the corresponding values of a random network. We used Erdos-Renyi random networks for normalization of stock network measures. The normalization process can be expressed mathematically as

$$ \lambda = \frac{L}{L_{rand}}, \qquad \gamma = \frac{C}{C_{rand}}. \tag{5} $$

The stock network exhibits properties that lie between those of lattice and random networks, and thus possesses small-world properties. Consequently, the path length and clustering coefficient of a stock network will fall between those of a lattice and a random network [36]. Therefore, these measures can be normalized by considering the range of possible values between lattice and random networks using the following equation:

$$ L_{norm} = \frac{L - L_{rand}}{L_{latt} - L_{rand}}, \qquad C_{norm} = \frac{C - C_{rand}}{C_{latt} - C_{rand}}. \tag{6} $$
In this context, the lattice network is created using a regular ring lattice, while the random network is formed using an Erdos-Renyi random graph. After normalization, the small-world index can be defined using the clustering coefficient and the path length as

$$ \sigma = \frac{C / C_{rand}}{L / L_{rand}}. \tag{7} $$

If the value of the small-world index is greater than 1, then the network is considered a small-world network [31,37]. We find different values of σ for various sized Korean stock networks; on average, σ = 14.95 during the non-crisis period of 2004 and σ = 1.55 during the crisis period of 2008. In the case of Indian stocks, on average, σ = 2.07 during the non-crisis period of 2004 and σ = 1.21 during the crisis period of 2008. This means that during the non-crisis period, both Korean and Indian stock networks have a much more pronounced small-world structure, while during the crisis period the networks have a looser small-world structure, as reflected by the lower small-world index during the crisis.
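The sketch below combines the two normalizations (Eqs 5 and 6) and the small-world index, generating Erdos-Renyi and ring-lattice reference graphs matched in size; the number of random realizations and the use of nx.watts_strogatz_graph with p = 0 as the regular ring lattice are our assumptions:

```python
import networkx as nx
import numpy as np

def measures(g):
    # L on the largest connected component (same convention as above), C on g
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant), nx.average_clustering(g)

def small_world_stats(g, n_ref=20, seed=0):
    rng = np.random.default_rng(seed)
    n, m = g.number_of_nodes(), g.number_of_edges()
    k = max(2, round(2 * m / n))  # mean degree for the ring lattice reference
    k += k % 2                    # watts_strogatz_graph pairs neighbors, keep k even
    l, c = measures(g)

    # Eq 5 references: Erdos-Renyi graphs with the same n and m, averaged
    ref = [measures(nx.gnm_random_graph(n, m, seed=int(rng.integers(10**9))))
           for _ in range(n_ref)]
    l_rand = float(np.mean([r[0] for r in ref]))
    c_rand = float(np.mean([r[1] for r in ref]))

    # Eq 6 reference: regular ring lattice with (roughly) the same mean degree
    l_latt, c_latt = measures(nx.watts_strogatz_graph(n, k, p=0.0))

    sigma = (c / c_rand) / (l / l_rand)        # small-world index (Eq 7)
    l_norm = (l - l_rand) / (l_latt - l_rand)  # range normalization (Eq 6)
    c_norm = (c - c_rand) / (c_latt - c_rand)
    return sigma, (l / l_rand, c / c_rand), (l_norm, c_norm)
```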
Network structure
The complex network presented in this study is generated using the correlation matrix of returns from the Korean and Indian stock markets. The threshold method is applied to obtain the network structure. Fig 1 illustrates two stock networks, where each node represents a stock, and an edge connecting two nodes signifies a significant correlation between them. In Fig 1(A), the network is generated using the threshold method on 300 randomly selected stock indices from the Korean stock market in 2004. The resulting network contains a giant cluster with N_giant = 183, as well as some small, isolated clusters. However, the network is sparse, and many detached nodes are visible in the periphery. Conversely, Fig 1(B) depicts the network structure of the Korean stock market during the global financial crisis in 2008, with the same number of nodes. The network is dense, with only a few detached nodes, and all other nodes are tightly connected, forming a giant cluster with N_giant = 290. This suggests a closely coupled network structure, indicating a highly correlated stock market during times of crisis.

Furthermore, we observe that the average path lengths of stock networks are significantly smaller during the crisis period than during non-crisis periods, and they are more resistant to changes in N. This suggests that networks are more strongly interconnected during the crisis period, and nodes are reachable from each other in fewer steps. We see a similar trend for the average shortest path length of NSE stocks, as depicted in Fig 2(C), but the range of variation is narrower than that of KOSPI stocks. Fig 2(B) illustrates that the average clustering coefficient of KOSPI stocks increases with the number of nodes in both crisis and non-crisis periods. However, the clustering coefficient for the non-crisis period increases more rapidly than it does for the crisis period, wherein changes in the clustering coefficient are minimal and hardly noticeable in the figure. The clustering coefficient during the crisis period is higher than that during the non-crisis period, indicating a greater compactness of clusters during the crisis. The average clustering of NSE stocks exhibits similar characteristics to KOSPI stocks, as illustrated in Fig 2(D), but with a narrower range of variation. Conversely, the average shortest path length shows the opposite trend: when the average shortest path length is short, the network is strongly clustered. Finally, by comparing the scales of path length and clustering coefficient in Fig 2(A)-2(D), we observe that the path length is much more sensitive to changes in network size, while the average clustering coefficient of the stock network changes with a smaller margin, making it more robust and less biased by N.
Normalization by the range of possible values.
In this technique, the network measures are normalized by the range of possible outcomes between a regular ring lattice network and an Erdos-Renyi random graph, as shown in Eq 6. The result after this normalization is depicted in Fig 4. According to Fig 4(A) and 4(C), during the non-crisis period, the normalized average path length of KOSPI and NSE stocks grows with N when N is smaller; however, when N is larger, the dependence on N is significantly reduced, and the slope of the curve becomes zero when N is greater than 200. The dependency of the normalized path length is also reduced for the crisis period, although this reduction is slight and not clearly visible in the figure. Nevertheless, the scale of the y-axis shows that normalizing by the range of possible values substantially reduces the sensitivity of the average path length to N in both crisis and non-crisis periods. On the other hand, the average clustering remains highly sensitive to changes in the number of nodes, as illustrated in Fig 4(B) and 4(D), despite normalization by the range of possible values. The normalized clustering coefficient of KOSPI stocks grows with an increasing number of nodes during the non-crisis period, especially between N = 100 and N = 200, and during the crisis this dependency remains as before. This trend is almost similar, but opposite, to the average clustering normalized by the random network in Fig 3(B). The normalized clustering coefficient of NSE stocks decreases with an increasing number of nodes during the non-crisis period, and during the crisis this dependency remains as before.

In summary, normalizing by the range of possible values reduces the sensitivity of the path length to the variability of N; indeed, it significantly reduces the differences in average path lengths between networks of various sizes, making the path length's sensitivity to N negligible. However, after this normalization, the crisis period is not distinguishable from non-crisis timelines when N is greater than 200, as the path lengths of both periods almost overlap. Furthermore, the normalization makes the clustering coefficient more sensitive to the variability of N, particularly in non-crisis situations, whereas the average clustering coefficient shows no significant size-related differences when no normalization is applied. Moreover, the y-axis scale during the crisis period is much smaller than that during the non-crisis period, indicating less sensitivity of network metrics to N during crises.
Conclusion
We examined how network size affects network measures during both the non-crisis period of 2004 and the global financial crisis of 2008 using the threshold method. Our findings show that during the non-crisis period, the average path length L of a network is highly dependent on the network size N, while less sensitivity to N is observed during the crisis. However, the average clustering coefficient remains relatively constant regardless of the size of the network. During a crisis, we found that average path lengths were shorter and average clustering coefficients were higher than during a non-crisis period, suggesting that network structures become more tightly connected and locally compact during a crisis.
To address the bias introduced by N in comparing network measures, we employed various normalization techniques. When the measures of a stock network are normalized by those of a random network, we observed an increase in the sensitivity of both path length and clustering to N during the non-crisis period but no noticeable changes during the crisis. Normalizing the path length L by the range of possible values reduced its dependence on N substantially, and there was no bias for N greater than 200. However, this normalization technique increased the sensitivity of the clustering coefficient to N.
Our study also found that the N-dependency of network measures differs during crisis and non-crisis periods. When comparing networks with a large number of nodes, the dependence of both normalized and non-normalized network measurements on N is lower. Moreover, during a crisis, the effect of changing N on network measures is significantly lower than during the non-crisis period. To compare stock markets of varying sizes with minimal bias, we recommend using normalized average shortest path lengths and non-normalized clustering coefficients when analyzing networks with a large number of nodes.
While our study has made progress in reducing the dependence of network measures on network size, there are still some residual impacts that we were unable to fully eliminate. Our study focused on a sample of up to 500 companies from developing countries over a one-year time frame using a fixed-threshold approach, which has some limitations. We plan to expand our research to include other countries and to integrate advanced machine learning and other network construction techniques to deepen our understanding of this fascinating subject. It is essential to consider the effects of N-dependent biases in graph measurements when conducting experimental analyses of stock markets.
Fig 1. The network structure of the Korean stock market with 300 nodes (stocks), including (a) the year 2004, a non-crisis period, with 1,781 edges, and (b) the year 2008, during the crisis, with 25,848 edges. A significant difference in the compactness of the network is observed between the crisis and non-crisis periods. To generate the stock network, we randomly selected 300 stocks and applied the threshold method. https://doi.org/10.1371/journal.pone.0288733.g001
Fig 2 illustrates the variability of the average shortest path length and the average clustering coefficient as a function of the number of nodes N for the years 2004 and 2008. In 2008, South Korea and India experienced the global financial crisis, and to calculate the characteristic quantities of the stock network, we averaged over 2,000 configurations. We compared the network properties between the normal period and the crisis period. In Fig 2(A), we observe that for the KOSPI stocks in 2004, the shortest path length L increases significantly with N, especially for N < 200, and it increases slowly for N > 200. Conversely, the path length L for 2008 increases only slightly with the number of nodes, which is barely noticeable in the graph at the given scale. For larger networks, the average path length responds similarly to changes in N for both crisis and non-crisis periods.
Fig 2. Effects of network size on network measures, specifically, (a) the average shortest path length of KOSPI indices, (b) the average clustering of KOSPI indices, (c) the average shortest path length of NSE indices, and (d) the average clustering of NSE indices. To obtain more precise curves, we averaged these results over 2,000 ensembles. https://doi.org/10.1371/journal.pone.0288733.g002
4.3.1 Normalization by random network
Fig 3 displays the normalized shortest path length (L/L_rand) and clustering coefficient (C/C_rand) of KOSPI and NSE stocks compared to Erdos-Renyi random networks. As depicted in Fig 3(A), the normalized shortest path length for KOSPI stocks in 2004 is sensitive to network size, increasing logarithmically with N. In contrast, the non-normalized path length varies on a larger scale during the non-crisis period. However, the normalized shortest path lengths in 2008 show slow growth with increasing N, reducing the distinction between crisis and non-crisis periods. Similar findings are observed for NSE stocks in Fig 3(C). In Fig 3(B) and 3(D), the normalized clustering coefficients for both KOSPI and NSE stocks in 2004 exhibit greater sensitivity to network size during the non-crisis period compared to non-normalized values. This is demonstrated by the large y-axis scale. However, during the crisis, the normalization process has little impact on clustering coefficients, which change similarly to non-normalized values. The average clustering coefficients throughout the non-crisis period are much higher than those of the crisis period after normalization. Additionally, during the 2008 financial crisis, the normalized path lengths and clustering coefficients of stock networks are close to 1, indicating randomness in stock networks during the crisis.
Fig 3. The impact of network size on normalized network measures, based on Erdos-Renyi random graphs, is explored in the following analyses: (a) normalized average shortest path length of KOSPI indices, (b) normalized average clustering of KOSPI indices, (c) normalized average shortest path length of NSE indices, and (d) normalized average clustering of NSE indices. These results were averaged over 2,000 ensembles. https://doi.org/10.1371/journal.pone.0288733.g003
Fig 5 presents a comparison of network measures across networks of various sizes, ranging from 75 to 500 nodes in increments of 25. Each point on the graph represents the difference in measures between a network of a given size and the immediately preceding network in terms of size. For instance, the metrics of a network with 75 nodes are subtracted from those of a network with 100 nodes.
Fig 4. Effects of network size on normalized network measures: (a) normalized average shortest path length and (b) normalized average clustering of KOSPI and NSE indices. Results are averaged over 2,000 ensembles. https://doi.org/10.1371/journal.pone.0288733.g004
Fig 5. Differences of normalized and non-normalized network metrics for different sized networks during crisis and non-crisis periods. The following are the differences observed: (a) in shortest path length and (b) in average clustering for the KOSPI stocks during the non-crisis period of 2004, (c) in shortest path length and (d) in average clustering for the KOSPI stocks during the crisis period of 2008, (e) in shortest path length and (f) in average clustering for the NSE stocks during the non-crisis period of 2004, and (g) in shortest path length and (h) in average clustering for the NSE stocks during the crisis period of 2008. These differences highlight the sensitivity of network metrics to network size and the impact of normalization on reducing this sensitivity. Additionally, they demonstrate how the financial crisis affected the network metrics differently than the non-crisis period. https://doi.org/10.1371/journal.pone.0288733.g005
Long-Term Longitudinal Analysis of Neutralizing Antibody Response to Three Vaccine Doses in a Real-Life Setting of Previously SARS-CoV-2 Infected Healthcare Workers: A Model for Predicting Response to Further Vaccine Doses
We report the time course of neutralizing antibody (NtAb) response, as measured by authentic virus neutralization, in healthcare workers (HCWs) with a mild or asymptomatic SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) infection diagnosed at the onset of the pandemic, with no reinfection throughout and after a three-dose schedule of the BNT162b2 mRNA vaccine with an overall follow-up of almost two years since infection. Forty-eight HCWs (median age 47 years, all immunocompetent) were evaluated: 29 (60.4%) were asymptomatic. NtAb serum was titrated at eight subsequent time points: T1 and T2 were after natural infection, T3 on the day of the first vaccine dose, T4 on the day of the second dose, T5, T6, and T7 were between the second and third dose, and T8 followed the third dose by a median of 34 days. NtAb titers at all postvaccination time points (T4 to T8) were significantly higher than all those at prevaccination time points (T1 to T3). The highest NtAb increase was following the first vaccine dose while subsequent doses did not further boost NtAb titers. However, the third vaccine dose appeared to revive waning immunity. NtAb levels were positively correlated at most time points suggesting an important role for immunogenetics.
First approved in December 2020 [5], COVID-19 vaccines remain the cornerstone of prevention and protection against infection and severe disease. While the initial one- or two-dose schedule, depending on the vaccine, has played a key role in the mitigation of COVID-19 morbidity and mortality, the emergence of viral variants with varying degrees of immune escape led most countries to deploy a third dose or even a fourth dose of vaccine boosters [6-9]. Overall, the interplay between natural infection and vaccination, as well as the role of different vaccine schedules and methods used to quantify the neutralizing antibody (NtAb) response [10-14], have made it difficult to depict the key features and dynamics of immunization to SARS-CoV-2 in a real-life setting.
Here, we report the time course of the NtAb response, as measured by authentic virus neutralization, in healthcare workers (HCWs) with a mild or asymptomatic SARS-CoV-2 infection diagnosed at the onset of the pandemic, with no reinfection throughout and after a three-dose schedule of the BNT162b2 mRNA vaccine, and with an overall follow-up of almost two years since infection.
Study Design
Forty-eight HCWs living in Northern Italy were included in the study. All of them had a laboratory diagnosis of SARS-CoV-2 infection in the Veneto region in March-April 2020 and were tested because of clinical suspicion or in the context of the hospital surveillance program. Symptomatic HCWs were evaluated by an infectious disease specialist and diagnosed with mild disease [15], as defined by the symptoms reported in Figure S1. Based on a hospital nasopharyngeal swab screening surveillance program performed at different intervals (ranging from four to seven days, according to the epidemiological context), no reinfection was detected during the whole study period. After a median interval of 297 days from the diagnosis of their SARS-CoV-2 infection, the HCWs received the first dose of the BNT162b2 vaccine, followed by the second dose after three weeks, and then by the third dose 9-12 months later. Written informed consent was obtained from all the HCWs willing to participate in a prospective study of the virus' NtAb response, which was approved by the Comitato per la Sperimentazione Clinica di Treviso e Belluno (prot 812/2020) and performed in accordance with the ethical standards as laid down in the Declaration of Helsinki. NtAb serum was titrated at eight subsequent time points (T1 to T8) (Figure 1).
Titration of Virus Neutralizing Antibodies
NtAbs to the live B.1 lineage virus (GISAID accession number EPI_ISL_2472896) were titrated in duplicate by testing two-fold serial dilutions of sera, starting at 1/10, with 100 TCID50 of the virus in VERO E6 cells in a 96-well plate. Virus-induced cell death was calculated by automated measurement of cell viability with the CellTiter-Glo 2.0 system in a GloMax Discover luciferase plate reader (Promega, Madison, WI, USA) [16]. The NtAb titer (ID50) was defined as the reciprocal value of the sample dilution that showed 50% protection from the virus-induced cytopathic effect. Each run included an uninfected cell control, an infected cell control, and the virus back titration to confirm the virus inoculum.
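The paper defines the titer only as the reciprocal dilution giving 50% protection, without detailing the numerical read-out-to-titer step. One plausible sketch, assuming percent protection has already been derived from the luminescence values, interpolates linearly on the log2-spaced dilution series; the interpolation scheme and the scoring of negative sera as 5 (taken from the statistical analysis below) are assumptions, not the authors' exact procedure:

```python
import numpy as np

def id50(dilution_factors, percent_protection):
    """Two-fold dilution series (e.g. [10, 20, 40, ...]) and % protection per well."""
    x = np.log2(np.asarray(dilution_factors, dtype=float))
    y = np.asarray(percent_protection, dtype=float)
    if y.max() < 50:
        return 5.0  # no dilution reaches 50% protection: negative, scored as 5
    # find the last dilution still giving >= 50% protection and interpolate
    for i in range(len(y) - 1):
        if y[i] >= 50 > y[i + 1]:
            frac = (y[i] - 50) / (y[i] - y[i + 1])
            return float(2 ** (x[i] + frac * (x[i + 1] - x[i])))
    return float(dilution_factors[-1])  # protection never drops below 50%

print(id50([10, 20, 40, 80, 160], [95, 80, 60, 40, 10]))  # ~56.6, between 40 and 80
```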
Statistical Analysis
Antibody levels at the eight time points were reported as median, 25th, and 75th percentiles, minima (the lowest value detected), and maxima (the highest value detected). Sera with ID50 < 10 were defined as negative and scored as 5 for statistical analysis. Since the NtAb value distribution was strongly right-skewed at all time points, the symmetry of the distribution was ameliorated by log10-transformation. Clinical symptoms were categorized as absent or mild. The data structure was explored by calculating the pairwise correlation structure of the following variables: T1 to T8, age (expressed as years absolute value), gender (male or female), and symptomatic infection. The Spearman and Wilcoxon (null hypothesis of equal medians) rank methods were used, as appropriate.

The effects of the sequential series of time points, age, gender, and symptomatic infection on the antibody levels were assessed by performing a mixed-model linear regression, where the dependent variable was the log10-transformed antibody titer, and the predictors were time points, gender = female, symptomatic infection, and age. All predictors, except age, were categorical. The variable "time points" comprised eight levels. The original antibody titers had a right-skewed distribution at each time point. For this reason, a logarithmic transformation was performed; the ensuing distribution was symmetric and Gaussian (normal) to a reasonable approximation. This permitted the use of the mixed-model linear regression approach. Notably, the logarithmic transformation of antibody titers is a common practice [17].
The use of the mixed-model linear regression where time, the main predictor, was a multinomial categorical variable (time points), instead of a continuous variable, was chosen because we felt that the linear model was too rigid in order to explain the multiphasic evolution of the antibody titer. The mixed-model linear regression is preferable when there are repeated measures in the same subjects. This model is fit to correctly interpret the role of different subjects, obtaining the best available evaluation of the mean titer, and the best confidence intervals, at each time point.
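A hedged sketch of this model using statsmodels is given below; the column names and the random-intercept-per-subject specification are illustrative assumptions rather than the authors' actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_ntab_model(df: pd.DataFrame):
    """df columns assumed: subject, time_point (T1..T8), age, female, symptomatic, id50."""
    df = df.copy()
    df.loc[df["id50"] < 10, "id50"] = 5     # negative sera scored as 5
    df["log_titer"] = np.log10(df["id50"])  # symmetrize the right-skewed titers
    # a random intercept per subject accounts for the repeated measures over T1-T8
    model = smf.mixedlm("log_titer ~ C(time_point) + age + female + symptomatic",
                        data=df, groups=df["subject"])
    return model.fit()
```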
Results
Forty-eight HCWs (31 females and 17 males) were evaluated: the median age was 47 (IQR 40-53) years and 29 (60.4%) were asymptomatic. All HCWs were immunocompetent and comorbidity was present in six of them (four presented with uncomplicated arterial hypertension and two with dyslipidemia).
NtAbs were undetectable in two out of 39 HCWs (5.1%) at T1, six out of 38 (15.8%) at T2, and three out of 31 (9.7%) at T3 but in none of the subjects following vaccination.
Figure 2 shows the log10-transformed antibody levels at the eight time points. One patient (female, 57 years, mild disease) had a left outlier NtAb titer at T4 (1.0 log) and a right outlier at T5 (4.18 log), and two patients (female, 33 years, asymptomatic, and female, 64 years, asymptomatic) had left outliers at T6 (1.8 and 1.9 log, respectively).

NtAb titers from all postvaccination time points (T4 to T8) were significantly higher than those from all prevaccination time points (T1 to T3) (p < 0.0001). The highest increase with respect to the previous time point was detected at T4, i.e., three weeks following the first vaccine dose. None of the subsequent vaccine doses triggered NtAb titers significantly higher than T4; however, the third vaccine dose significantly revived waning NtAb titers (p < 0.0001 for the comparison between T8 and T7).
Pairwise correlations between log10-transformed NtAb titers measured at different time points are shown in Figure S2. NtAb levels were positively correlated at most time points, and symptomatic infection was positively correlated to antibody levels at several time points as well. The coefficients of the mixed-model linear regression are reported in Table 1. Age significantly predicted lower NtAb levels, whereas symptomatic infection significantly predicted higher NtAb levels.
Neutralizing antibody titers in asymptomatic and symptomatic HCWs, as predictive margins after the mixed-model linear regression, are depicted in Figure 3.
Discussion
Most studies assessing the efficacy of the third vaccine dose have included patients with no previous COVID-19 infection or with unknown SARS-CoV-2 infection status, or have focused on patients with immunosuppression [18]. Our real-life analysis aimed to describe the long-term dynamics of NtAb titers in a cohort of HCWs with asymptomatic or mild wild-type SARS-CoV-2 infection followed by a three-dose vaccine schedule and no reinfection up to more than 600 days.
The NtAb titer after a median of 34 days from the third vaccine dose was significantly higher with respect to the preceding available time point (7 months after the second dose and approximately three months before the third dose), implying that a third antigenic stimulation can raise a waning NtAb response in subjects undergoing a complete vaccination cycle following natural infection. However, NtAb levels measured around 3-4 weeks following the first, second, and third vaccine doses did not differ significantly from one another. This suggests that, in subjects previously experiencing mild or asymptomatic infection, subsequent vaccine shots do not boost humoral immunity to higher-than-ever levels but rather refresh the waning immune system to comparable levels. However, in another HCW cohort with a median age of 41 years, Romero-Ibarguengoitia et al. [19] reported a significant increase in IgG titers detected 21-28 days after the third dose with respect to the IgG value at 21-28 days after the second dose. Potentially relevant differences with respect to our study include the use of a commercial anti-spike IgG assay instead of authentic neutralization, the inclusion of a proportion of subjects infected twice, the shorter interval (6 months versus more than nine months) between the second and third vaccine doses, and possibly disease severity (not reported). Omicron spreading caused the ongoing wave of the COVID-19 epidemic: this variant is characterized by more than 30 amino acid substitutions in the S protein, and these changes cover almost all of the key mutations of the Alpha, Beta, Gamma, and Delta VOCs [20]. Since November 2021, when it was identified, Omicron has continuously evolved: BA.2.12.1, BA.4, BA.5, BA.2.9.1, BA.2.13, and BA.2.11 are the Omicron lineages first detected from December 2021 to March 2022 and included in the World Health Organization (WHO) variants of concern lineages under monitoring as of the beginning of June 2022 [21]. This rapidly evolving scenario had a strong impact on public health: the Omicron variants of SARS-CoV-2 have greater transmissibility than the previously identified variants [22,23]. It is important to note that a third dose of vaccine has been shown to partially restore the neutralization activity against the highly divergent Omicron variant in subjects not previously infected by SARS-CoV-2 [24] and in those with a previous infection [25,26]. In addition, a positive impact on antibody response was demonstrated for high preinfection antibody titers [27]. Nevertheless, recent data showed that Omicron variants can evade the humoral immune response following the booster dose with the BNT162b2 vaccine (with a reduction in neutralizing antibody titers ranging from a factor of 6.4 for BA.1 to a factor of 21.0 for BA.4 or BA.5, with respect to the reference WA1/2020 isolate), and the subvariants BA.2.12.1, BA.4, and BA.5 escape neutralizing responses elicited by a previous BA.1 or BA.2 infection [28]. Further, Omicron may evolve mutations to evade the humoral immunity elicited by a BA.1 infection, suggesting that BA.1-derived vaccine boosters may not achieve broad-spectrum protection against new Omicron variants [29].
Goldblatt et al. [30] estimated a protective threshold in vaccine recipients corresponding to 154 binding antibody units/mL to the original viral strain by using a population-based model. Again, the titration method used was an anti-spike IgG binding assay, and it can be difficult to correlate these data with live virus NtAb titers. In our dataset, the lowest median postvaccination NtAb titer was 421 ID50 at 9-10 months before the administration of the third dose (T7), and this titer likely continued to decrease to a minimum just before this third dose recall. However, no reinfection was demonstrated despite the very high transmissibility of the Delta [31] and Omicron variants [32], which were prevalent in the last seven and two months of observation, respectively.
The availability of multiple time point data for each HCW allowed us to describe the dynamics of the NtAb response from immunity to natural infection across multiple vaccine stimulations. Overall, NtAb titers were correlated at the different time points, confirming the result published by Mantus et al. [33], who observed a positive correlation between anti-receptor binding domain (RBD) antibodies, anti-spike IgG, NtAb titers, and RBD+ memory B cells prior to and after vaccination in subjects recovering from infection. The correlation between natural and artificial active immunity suggests an important role for immunogenetics, possibly involving differences in the individual B cell receptor repertoire [34] and HLA haplotype [35].
The strengths of the study include the prospective design encompassing almost two years with a study population regularly monitored to rule out breakthrough infections and the use of an authentic live virus neutralization assay; the main limitations are the low sample size, the incomplete availability of samples at intermediate time points, and no evaluation of the cross-protection against the emerging variants. However, the kinetics of the response to the ancestral virus provides useful information on the durability of the immunological memory and on the ability to respond to further stimuli.
Conclusions
We will continue to study the NtAb response over time after the third dose, and immediately before and after the fourth dose, which is now recommended in Italy for people over sixty and may be recommended for HCWs too. The aim is to evaluate over time the changes in titer against the ancestral virus, to describe titers against Omicron subvariants and, at the same time, to monitor the occurrence of COVID-19 reinfection in this high-risk cohort, in order to describe the characteristics of the humoral response after three or four vaccine stimulations and one or two natural infections.
In addition to being able to pursue new variants, and perhaps responses to new vaccine antigens, we have so far been able to study the net kinetics of the response to repeated antigenic stimuli against the new coronavirus that were temporally and quantitatively well documented.
We have thus documented the effects of up to four stimuli (natural infection and vaccine boosters) over time, demonstrating similar and non-decreasing titers. This is relevant in terms of public health. We encourage scientists to design studies allowing prolonged regular follow-up of specific categories to define how infection and vaccination determine the durability of protective immunity in the context of the ongoing epidemic.
A Statistical Testing Procedure for Validating Class Labels
Motivated by an open problem of validating protein identities in label-free shotgun proteomics work-flows, we present a testing procedure to validate class/protein labels using available measurements across instances/peptides. More generally, we present a solution to the problem of identifying instances that are deemed, based on some distance (or quasi-distance) measure, as outliers relative to the subset of instances assigned to the same class. The proposed procedure is non-parametric and requires no specific distributional assumption on the measured distances. The only assumption underlying the testing procedure is that measured distances between instances within the same class are stochastically smaller than measured distances between instances from different classes. The test is shown to simultaneously control the Type I and Type II error probabilities whilst also controlling the overall error probability of the repeated testing invoked in the validation procedure of initial class labeling. The theoretical results are supplemented with results from an extensive numerical study, simulating a typical setup for labeling validation in proteomics work-flow applications. These results illustrate the applicability and viability of our method. Even with up to 25% of instances mislabeled, our testing procedure maintains a high specificity and greatly reduces the proportion of mislabeled instances.
Introduction
The research presented in this paper is motivated by an open problem in the quantification of proteins in a label-free shotgun proteomics work-flow. More generally, it presents a non-parametric solution to the problem of identifying instances that are outliers relative to the subset of instances assigned to the same class. This serves as a proxy for finding errors in the data set: instances for which the class label is recorded incorrectly, or where the measurements for a particular instance are sufficiently inaccurate as to render them uninformative.
In label-free shotgun proteomics, the experimental units of interest (proteins) are not measured directly but are represented by measurements on one to 500+ enzymatically cleaved pieces known as peptides. The amino acid sequences composing each peptide are not known a priori, but are inferred based on algorithmic procedures acting on spectrum data from the mass spectrometer. By removing inaccurate peptides (instances) from each protein (class), subsequent quantitative analyses, which assume that all measurements are equally representative of the protein, are thus more accurate and powerful.
A similar problem exists in classification theory. It has been shown that classification models trained on data with labeling errors tend to be more complex and less accurate than models trained on data without labeling errors [6,7,9]. Multiple algorithms to find and remove such data have been developed. An excellent review of such procedures is presented in [4]. While these procedures can be applied to the proteomics problem, it is important to note a distinction between the problem of interest and the one addressed by these algorithms.
In the general classification filtering problem, the aim is an improved ability to classify instances, and thus this is the primary criterion against which these algorithms are judged. In other words, the accuracy of the filtered training set is secondary to the overall improved performance of the classification algorithm trained on this data. In contrast, the overall accuracy of the filtered data set is of paramount interest in the proteomics filtering problem. In a forthcoming paper, we will evaluate our proposed procedure against alternative algorithms from both proteomic and classification literature. In this paper, our emphasis is on presenting the new testing procedure, and demonstrating its properties.
The paper is thus organized as follows. Section 2 presents the theoretical basis for the algorithm as well as the procedure. Section 3 uses a simulation study to demonstrate the effectiveness of the proposed algorithm. Section 4 wraps up the paper with a discussion and concluding remarks.
The Basic Setup
Consider a data set of N instances, of which N_1, N_2, ..., N_{K-1} are presumed to belong to classes C_1, C_2, ..., C_{K-1} respectively, with N_k > 1 for k = 1, ..., K - 1. Let C_K be a 'mega-class' consisting of the N_K instances unassigned to a specific class, or instances which are the sole representative of a class, so that

$$ N_K = N - \sum_{k=1}^{K-1} N_k. $$

For simplicity, we use the shorthand notation i ∈ C_k to indicate an instance i which belongs to class C_k, while i ∉ C_k indicates an instance which does not. Let x_i = (x_{i1}, x_{i2}, ..., x_{in}) be the vector of observed intensities from instance i, (i = 1, ..., N), across n (independent) samples. The available data X = (x_1, x_2, ..., x_N) is thus an n × N matrix of such observed intensities. The 'distance' between any two instances with observed intensities x_i and x_j can be measured using any standard distance or quasi-distance function d(·,·), with d_{ij} = d(x_i, x_j). For instance, d_{ij} could be a measure of the dissimilarity between peptides over the n samples in the study. In this case, one popular quasi-distance function is defined by the correlation between x_i and x_j, d_{ij} = 1 - r_{ij}, where

$$ r_{ij} = \frac{\sum_{\ell=1}^{n} (x_{i\ell} - \bar{x}_{i\cdot})(x_{j\ell} - \bar{x}_{j\cdot})}{\sqrt{\sum_{\ell=1}^{n} (x_{i\ell} - \bar{x}_{i\cdot})^2 \, \sum_{\ell=1}^{n} (x_{j\ell} - \bar{x}_{j\cdot})^2}}. $$

Here \bar{x}_{i\cdot} = \sum_{\ell=1}^{n} x_{i\ell}/n, for each i = 1, ..., N. Let D = {d_{ij} : i, j = 1, ..., N} be the N × N (symmetric) matrix comprised of these between-instance observed distances. Without loss of generality, we assume that the entries of D are ordered such that the first N_1 entries belong to C_1, the next N_2 entries belong to C_2, etc. Accordingly, we partition D as

$$ D = \begin{pmatrix} D_{11} & D_{12} & \cdots & D_{1K} \\ D_{21} & D_{22} & \cdots & D_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ D_{K1} & D_{K2} & \cdots & D_{KK} \end{pmatrix}, $$

where D_{kk} is the N_k × N_k matrix of between-instance distances within class C_k, and the elements of D_{k_1 k_2} represent the distances between the N_{k_1} instances belonging to C_{k_1} and the N_{k_2} instances belonging to C_{k_2}. Note in particular that D_{k_1 k_2} ≡ D_{k_2 k_1}^⊤.
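For illustration, the correlation quasi-distance matrix D can be computed from the intensity matrix X in a few lines; this sketch simply implements the definitions above:

```python
import numpy as np

def quasi_distance_matrix(X: np.ndarray) -> np.ndarray:
    """X: (n, N) intensity matrix, n samples (rows) by N instances (columns)."""
    r = np.corrcoef(X, rowvar=False)  # N x N matrix of correlations r_ij
    d = 1.0 - r                       # quasi-distance d_ij = 1 - r_ij
    np.fill_diagonal(d, 0.0)          # d_ii = 0
    return d
```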
To begin with, consider at first class $C_1$ and the $N_1$ instances initially assigned to it. For a fixed $i$, $i = 1, \ldots, N_1$, let $i \in C_1$ be a given instance in class $C_1$ and set $d_i^{(k)}$ to be the $i$-th row of $D_{1k}$ for $k = 1, \ldots, K$. For the class $C_1$, we consider the stochastic modeling of the elements of $D_{11}, D_{12}, \ldots, D_{1K}$. We assume that for each instance $i \in C_1$, the observed within-class distances from it to the other $N_1 - 1$ instances in $C_1$ are i.i.d. random variables according to a class-specific distribution, $G(\cdot)$, so that

$$ d_{ij} \sim G(\cdot), \qquad j \in C_1,\ j \neq i $$

(since $d_{ii} \equiv 0$). Here, the c.d.f.s $G_i(\cdot)$ are defined for each fixed $i \in C_1$, and any $j \in C_1$, as

$$ G_i(t) := \Pr(d_{ij} \le t), \qquad t \in \mathbb{R}. \tag{5} $$

Our notation in (5) stresses that we are assuming that the distribution of distances from each individual instance to the remaining instances in $C_1$ is identical, $G_i \equiv G$. Similarly, the distances between the given instance, $i \in C_1$, and the $N_k$ instances in class $C_k$, $k = 2, \ldots, K$, are also i.i.d. random variables according to some distribution $F^{(k)}(\cdot)$. Here $F^{(2)}, F^{(3)}, \ldots, F^{(K)}$ are $K - 1$ distinct c.d.f.s defined for each $i \in C_1$, and any $j \in C_k$, as

$$ F^{(k)}(t) := \Pr(d_{ij} \le t), \qquad t \in \mathbb{R}. \tag{7} $$

We assume throughout this work that $G(\cdot), F^{(2)}(\cdot), \ldots, F^{(K)}(\cdot)$ are continuous distributions with p.d.f.s $g(\cdot), f^{(2)}(\cdot), \ldots, f^{(K)}(\cdot)$, respectively. If we further assume that all the $N_1$ instances from class $C_1$ are equally representative of the true intensity across all $n$ samples, we would expect that the distances between any two instances from within class $C_1$ are stochastically smaller than distances between instances from within $C_1$ and instances associated with class $C_k$ where $k \neq 1$. Accordingly, we have

Assumption 1 (Stochastic ordering). For each $k$, $k = 2, \ldots, K$,

$$ F^{(k)}(t) \le G(t), \qquad \forall t \in \mathbb{R}, $$

or equivalently, the between-class distances are stochastically larger than the within-class distances.

In light of (7), the distribution of distances from the $i$-th instance in $C_1$ to any other random instance $J$, selected uniformly from among the $N - N_1$ instances not in $C_1$, is thus the mixture

$$ \bar{F}_i(t) := \sum_{k=2}^{K} \frac{N_k}{N - N_1}\, F^{(k)}(t), $$

where we have taken $\Pr(J \in C_k \mid J \notin C_1) := N_k/(N - N_1)$. Further, if $I$ is a randomly selected instance in class $C_1$, selected with probability $\Pr(I = i \mid I \in C_1) = 1/N_1$, for $i = 1, 2, \ldots, N_1$, then it follows that

$$ \bar{F}(t) := \frac{1}{N_1} \sum_{i=1}^{N_1} \bar{F}_i(t). $$

Similarly, if $I$ and $J$ represent two distinct instances, both randomly selected from $C_1$ with $\Pr(I = i, J = j \mid I \in C_1, J \in C_1) = 1/[N_1(N_1 - 1)]$, then

$$ \bar{G}(t) := \frac{1}{N_1(N_1 - 1)} \sum_{i \neq j} \Pr(d_{ij} \le t) \equiv G(t). $$

It follows by Assumption 1 that

$$ \bar{F}(t) \le \bar{G}(t), \qquad \forall t \in \mathbb{R}. \tag{12} $$
Now, for any $t \in \mathbb{R}$, define

$$ \psi(t) := \bar{G}^{-1}\big(1 - \bar{F}(t)\big). \tag{13} $$

It can be easily verified that the function $h(t) := \psi(t) - t$ has a unique root, $t^*$, such that $t^* = \psi(t^*)$ and

$$ \bar{G}(t^*) = 1 - \bar{F}(t^*) =: \tau. \tag{14} $$

By Assumption 1 and (12), it follows that $\bar{G}(t^*) = \tau > 0.5$. As we will see below, the value of $t^*$ serves as a cut-off point to differentiate between the distribution governing distances between instances in class $C_1$ and the distribution of distances going from instances in $C_1$ to all remaining instances.
Constructing the test
Consider at first class $C_1$ and the $N_1$ instances initially assigned to it. Based on the available data we are interested in constructing a testing procedure for determining whether or not a given instance that was assigned to class $C_1$ should be retained or be removed from it (and potentially be reassigned to a different class). That is, for each selected instance from the list $i \in \{1, 2, \ldots, N_1\}$ of instances labeled $C_1$, we consider the statistical test of the hypotheses

$$ H_0^{(i)}: i \in C_1 \tag{15} $$
$$ H_1^{(i)}: i \notin C_1 \tag{16} $$

for $i = 1, \ldots, N_1$. The final result of these successive $N_1$ hypothesis tests is the set of all those instances in $C_1$ for which $H_0^{(i)}: i \in C_1$ was rejected, thus providing the set of those instances in $C_1$ which were deemed to have been mislabeled. As we will see below, the successive testing procedure we propose is constructed so as to control the maximal probability of a type I error, while minimizing the probability of a type II error.
Towards that end, define for each $i \in \{1, 2, \ldots, N_1\}$

$$ Z_i := \sum_{j \in C_1,\, j \neq i} I\big[d_{ij} \le t^*\big], \tag{17} $$

where $t^*$ is defined by (14) and $I[A]$ is the indicator function of the set $A$. In light of the relation (12), $Z_i$ will serve as a test statistic for the above hypotheses. The distribution of $Z_i$ under both the null and alternative hypotheses can be explicitly defined in terms of Binomial random variables, as is presented in the following lemma (proof omitted).
Lemma 2.1. Let $Z_i$ be as defined in (17) above, with $i = 1, 2, \ldots, N_1$. Then
(a) under $H_0^{(i)}$, $Z_i \sim \mathrm{Bin}(N_1 - 1,\ \tau)$;
(b) under $H_1^{(i)}$, $Z_i \sim \mathrm{Bin}(N_1 - 1,\ 1 - \tau)$.

Accordingly, the statistical test we propose will reject the null hypothesis $H_0^{(i)}$ whenever

$$ Z_i \le a_\alpha \tag{18} $$

for some suitable critical value $a_\alpha$ (to be explicitly determined below), which should satisfy, for each $i = 1, \ldots, N_1$,

$$ \Pr\big(Z_i \le a_\alpha \mid H_0^{(i)}\big) \le \alpha \tag{19} $$

for some fixed (and small) statistical error level $\alpha \in (0, 0.5)$.
The constant $a_\alpha$ is the (appropriately calculated) $\alpha$-th percentile of the $\mathrm{Bin}(N_1 - 1, \tau)$ distribution. If $b(\alpha; n, p)$ denotes the $\alpha$-th quantile of a $\mathrm{Bin}(n, p)$ distribution, then for given $\alpha$ and $\tau$, the value $a_\alpha = b(\alpha; N_1 - 1, \tau)$ is determined as the largest integer satisfying (19). The final result of this repeated testing procedure is given by the set of all instances in $C_1$ for which $H_0^{(i)}$ was rejected,

$$ \mathcal{R}_\alpha := \{\, i \in C_1 : Z_i \le a_\alpha \,\}, $$

providing the set of those instances in $C_1$ for which the binomial threshold is achieved and which therefore have been deemed mislabeled. Similarly, $\mathcal{A}_\alpha := \{\, i \in C_1 : Z_i > a_\alpha \,\}$ provides the set of instances correctly identified in $C_1$. It remains only to determine the optimal value of $a_\alpha$ for the test.
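The following is a minimal sketch of this test in Python (our own illustration, not the paper's code); it assumes SciPy, and the function names are ours. It computes the counts $Z_i$ from the within-class block $D_{11}$ and the largest critical value whose binomial tail stays below $\alpha$.

```python
import numpy as np
from scipy.stats import binom

def critical_value(alpha: float, N1: int, tau: float) -> int:
    """Largest integer a with P(Bin(N1-1, tau) <= a) <= alpha
    (returns -1 if even a = 0 already exceeds alpha, i.e. never reject)."""
    a = int(binom.ppf(alpha, N1 - 1, tau))     # smallest a with cdf >= alpha
    while a >= 0 and binom.cdf(a, N1 - 1, tau) > alpha:
        a -= 1
    return a

def test_class(D11: np.ndarray, t_star: float, alpha: float, tau: float):
    """Z_i counts within-class distances from instance i at or below t*;
    H0^(i) is rejected (instance flagged as mislabeled) when Z_i <= a_alpha."""
    N1 = D11.shape[0]
    # the diagonal d_ii = 0 always satisfies 0 <= t* (t* > 0 here), so drop it
    Z = (D11 <= t_star).sum(axis=1) - 1
    a_alpha = critical_value(alpha, N1, tau)
    rejected = np.where(Z <= a_alpha)[0]       # indices of the set R_alpha
    return Z, a_alpha, rejected
```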
Controlling type I and type II errors
With $H_0^{(i)}$ and $H_1^{(i)}$ as in (15)-(16), consider the "global" pair of hypotheses

$$ H_0^*: \bigcap_{i=1}^{N_1} H_0^{(i)} \qquad \text{versus} \qquad H_1^*: \bigcup_{i=1}^{N_1} H_1^{(i)}. \tag{20} $$

The hypothesis $H_0^*$ above states that all the instances in $C_1$ are correctly identified, whereas $H_1^*$ is the hypothesis that at least one of the instances in $C_1$ is misidentified. We denote by $R = |\mathcal{R}_\alpha|$ the cardinality of the set $\mathcal{R}_\alpha$, so that $R$ is a random variable taking values over $\{0, 1, \ldots, N_1\}$. Note trivially that

$$ \{R > 0\} \equiv \bigcup_{i=1}^{N_1} \{Z_i \le a_\alpha\}. $$

We consider the "global" test which rejects $H_0^*$ in (20) if for at least one $i$, $i = 1, \ldots, N_1$, $Z_i \le a_\alpha$, or equivalently, if $\{R > 0\}$. The probability of a type I error associated with this "global" test is therefore

$$ \alpha' := \Pr\big(R > 0 \mid H_0^*\big) \le \sum_{i=1}^{N_1} \Pr\big(Z_i \le a_\alpha \mid H_0^{(i)}\big) \le N_1 \alpha, $$

using the Bonferroni inequality, since by (18)-(19) each summand is at most $\alpha$. The global level $\alpha'$ can thus be controlled by taking $\alpha = \alpha_0 / N_1$ for some $\alpha_0$, to ensure that $\alpha' \le \alpha_0$. If $\{Z_1, \ldots, Z_{N_1}\}$ were to be independent or associated random variables [3], then under $H_0^*$,

$$ \alpha' \le 1 - (1 - \alpha)^{N_1} = 1 - \Big(1 - \frac{\alpha_0}{N_1}\Big)^{N_1}. $$

It follows, for sufficiently large $N_1$ (as $N_1 \to \infty$), that

$$ \alpha' \longrightarrow 1 - e^{-\alpha_0}. \tag{22} $$

The distribution of $Z_i$ under the alternative hypothesis is explicitly available (see Lemma 2.1(b)), so the type II error rate of the procedure can also be explicitly controlled. In fact, the symmetry of the Binomial distribution about $\tau$ and $1 - \tau$, with $\tau > 0.5$, can be exploited to show that when $\tau$ is sufficiently large ($\tau > \tau^*$, say), the type I error rate, $\alpha$, serves also as a bound on the type II error rate, $\beta$, where we define

$$ \beta := \Pr\big(Z_i > a_\alpha \mid H_1^{(i)}\big). $$

The conditions under which $\beta \le \alpha$ holds are provided in the following lemma, whose proof is given in the appendix below.

Lemma 2.2. Suppose that the test of the hypotheses $H_0^{(i)}$ versus $H_1^{(i)}$, as given in (15)-(16), is conducted using the cut-off $a_\alpha$ in (18)-(19). Then $a_\alpha \ge (N_1 - 1)/2$ implies $\beta \le \alpha$.

Lemma 2.2 states that when $a_\alpha \ge (N_1 - 1)/2$, the type I and type II errors can be controlled simultaneously through $\alpha$. It can be similarly shown that analogous properties also exist when explicitly controlling the type II error rate. Accordingly, the algorithm thus has optimal behavior when $\tau$ is sufficiently far from 0.5 to ensure that $a_\alpha \ge (N_1 - 1)/2$. For a given value of $N_1$ and $\alpha$, the bound $\tau^*$ on $\tau$ can be explicitly calculated using polynomial solvers or the normal approximation to the binomial, depending on the magnitude of $N_1$.
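A hedged numeric illustration of these two error-control claims (our own check, not from the paper) is sketched below: with the Bonferroni choice $\alpha = \alpha_0/N_1$, the family-wise error under independence sits near $1 - e^{-\alpha_0}$, and once $a_\alpha \ge (N_1 - 1)/2$ the type II error $\beta$ falls below $\alpha$, as in Lemma 2.2. It reuses `critical_value()` from the previous sketch; the parameter values are arbitrary.

```python
import math
from scipy.stats import binom

alpha0, N1, tau = 0.05, 100, 0.9
alpha = alpha0 / N1                           # Bonferroni per-test level
fwe_indep = 1 - (1 - alpha) ** N1             # exact FWE if the Z_i were independent
print(fwe_indep, 1 - math.exp(-alpha0))       # both ~ 0.0488

a = critical_value(alpha, N1, tau)            # from the earlier sketch
beta = 1 - binom.cdf(a, N1 - 1, 1 - tau)      # P(Z_i > a_alpha | H1), Lemma 2.1(b)
print(a >= (N1 - 1) / 2, beta <= alpha)       # True, True for these values
```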
Estimation
We note that both $\bar{F}$ and $\bar{G}$ are generally unknown (as are $t^*$ and $\psi$), but can easily be estimated non-parametrically from the available data by their respective empirical c.d.f.s, for sufficiently large $N_1$ and $N - N_1$. For each given instance $i \in C_1$,

$$ \hat{G}_i(t) := \frac{1}{N_1 - 1} \sum_{j \in C_1,\, j \neq i} I\big[d_{ij} \le t\big], \qquad \hat{\bar{F}}_i(t) := \sum_{k=2}^{K} \frac{N_k}{N - N_1}\, \hat{F}_i^{(k)}(t), $$

where, for each $k = 2, 3, \ldots, K$,

$$ \hat{F}_i^{(k)}(t) := \frac{1}{N_k} \sum_{j \in C_k} I\big[d_{ij} \le t\big]. $$

Clearly, $\hat{G}_i(t)$ and $\hat{\bar{F}}_i(t)$ are empirical c.d.f.s for estimating, based on the $i$-th instance, $G(t)$ and $\bar{F}(t)$ respectively. Accordingly, when combined,

$$ \hat{\bar{F}}(t) := \frac{1}{N_1} \sum_{i=1}^{N_1} \hat{\bar{F}}_i(t), \qquad \hat{\bar{G}}(t) := \frac{1}{N_1} \sum_{i=1}^{N_1} \hat{G}_i(t) \tag{24} $$

are the estimators of $\bar{F}(t)$ and $\bar{G}(t)$, respectively. Further, in similarity to (13), we set

$$ \hat{\psi}(t) := \hat{\bar{G}}^{-1}\big(1 - \hat{\bar{F}}(t)\big), $$

and we let $\hat{t}^*$ denote the "solution" of $\hat{\psi}(\hat{t}^*) = \hat{t}^*$ (the empirical c.d.f.s being step functions, the fixed point is taken as the point at which $\hat{\psi}(t) - t$ changes sign). Clearly, the value of $\tau$ in (14) would be estimated by

$$ \hat{\tau} := \hat{\bar{G}}(\hat{t}^*). $$

Note that in view of (24), $N_1 \hat{\tau} \equiv \sum_{i=1}^{N_1} \hat{\tau}_i$, with $\hat{\tau}_i \equiv \hat{G}_i(\hat{t}^*)$ for $i = 1, 2, \ldots, N_1$. With $\hat{t}^*$ as an estimate of $t^*$ in (14), we have that $Z_i \equiv (N_1 - 1)\hat{G}_i(\hat{t}^*)$ and $\hat{\tau}_i \equiv Z_i/(N_1 - 1)$, and therefore an equivalent estimate of $\tau$ is

$$ \hat{\tau} = \frac{1}{N_1} \sum_{i=1}^{N_1} \frac{Z_i}{N_1 - 1}. $$
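A minimal sketch of this estimation step follows (our own illustration; the grid-scan approximation of the fixed point and the function name are our assumptions). It pools the within-class and between-class distances, forms the two empirical c.d.f.s, and locates the $t$ at which $\hat{\bar{G}}(t) + \hat{\bar{F}}(t)$ is closest to 1, which is equivalent to the fixed point of $\hat{\psi}$.

```python
import numpy as np

def estimate_tau_tstar(D: np.ndarray, N1: int):
    """Non-parametric estimates of t* and tau from the partitioned distance
    matrix D (first N1 rows/columns are the instances labeled C_1)."""
    within = D[:N1, :N1][~np.eye(N1, dtype=bool)]   # pooled within-C_1 distances
    between = D[:N1, N1:].ravel()                   # pooled C_1-to-rest distances

    def G_hat(t):  # empirical cdf of within-class distances
        return np.mean(within <= t)

    def F_hat(t):  # empirical cdf of between-class distances
        return np.mean(between <= t)

    # fixed point of psi(t) = Ghat^{-1}(1 - Fhat(t)): the t where
    # G_hat(t) + F_hat(t) = 1; scan the observed distances for it
    grid = np.unique(np.concatenate([within, between]))
    k = np.argmin(np.abs([G_hat(t) + F_hat(t) - 1.0 for t in grid]))
    t_star_hat = grid[k]
    tau_hat = G_hat(t_star_hat)
    return t_star_hat, tau_hat
```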
A Simulation Study
Our simulation study is designed to mimic the conditions of a LC-MS/MS shotgun proteomics study. In this light, we consider a set-up in which N 1 instances (peptides) belong to class/protein C 1 , and N 2 instances model the peptides belonging to any other class/protein. The distance between instances is measured using correlation distance, again mimicking a common way to measure similarity between peptides, although similar results can be obtained using other (quasi-) distance metrics (e.g. Euclidean distance).
The Simulation Setup
To establish some notation, suppose that $y = (y_1, y_2, \ldots, y_N)'$ is an $N \times 1$ random vector having some joint distribution $H_N$. We assume, without loss of generality, that the values of $y$ are standardized, so that $E(y_i) = 0$ and $V(y_i) = 1$ for each $i = 1, 2, \ldots, N$. We denote by $D$ the corresponding correlation (covariance) matrix of $y$, $D = \mathrm{cor}(y, y')$.
To allow for the misclassification of instances, we included, for a certain proportion $p$, some $m := \lfloor pN_1 \rfloor$ of the $N_1$ 'observed' instance intensities from $C_1$ that were actually simulated with $D_{1,1}$ replaced by $D^*_{1,1} = (1 - \rho_2)\, I_{N_1} + \rho_2\, J_{N_1}$ in (30) above. Thus, $m$ is the number of misclassified instances among the $N_1$ instances that were initially labeled as belonging to $C_1$.
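Since the full correlation structure (30) is not shown in this excerpt, the following is only our hedged guess at a generator consistent with the description: an exchangeable block correlation matrix with within-$C_1$ correlation $\rho_1$, within-$C_2$ correlation $\rho_2$, cross-block correlation $\rho_{12}$, and $m$ mislabeled instances whose within-block correlation is dropped to $\rho_2$. All names are ours.

```python
import numpy as np

def simulate_intensities(n, N1, N2, rho1, rho12, rho2, p, rng):
    """n i.i.d. draws of an N-vector (N = N1 + N2) with an exchangeable block
    correlation matrix; m = floor(p * N1) instances labeled C_1 are
    'mislabeled', i.e. their correlation with the rest of C_1 is rho2."""
    N = N1 + N2
    m = int(p * N1)
    R = np.full((N, N), rho12)
    R[:N1, :N1] = rho1                   # correctly labeled C_1 block
    R[N1:, N1:] = rho2                   # the 'other class' block
    bad = slice(N1 - m, N1)              # place the mislabeled instances last
    R[bad, :N1] = rho2                   # they no longer share C_1's signal
    R[:N1, bad] = rho2
    np.fill_diagonal(R, 1.0)
    # small jitter keeps the Cholesky factorization numerically stable
    L = np.linalg.cholesky(R + 1e-9 * np.eye(N))
    Y = rng.normal(size=(n, N)) @ L.T    # rows: samples, columns: instances
    return Y, m

rng = np.random.default_rng(1)
Y, m = simulate_intensities(n=100, N1=50, N2=100,
                            rho1=0.5, rho12=0.2, rho2=0.2, p=0.1, rng=rng)
```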
Remark 1. In this simulation, $\rho_2$ reflects (as a proxy) the common characteristics of two mislabeled instances. When $\rho_2 = \rho_1$, distances between two mislabeled instances have the same distribution as distances between two correctly labeled instances, as would be the case in binary classification. When $\rho_2 = \rho_{12}$, distances between two mislabeled instances have the same distribution as distances between a correctly labeled instance and a mislabeled instance; this would be the case if the probability that two mislabeled instances come from the same class is zero.
Table 1. For a single run, each instance has one of four possible outcomes; the notation for the total count of instances with each of these outcomes in a single run.

Truth                | Retained | Removed
Correctly labeled    | TN       | FP
Mislabeled           | FN       | TP

For each simulation run, we recorded $\hat{\tau}$, $\hat{t}^*$, and counted the number of true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs), as defined in Table 1. From this data, we calculated the sensitivity, specificity, false discovery rate (FDR), false omission rate (FOR), and percent reduction in FOR (%∆) for each run, defined as follows:

$$ \text{sensitivity} = \frac{TP}{TP + FN}, \quad \text{specificity} = \frac{TN}{TN + FP}, \quad \mathrm{FDR} = \frac{FP}{FP + TP}, \quad \mathrm{FOR} = \frac{FN}{FN + TN}, \quad \%\Delta = 100 \cdot \frac{p - \mathrm{FOR}}{p}. $$

Each statistic was averaged over all 1000 runs.
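These per-run summaries translate directly into a few lines of Python; the sketch below is ours (function names are assumptions), and the final line reproduces the %∆ value quoted later for the Table 3 entry with p = 0.25 and FOR = 0.1026.

```python
def run_metrics(TP, TN, FP, FN):
    """Per-run statistics from the Table 1 counts; a 'positive' is an
    instance flagged as mislabeled (removed)."""
    sens = TP / (TP + FN) if TP + FN else float("nan")  # undefined when p = 0
    spec = TN / (TN + FP)
    fdr = FP / (FP + TP) if FP + TP else 0.0
    fom = FN / (FN + TN)                                 # false omission rate
    return sens, spec, fdr, fom

def pct_delta(fom, p):
    """Percent reduction in FOR relative to the trivial keep-everything
    procedure, whose FOR equals p."""
    return 100.0 * (p - fom) / p

print(round(pct_delta(0.1026, 0.25), 1))   # 59.0
```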
Results with no mislabeling (case p = 0)
Simulations where p = 0 (i.e. no mislabeling) were used to illustrate the theoretical assumptions of the testing procedure, as this presents a case in which the global null hypothesis in (20) holds. In this case in particular, the FDR measures the proportion of incorrectly rejected hypotheses in (15) out of all rejected hypothesis tests, given that at least one hypothesis test was rejected; the proportion of the B runs in which at least one instance was removed provides an estimate of α'. Figure 1 and Table 2 show the FDR for various n, N 1 , and ρ 12 . As seen in the figures, the FDR converges to zero as n increases for all values of N 1 , but the convergence slows as N 1 increases. When n is small, at least one instance was removed from almost all classes. We attribute this behavior to the inherent correlation structure of the data. This will be explored further in Section 3.2.2.
The specificity measures the ability to keep instances which are correctly labeled. From Figure 2 and Table 2, it can be seen that while almost all runs remove at least one instance (based on the FDR), most instances are retained. With n = 10, ρ 12 = ρ 2 = 0.2, and N 1 = 25, an average of 23.2 out of 25 were retained in each run. This number decreased as N 1 increased, corresponding to the slower convergence of the FDR when N 1 is larger. Even in this case, for n = 10, ρ 12 = ρ 2 = 0.2 and N 1 = 500, an average of 368.5 instances are retained each time.
For p = 0, the sensitivity is undefined and the FOR is universally zero. These statistics are relevant only when at least one mislabeled instance is present in the data set (i.e. p > 0) and therefore are omitted from Table 2.
Behavior under artificially constructed independence
In light of Remark 2 and the likely impact of the correlation structure present in the data on the FDR, we designed a simulation study to explore this effect. In this study the "distance" matrices were artificially created in a manner which preserved dependence due to symmetry but removed all other dependencies across distances.
To simulate "distance" matrices in this case, we began by randomly generating a distance matrix using the original simulation procedure with p = 0 and n = 100, N 1 = 500, N 2 = 1000, and ρ = (0.5, 0.2, 0.2). A normal distribution was fit to the within-C 1 distances and the C 1 to C 2 distances, as shown in Figure 3. For N 1 = 25, 50, 100, 500, 1000, 2000, 5000, and 12000 and N 2 = 1000, these normal distributions were used to generate B new distance matrices by drawing d ij for 1 ≤ i < j ≤ N as follows: d ij was drawn from the normal fitted to the within-C 1 distances when both i and j belong to C 1 , and from the normal fitted to the C 1 -to-C 2 distances otherwise, with d ji = d ij by symmetry. Figure 4 shows the results of this procedure on five sets of B = 1000 runs. As N 1 increased, the FDR also increased, but never surpassed the theoretical limit of 1 − e −0.05 , consistent with the theoretical result given in (22).
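A short sketch of this artificial-independence construction follows (ours, with assumed parameter names for the two fitted normals): each off-diagonal entry is an independent draw, and only the symmetry dependence d_ji = d_ij is retained.

```python
import numpy as np

def synthetic_distance_matrix(N1, N2, mu_g, sd_g, mu_f, sd_f, rng):
    """Distance matrix with only the symmetry dependence retained:
    d_ij ~ N(mu_g, sd_g^2) when both i, j are in C_1, and
    d_ij ~ N(mu_f, sd_f^2) otherwise (normals fitted to one simulated run)."""
    N = N1 + N2
    D = rng.normal(mu_f, sd_f, size=(N, N))
    D[:N1, :N1] = rng.normal(mu_g, sd_g, size=(N1, N1))
    D = np.triu(D, 1)        # keep the strict upper triangle (d_ii = 0) ...
    D = D + D.T              # ... and mirror it so that D is symmetric
    return D
```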
Results under some mislabeling (case p > 0)
To develop context regarding the results with some mislabeling (i.e. p > 0), we first consider for a moment a trivial filtering procedure which retains all N 1 instances in C 1 . Since the pN 1 incorrect instances are retained, the FOR of the trivial procedure is pN 1 /N 1 = p. Clearly, an FOR of p can be achieved without performing any filtering at all, simply by returning all N 1 instances in C 1 . For a testing procedure to improve upon this nominal level, it must result in an FOR below p. With p > 0, the FDR calculates the proportion of correctly labeled instances among those removed. However, in the context of proteomics, where the set of retained peptides is used in subsequent analyses, we found that the specificity was a more relevant metric. Consequently, the FOR and specificity are the primary statistics used to evaluate our proposed testing procedure in the presence of labeling errors (p > 0), with %∆ providing a standardized method of evaluating the decrease in FOR in a manner independent of p. Figure 5 provides the average FOR and %∆ over the B = 1000 simulation runs as a function of n for ρ 12 = 0.2 and varying combinations of N 1 , ρ 2 and p. In all cases, the procedure reduced the FOR relative to p (that is, %∆ > 0 for all results). Small values of n produced the smallest reduction (highest FOR), but this converged at approximately n = 100 to a value dependent on p, N 1 , and ρ 2 . Table 3 (the mean FOR and %∆ at n = 10 and n ≥ 250 for each combination of p, N 1 , and ρ 2 at ρ 12 = 0.2) shows how the average FOR compares between small sample sizes (n = 10) and large sample sizes (averaged across n = 250, 500, and 700) at each value of p, N 1 , and ρ 2 . For p = 0.05, the FOR converged to 0 for all values of N 1 and ρ 2 . For higher values of p, decreasing N 1 and decreasing ρ 2 caused the FOR to be higher for n ≥ 250. For example, when p = 0.25 and N 1 = 500, the instances remaining in the class after filtering will still include 10.26% (%∆ = 59.0) mislabeled instances for n = 10 and 2.85% (%∆ = 88.6) mislabeled instances when n is large. These are both substantial decreases from the 25% mislabeled instances seen in the unfiltered data. Figure 6 provides the average specificity over the 1000 simulation runs as a function of n for ρ 12 = 0.2 and varying combinations of N 1 , ρ 2 and p. The specificity always converged to 1 as n increased, with convergence by n = 250 in all cases. For larger values of p, convergence was faster (by n = 100). For small values of n, a higher specificity is observed when ρ 2 and N 1 are smaller, although even in the worst case the specificity was greater than 0.75. Table 4 gives the average estimate of the FDR, FOR, %∆, sensitivity, and specificity in the case where n = 50 and ρ 12 = 0.2 across all combinations of p and N 1 . As already noted above, the table shows that the FOR and sensitivity decrease as p and N 1 increase. The FDR gives the proportion of correctly labeled instances among all the removed instances. This measure increases as a function of N 1 , corresponding to the decrease in the specificity of the procedure resulting in more correctly labeled instances being filtered out. On the other hand, it decreases as a function of p due to the increased proportion of mislabeled instances available to be removed.
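As a quick arithmetic check of the quoted Table 3 entries (p = 0.25, N_1 = 500), the %∆ definition reproduces both figures:

```latex
\%\Delta = 100\cdot\frac{p - \mathrm{FOR}}{p}
         = 100\cdot\frac{0.25 - 0.1026}{0.25} \approx 59.0,
\qquad
100\cdot\frac{0.25 - 0.0285}{0.25} = 88.6.
```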
The sensitivity of the procedure, as discussed above, gives the proportion of mislabeled instances that are detected out of all mislabeled instances. This measure increases when p is small and N 1 is large, and decreases for large p and small N 1 , corresponding to the FOR estimate. For example, when p = 0.20, N 1 = 500, and ρ 2 = 0.2, the data consists of 100 mislabeled instances and 400 correctly labeled instances. On average, the procedure removed 94.5 % of mislabeled instances and only 0.7 % of correctly labeled instances, based on the reported sensitivity and specificity. Thus, the resulting filtered data set has an average of 5.5 mislabeled instances and 397.2 correctly labeled instances, for an average of 402.7 total instances. This reflects a decrease in the proportion of mislabeled instances by 93.15 %: while 20 % of the original data set was mislabeled, only 1.4 % of the filtered data set remains mislabeled.
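The arithmetic behind this worked example follows directly from the reported sensitivity and specificity:

```latex
\text{retained mislabeled} = 100\,(1 - 0.945) = 5.5, \qquad
\text{retained correct} = 400\,(1 - 0.007) = 397.2,
```
```latex
\frac{5.5}{5.5 + 397.2} \approx 0.014, \qquad
100\cdot\frac{0.20 - 0.014}{0.20} \approx 93\%.
```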
Discussion
In this paper, we have presented a testing procedure for identifying incorrectly labeled instances when two or more classes are present. Our non-parametric approach, which requires very few assumptions, yields a very high specificity, and can be implemented very easily and efficiently using standard statistical software.
As demonstrated in the simulation study, our testing procedure has a high specificity and low FOR provided the number of measurements (n) on each instance is large. Decreasing the value of n yields a more conservative test (i.e., one which is less likely to reject the null hypothesis in (15) and remove instances), since each distance is measured less precisely. Even for extremely small values of n, it was still possible to reduce the FOR and maintain a specificity over 80 % for all classes.
For a fixed n, classes with fewer instances (i.e. small values of N 1 ) had a higher FOR and specificity relative to larger classes. Such a property is relatively unique compared to "classical" classification algorithms: many of the existing classification procedures found in the literature have difficulties when the number of instances varies substantially across classes [8]. This difficulty extends to those procedures utilizing these classification approaches to also detect mislabeled instances. Our testing procedure avoids this problem and maintains the integrity of small classes by analyzing each using a "one-vs-all" strategy that is most conservative for small values of N 1 (say for 25 ≤ N 1 ≤ 50).
On the other hand, when the number of instances is extremely low (2 ≤ N 1 ≤ 25 based on our simulation studies), the accuracy of the non-parametric estimates, especially τ̂, becomes unreliable. In the most extreme cases, the available data is insufficient to ever reject the null hypothesis even if a reliable estimate of τ could be found. For example, in LC-MS/MS proteomics, extremely small proteins (2 ≤ N 1 ≤ 5) often consist entirely of inaccurate or mislabeled peptides and make up a substantial proportion of the reported proteins. This will be addressed in a subsequent work using a complementary procedure where instances are only retained if the null hypothesis is rejected.
The use of a Bonferroni-type procedure, aimed at protecting against removing correctly identified instances, is extremely conservative, prioritizing a high specificity at the cost of a higher FOR. Even in this conservative case, the FOR in our simulations was universally reduced across all values of N 1 , n, ρ and p. Less conservative FWER procedures, FDR-type procedures [1,2], or procedures seeking to explicitly control the FOR could also be considered to further reduce the FOR in the filtered data. The primary convenience of the Bonferroni procedure is its universal applicability, especially in light of the complexity of the dependency structure of the distances and, consequently, of the test statistics, Z i .
Because the testing procedure estimates Ḡ and F̄ non-parametrically using the available data, these estimates are affected by the presence of mislabeled instances. The resulting estimates, τ̂ and t̂*, of τ and t* may therefore be biased when mislabeled instances are included in the class. One possible remedy is to iteratively remove a small number of instances and re-estimate τ and t* until some stopping criterion is met. Developing such a sequential estimation procedure is left to future work.
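One possible form of such a sequential procedure is sketched below, purely as an illustration of the idea (the paper leaves its actual design to future work). It reuses `estimate_tau_tstar()` and `test_class()` from the earlier sketches and removes one flagged instance per iteration before re-estimating.

```python
import numpy as np

def iterative_filter(D, N1, alpha, max_iter=20):
    """Illustrative sequential variant: re-estimate t* and tau after each
    removal; stop when no instance is flagged (or after max_iter rounds)."""
    keep = list(range(N1))
    rest = list(range(N1, D.shape[0]))
    for _ in range(max_iter):
        idx = keep + rest
        sub = D[np.ix_(idx, idx)]
        t_hat, tau_hat = estimate_tau_tstar(sub, len(keep))
        Z, a, rejected = test_class(sub[:len(keep), :len(keep)],
                                    t_hat, alpha / len(keep), tau_hat)
        if rejected.size == 0:
            break                                   # stopping criterion
        worst = int(rejected[np.argmin(Z[rejected])])  # most extreme instance
        keep.pop(worst)
    return keep                                     # retained C_1 instances
```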
Although we have used correlation as a measure of distance, the procedure is generally applicable whenever the observed data for each instance can be effectively combined into a "measure of distance". In our subsequent work, we also include a demonstration of the procedure using Manhattan distance.
The Clinical Significance and Involvement in Molecular Cancer Processes of Chemokine CXCL1 in Selected Tumors
Chemokines play a key role in cancer processes, with CXCL1 being a well-studied example. Due to the lack of a complete summary of CXCL1’s role in cancer in the literature, in this study, we examine the significance of CXCL1 in various cancers such as bladder, glioblastoma, hemangioendothelioma, leukemias, Kaposi’s sarcoma, lung, osteosarcoma, renal, and skin cancers (malignant melanoma, basal cell carcinoma, and squamous cell carcinoma), along with thyroid cancer. We focus on understanding how CXCL1 is involved in the cancer processes of these specific types of tumors. We look at how CXCL1 affects cancer cells, including their proliferation, migration, EMT, and metastasis. We also explore how CXCL1 influences other cells connected to tumors, like promoting angiogenesis, recruiting neutrophils, and affecting immune cell functions. Additionally, we discuss the clinical aspects by exploring how CXCL1 levels relate to cancer staging, lymph node metastasis, patient outcomes, chemoresistance, and radioresistance.
Furthermore, CXCL1 significantly contributes to oncogenic processes. Although its relevance across various cancers has been well-researched, there is no comprehensive overview summarizing the entirety of CXCL1's significance in cancer processes. To bridge this gap, we have compiled a comprehensive summary of CXCL1's role and divided our work into several parts. In our previous papers, we focused on delineating CXCL1's importance in gastrointestinal cancers [15] and reproductive cancers [16]. In this current work, we expound upon the significance of CXCL1 in the remaining array of malignancies, including bladder cancer, glioblastoma, hemangioendothelioma, hematolymphoid tumors (leukemias), Kaposi's sarcoma, lung cancer, osteosarcoma, renal cancer, and various skin cancers like malignant melanoma, basal cell carcinoma, and cutaneous squamous cell carcinoma, as well as thyroid cancer.
The Involvement of CXCL1 in Cancers: A Universal Model
The role of CXCL1 in cancer processes can be divided into its influence on cancer cells and on cancer-associated cells. The primary consequences of CXCL1 action on cancer cells include increased proliferation and enhanced migration of cancer cells [17]. However, not all types of tumors experience increased proliferation; for instance, in cholangiocarcinoma, CXCL1 may reduce cancer cell proliferation [18]. Additionally, CXCL1 induces the migration of cancer cells, particularly through the induction of epithelial-mesenchymal transition (EMT) [19]. Moreover, CXCL1 exerts an anti-apoptotic effect on cancer cells [17,20], which contributes to chemoresistance and radioresistance.
CXCL1 also acts on cancer-associated cells. The expression of CXCR2 is present on neutrophils [1], making CXCL1 a chemoattractant for these cells [7]. Consequently, CXCL1 directly participates in the recruitment of tumor-associated neutrophils (TAN) [21]. Additionally, CXCL1 is involved in the recruitment of granulocytic-myeloid-derived suppressor cells (G-MDSC), as these cells express CXCR2 [22]. However, monocytic-myeloid-derived suppressor cells (M-MDSC) do not express CXCR2 [22]; therefore, CXCL1 affects these cells indirectly. Like other CXCR2 ligands, it may act directly on granulocyte and macrophage progenitor cells (GMPs), increasing the number of M-MDSCs in the bone marrow and thereby enhancing the intensity with which other factors recruit these cells to the tumor niche [23]. Consequently, CXCL1 contributes to tumor immune evasion.
In general, CXCL1, along with other CXCR2 ligands, acts as a chemoattractant for neutrophils under physiological conditions. In areas where these cells are needed, such as during microbial infections, the expression of CXCL1 and other CXCR2 ligands increases [24]. Consequently, neutrophils infiltrate such tissues, where they fulfill their role. However, CXCR2 ligands can inhibit an excessively intense immune system response. Therefore, CXCL1 also participates in the recruitment of myeloid-derived suppressor cells (MDSC) [25]. Because tumorigenic mechanisms resemble a chronic non-healing wound, these mechanisms occur in tumors as well [26].
Another crucial role of both CXCL1 and other CXCR2 ligands is the induction of angiogenesis [8,9], influenced by CXCR2 expression on endothelial cells [9]. CXCL1 may also affect cancer-associated fibroblasts (CAF) [27], leading to the senescence of these cells and their transformation into cells that support tumor development.
Bladder Cancer
In 2020 alone, more than 570 thousand new cases of bladder cancer were diagnosed, which accounted for 3.0% of all cancers [28]. Also, there were 212 thousand deaths caused by this cancer, which accounted for 2.1% of all cancer deaths. These statistics show that treatment methods for bladder cancer are inadequate [29]. For this reason, tumor mechanisms are being studied to develop new therapeutic approaches for this disease; one possible target is CXCL1.
In bladder cancer tumors, the expression of CXCL1 is elevated [30,31]. At the same time, CXCL1 levels are higher in basal type than in luminal type bladder cancer [32]. CXCL1 from bladder cancer tumors makes its way into the urine and, for this reason, patients with this cancer have elevated levels of this chemokine in their urine compared to healthy individuals [33][34][35]. That is why urine CXCL1 levels may be a marker of this disease.
In the bladder cancer niche, CXCL1 is produced by cancer cells [36]. The expression of this chemokine is also found in other cells, including tumor-associated macrophages (TAM) and cancer-associated fibroblasts (CAF) [37]. CXCL1 is important in tumorigenesis in bladder cancer, as it induces the proliferation of cancer cells [38]. It also causes the migration of bladder cancer cells [36,38] due to the induction of EMT in these cells [39]. Radiation therapy may elevate the expression of CXCL1 in bladder cancer cells [40]. This induces the migration of bladder cancer cells, potentially leading to treatment inefficacy.
CXCL1 also affects cancer-associated cells. It increases α-smooth muscle actin (αSMA) expression in fibroblasts, which indicates that it transforms these cells into CAFs [37]. CXCL1, along with other CXCR2 ligands in bladder cancer tumors, is responsible for recruiting neutrophils into the tumor niche [32]. This leads to differences between basal and luminal type bladder cancer: in basal type bladder cancer there is a higher expression of CXCR2 ligands and a higher number of TAN in the tumor niche than in luminal type bladder cancer [32]. CXCL1 is also responsible for the recruitment of MDSCs to the bladder cancer niche [41], cells that cause cancer immune evasion and resistance to chemotherapy. The great importance of CXCL1 in cancer processes is also shown by in vivo experiments in which CXCL1 increased bladder cancer tumor growth [37].
CXCL1 also induces angiogenesis in bladder cancer tumors by causing endothelial cells to migrate. At the same time, CXCL1 expression in cancer cells is dependent on epidermal growth factor receptor (EGFR) ligands derived from endothelial cells [42]. In turn, the expression of EGFR ligands in endothelial cells is dependent on VEGF, which indicates a reciprocal communication between endothelial cells and bladder cancer cells.
The level of CXCL1 expression in tumors correlates with tumor stage [30,36,37]. Urinary CXCL1 levels may [34,36] or may not [33] correlate with tumor stage, depending on the study cited. Higher CXCL1 expression in the tumor is associated with a worse prognosis for the patient (Table 1) [30,31,34,37,40]. For this reason, drugs targeting CXCL1 have anti-tumor effects in bladder cancer. An example of this is HL2401, an anti-CXCL1 monoclonal antibody [38], which inhibits the proliferation and migration of bladder cancer cells, as well as bladder cancer tumor growth in an in vivo model. To improve the effect of chemotherapy, either CXCR2 inhibitors or the mentioned antibody can be used. Some anticancer drugs, such as epidoxorubicin, increase CXCL1 expression in bladder cancer cells [39], which leads to the EMT of cancer cells and the production of metastasis as a side effect of therapy. Blocking the action of CXCL1 prevents this side effect from occurring (Figure 1). DFS: disease-free survival; OS: overall survival; RFS: relapse-free survival (abbreviations used in Table 1).
Figure 1. CXCL1 in bladder cancer. In a bladder cancer tumor, CXCL1 is generated by cancer cells, tumor-associated macrophages (TAM), and cancer-associated fibroblasts (CAF). Consequently, the levels of this chemokine are elevated in bladder cancer compared to healthy tissue. Notably, CXCL1 is released from the tumor, leading to its presence in the urine of patients with bladder cancer, offering a diagnostic marker for this particular cancer type. CXCL1 plays a crucial role in recruiting neutrophils and myeloid-derived suppressor cells (MDSCs) into the tumor microenvironment while also inducing angiogenesis. Additionally, CXCL1 exerts its influence on cancer cells, TAMs, and fibroblasts. It promotes the proliferation and migration of cancer cells by inducing epithelial-mesenchymal transition (EMT). Through its interaction with TAM, CXCL1 enhances the M2 polarization of these cells. Furthermore, CXCL1 acts on fibroblasts, driving their transformation into cancer-associated fibroblasts (CAF).
Primary Brain Tumors: Glioblastoma
Primary brain tumors are cancers that originate from brain cells. It is estimated that the incidence of this group of tumors is almost 24 cases per 100 thousand population per year [44]. Globally, more than 308 thousand new cases of brain tumors were diagnosed in 2020, accounting for 1.6% of all cancers [28]. At the same time, cancers in this group have an unfavorable prognosis. In 2020, there were more than 250 thousand deaths from these tumors, which accounted for 2.5% of deaths from all cancers [28].
The most important group and most commonly diagnosed primary brain tumors are gliomas [45], of which the most aggressive is glioblastoma, which has the highest grade, IV, according to the World Health Organization (WHO) classification [45]. It accounts for 14.5% of all primary brain tumors and nearly half of malignant primary brain tumors [44]. As the median patient survival after diagnosis for this type of cancer is only 8 months [44,46], this type of cancer is being intensively studied to develop better therapeutic approaches. One possible mechanism that can be targeted in glioblastoma tumors may be CXCL1 and its receptor CXCR2.
Depending on the literature cited, CXCL1 expression levels in glioblastoma tumors are either unchanged [47,48] or elevated [49][50][51][52] relative to healthy brain tissue. In glioblastoma tumors, CXCL1 expression is higher than in low-grade gliomas [50,51]. Also, CXCL1 expression is higher in recurrent glioblastoma than in primary glioblastoma [51]. The level of CXCL1 expression in gliomas may not be highest in glioblastoma; the highest percentage of tumors with high CXCL1 expression is found in oligodendrogliomas [49].
In addition, the expression of other CXCR2 ligands may differ in brain tumors from healthy brain tissue. CXCL3, CXCL6, and CXCL8/IL-8 expression is either upregulated or unchanged, depending on the study [47,48], while CXCL5 expression is either downregulated or unchanged, also depending on the study [47,48]. One available study shows that cerebrospinal fluid CXCL1 levels are elevated in patients with glioblastoma [53]. In other brain tumors, CXCL1 expression may be downregulated relative to healthy brain tissue, e.g., in diffuse astrocytomas [48], or not different from healthy brain tissue [47]. In pilocytic astrocytomas and anaplastic astrocytomas, CXCL1 expression is not different from that in healthy brain tissue [47].
Increased CXCL1 expression in glioblastoma cancer cells is a result of A-kinase-interacting protein 1 (AKIP1) activity [54]. Also, the activation of P2X7 by extracellular ATP increases the expression of CXCL1 [55].
CXCL1 is involved in tumorigenesis in brain tumors, where it increases the proliferation of glioblastoma cancer cells [50,54]. At the same time, in oligodendrogliomas, the mitogenic properties of CXCL1 may depend on platelet-derived growth factor (PDGF) [49]. CXCL1 also causes the proliferation and self-renewal of glioblastoma cancer stem cells [56]. CXCL1 also causes cancer cell migration, as shown by experiments on glioblastoma cell lines [54,57]. This is in part due to an increase in MMP2 expression [57]. CXCL1 also increases programmed death-ligand 1 (PD-L1) expression in glioblastoma cells, which enhances cancer immune evasion [54].
Glioblastoma tumor cells secrete CXCL1, which results in the recruitment of mesenchymal stem cells into the tumor niche [58]. These cells secrete various factors, including CXCL1, CXCL8/IL-8, and interleukin-6 (IL-6). Also, high CXCL1 expression in glioblastoma tumors leads to an increase in the number of macrophages with M2 polarity, as well as granulocytic-myeloid-derived suppressor cells (G-MDSC) and monocytic-myeloid-derived suppressor cells (M-MDSC) [51], which indicates an enhancement of cancer immune evasion. Also, these cells secrete S100A9, which has a pro-survival effect on cancer cells [51].
CXCL1 causes resistance to radiotherapy and to chemotherapy with temozolomide (TMZ) [50][51][52][54]. With radiation therapy, there is an increase in CXCL1 expression, which increases the resistance of the tumor to treatment [50,[59][60][61][62]. This process is dependent on the activation of casein kinase 1 alpha 1 (CK1α) [62] and an increase in the expression of the inhibitor of nuclear factor κB zeta (IκBζ) [59]. IκBζ binds to NF-κB, which increases the expression of various genes dependent on this transcription factor, such as CXCL1. The increase in CXCL1 expression in glioblastoma cells following radiation therapy persists for up to 35 days [60]. Subsequently, CXCL1 increases NF-κB activation in cancer cells, which leads to the mesenchymal transition of cancer cells [50]. Also, CXCL1 causes an increase in TAM and MDSC, which secrete S100A9 [51], with a pro-survival effect on cancer cells. As a result of the cited mechanisms, CXCL1 causes resistance to radiotherapy and an increase in resistance to further treatment after the first cycle of radiotherapy [51,60].
CXCL1 is also involved in resistance to anti-angiogenic therapy. CXCL1 has angiogenic properties and for this reason can complement and replace vascular endothelial growth factor (VEGF) [63][64][65][66]. However, CXCL1 in glioblastoma tumors also has other pro-angiogenic properties. CXCR2+ cancer stem cells are found in glioblastoma tumors [67]. Under the influence of CXCL1, these cells exhibit vascular mimicry independently of VEGF. Also, CXCL1 induces the recruitment of endothelial progenitor cells (EPC) into the tumor niche [55]. These cells integrate into the vessels, leading to VEGF-independent angiogenesis. The angiogenic properties of CXCL1, EPC, and CXCR2+ cancer stem cells can compensate for the blocking of VEGF activity during anti-angiogenic therapy, which leads to resistance to treatment.
Due to the important influence of CXCL1 on tumorigenic processes in glioblastoma tumors, elevated levels of this chemokine are associated with a worse prognosis for the patient (Table 2) [43,50,51]. Also, elevated levels of CXCL1 are associated with a worse prognosis for glioma patients [43,59] (Figure 2).

Figure 2. CXCL1 in glioblastoma. In glioblastoma tumors, CXCL1 originates from cancer cells, exerting a significant impact on their behavior. It precipitates heightened proliferation and promotes the expression of MMP2, fostering cancer cell migration. Additionally, CXCL1 contributes to elevated PD-L1 expression, facilitating immune evasion by glioblastoma cells. The effects extend to glioblastoma cancer stem cells, where CXCL1 induces increased proliferation and self-renewal. Beyond its direct influence on cancer cells, CXCL1 extends its reach to non-cancerous cells within the glioblastoma tumor microenvironment. Notably, it induces the recruitment of tumor-associated macrophages (TAM) and myeloid-derived suppressor cells (MDSC), triggering the production of S100A9. This molecule confers a pro-survival advantage to tumor cells, leading to resistance against radiotherapy and chemotherapy. Moreover, CXCL1 plays a pivotal role in angiogenesis, with the ability to directly act on endothelial cells. It instigates the recruitment of endothelial progenitor cells (EPC), facilitating their transformation into new vessels. Simultaneously, CXCL1 influences cancer stem cells, promoting vascular mimicry. This intricate interplay contributes to resistance mechanisms against anti-angiogenic therapy in glioblastoma.
Hemangioendothelioma
Hemangioendothelioma is a group of rare blood vessel tumors [68]. It is a benign neoplasm that rarely metastasizes to lymph nodes. To date, more than 200 cases of hemangioendothelioma have been described. CXCL1 plays an important role in the development of this tumor. High basal NF-κB activity has been reported in hemangioendothelioma cells, which results in high CXCL1 expression [69]; for this reason, there is high expression of CXCL1 in clinical samples of this cancer. Although CXCL1 is not important in hemangioendothelioma cell proliferation, it does induce cancer cell migration. In a mouse model, CXCR2 ligands are important in hemangioendothelioma tumor growth [69]. CXCL1 is also responsible for angiogenesis in the tumor of this cancer.
Hematolymphoid Tumors
Hematolymphoid tumors are a group of malignancies that includes myelodysplastic neoplasms, leukemias, mastocytosis, and lymphomas [70,71]. These neoplasms originate from hematopoietic stem cells, hematopoietic precursors, and leukocytes at varying degrees of differentiation, depending on the type of disease. It is estimated that nearly 475 thousand new cases of leukemia and 544 thousand cases of non-Hodgkin lymphoma were diagnosed in 2020, accounting for 2.5% and 2.8% of all cancers, respectively [28]. Also, there were nearly 311 thousand deaths caused by leukemia and nearly 260 thousand deaths caused by non-Hodgkin lymphoma, which accounted for 3.1% and 2.6% of all cancer deaths, respectively [28].
Acute Myeloid Leukemia
One of these cancers is acute myeloid leukemia (AML) [72], originating from hematopoietic precursors. The incidence of this type of leukemia is 0.5-0.7 cases per 100,000 per year in children and 0.9 cases per 100,000 per year in adults [72]. Patients with this cancer have elevated levels of CXCL1 in their blood [73], and after bone marrow transplantation, CXCL1 levels in the blood return to normal. The expression of CXCL1 is lowest in AML cells with the M3 FAB phenotype [74,75]. In comparison, the expression of CXCR2 on AML cells is notably higher than that of other chemokine receptors [76], suggesting the potential for CXCR2 ligands to influence these cells. Furthermore, CXCR2 expression is elevated in AML cells compared to control samples [77]. Notably, CXCR2 expression is lowest in AML cells with the M3 FAB phenotype, while it is highest in AML cells with the M4/M5 FAB phenotype [74,75]. This heightened expression in AML cells correlates with poorer prognoses [77,78], underscoring the significance of the CXCL1-CXCR2 axis in tumorigenic processes in AML.
CXCL1 expression in AML blasts is correlated with the expression of CCL2, CCL3, CCL4, and CXCL8/IL-8 [79]. High CXCL1 expression in AML cells is associated with worse overall survival (Table 3) [43,78,80] and worse event-free survival [80]. At the same time, CXCL1 expression in AML blasts is not associated with gender, age, AML cell morphology, or genetic abnormalities [79,80]. It is associated with the expression of A-kinase interacting protein 1 (AKIP1) [80]. Also, the expression of CXCR2 in the blood of AML patients is elevated and is associated with lower overall survival and lower relapse-free survival [78]. EFS: event-free survival; OS: overall survival (abbreviations used in Table 3).
CXCL1 may contribute to the development of AML. This chemokine alone does not cause AML blast proliferation [79]. However, with the simultaneous action of granulocyte-macrophage colony-stimulating factor (GM-CSF), interleukin-3 (IL-3), and stem cell factor (SCF), CXCL1 increases the proliferation of AML blasts in one-third of patients, which indicates that in bone marrow CXCL1 increases AML blast proliferation, but only in some AML patients [79]. CXCL1 expression in AML cells, similar to the expression of other CXCR2 ligands and VEGF, can be increased by hypoxia [81]. CXCL1 has pro-angiogenic properties [63][64][65][66]. This may explain the increased bone marrow vascularization in AML patients, associated with the formation of a tumor niche in the bone marrow [82,83].
Chronic Myeloid Leukemia
Chronic myeloid leukemia (CML) is a myeloproliferative disorder of hematopoietic stem cells. Leukemia stem cells (LSC) of this cancer are phenotypically similar to granulocyte-macrophage progenitors (GMP) [84,85]. The global incidence of CML is 1-2 cases per 100,000 per year [86]. CML is a leukemia resulting from the translocation t(9;22)(q34;q11) [70], which leads to the formation of the Philadelphia chromosome containing the breakpoint cluster region-v-abl Abelson murine leukemia viral oncogene homolog 1 (BCR-ABL1) fusion gene [86]. This mutation also occurs in 20% to 30% of adults and 2% to 3% of pediatric patients with acute lymphoblastic leukemia (ALL) [87] and in 0.5% to 3% of AML cases [88]. The product of this gene is a kinase that has lost the domain responsible for regulating its activity. Because of this, BCR-ABL1 is constantly active, causing proliferation and inhibiting apoptosis of the CML cell.
CXCL1 may play an important role in the development of CML. The expression of CXCR2 is higher in CML LSCs than in hematopoietic stem cells [89]. Studies in mice have shown that CML alters mesenchymal stem cells (MSC) in terms of their secretion profile of various factors [89]. Under the influence of CML cells, tumor necrosis factor-α (TNF-α) levels in the bone marrow are increased. This cytokine increases the expression of CXCR2 ligands in MSCs, which increases the proliferation and self-renewal of LSCs; this is an important pathway in LSC function. For this reason, the CXCR2 antagonist SB225002 has therapeutic properties against CML [89,90].
Acute Lymphocytic Leukemia
ALL derives from either B cell precursors or T cell precursors. For this reason, it can be divided into B-lineage acute lymphocytic leukemia (B-ALL) and T-lineage acute lymphocytic leukemia (T-ALL) [71,87]. The incidence of ALL is 3-4 cases per 100,000 per year in children and 1 case per 100,000 per year in adults [87].
Patients with ALL have elevated levels of CXCL1 in the blood [91], highest in ALL-L3 patients and lowest in ALL-L1 patients. Significantly, EBV-transformed lymphoblasts show reduced CXCL1 expression [92]. After bone marrow transplantation, there is a decrease in CXCL1 levels in the blood, even below the levels found in healthy individuals [91]. In pediatric patients with ALL, an elevated level of both CXCL1 and CXCL8 is associated with an increased likelihood of bloodstream infections (BSI) [93]. This is related to the fact that CXCR2 ligands are involved in mobilizing neutrophils from the bone marrow, particularly during infections [94]. Consequently, the levels of CXCR2 ligands in the blood increase during infections.
The role of CXCL1 in ALL is not well understood, and it is not known whether it has any important function. However, CXCR2 is expressed in childhood B-ALL cells [95]. This means that these cells will respond to CXCL1, whose levels are elevated in the blood of ALL patients [91]. The use of the CXCR2 antagonist SB225002 in vitro has an apoptotic effect on B-ALL and T-ALL cells [96], although this effect may be due to SB225002's direct action on β-tubulin [96-98].
Multiple Myeloma
Multiple myeloma (MM) is a hematolymphoid tumor derived from plasma cells [99]. The incidence of MM is estimated at 4.5 to 6 cases per 100,000 per year [99]. An important factor in the development of MM is CXCL1, as the levels of this chemokine are elevated in the blood of patients with this cancer [100] and increase with the consecutive stages of the disease. High blood levels of CXCL1 in patients with MM are not statistically significantly associated with prognosis, showing only a statistically insignificant trend toward worse prognosis at higher CXCL1 levels [100].
MM cells express CXCR2 and CXCR1, indicating that they can respond to CXCL1 [101]. Bioinformatics analysis indicates that CXCL1 belongs to one of the key genes in myeloma side population cells [102], an MM cell population analogous to cancer stem cells in solid tumors. Also, CXCL1 is significant in the function of MM cells in the bone marrow. MSCs in the bone marrow produce factors such as CCL4/MIP-1β, IL-6, and the CXCR2 ligands CXCL1, CXCL5, CXCL6, and CXCL8/IL-8 [103,104], which is associated with the transfer of miR-146a by MM cells via exosomes to MSCs [105]. Chemokines secreted by MSCs cause NF-κB activation in MM cells.
CXCL1 causes the proliferation of MM cells and is pro-angiogenic [63][64][65][66]; for this reason, CXCL1 levels in the blood of patients with this cancer are correlated with bone marrow microvascular density [100,103]. The source of CXCL1 in the bone marrow may be MSCs [103,104]; in blood, CXCL1 levels are correlated with mast cell counts in the bone marrow, which indicates angiogenesis in the bone marrow in patients with MM [106].
Kaposi's Sarcoma
Kaposi's sarcoma is a cancer associated with Kaposi's sarcoma-associated herpesvirus (KSHV)/human herpesvirus 8 (HHV8) [107,108], with genetic material in the form of double-stranded DNA with a length of 140.5 kb [108]. Together with Epstein-Barr virus (EBV), it belongs to the Gammaherpesvirinae subfamily. It infects various cells, including T cells, B cells, endothelial cells, and keratinocytes, with the main reservoir of latent KSHV/HHV8 being B cells [107]. The seroprevalence of KSHV/HHV8 infection varies across the world; it is highest in sub-Saharan Africa (reaching over 80% in some regions), and in Europe and North America, it is estimated that about 6% of the population carries this virus [107]. It causes Kaposi's sarcoma, multicentric Castleman's disease (MCD), and body cavity-based lymphoma (BCBL) [108]. Infection with this virus is a necessary but not sufficient condition for Kaposi's sarcoma. KSHV/HHV8 infection is harmless in most cases for people with a functional immune system. For this reason, an additional condition for the formation of Kaposi's sarcoma is either human immunodeficiency virus (HIV) infection or immunodeficiency associated with, for example, taking immunosuppressive drugs after organ transplantation. The incidence of Kaposi's sarcoma in HIV-positive patients is estimated at 116 per 100,000 people per year in the U.S. [109]. Kaposi's sarcoma also develops in between 0.067% and 2.16% of organ transplant patients, depending on the work cited [110][111][112]. Kaposi's sarcoma differs between the two groups: the form induced by immunosuppressive drugs is iatrogenic Kaposi's sarcoma, and the form caused by simultaneous infection with HIV and KSHV/HHV8 is known as epidemic Kaposi's sarcoma [107].
CXCL1 plays an important role in the pathogenesis of Kaposi's sarcoma. KSHV/HHV8 causes an increase in CXCL1 as well as CXCL8/IL-8 expression in infected cells, as shown by experiments on endothelial cells [113,114]; HIV enhances the effect of KSHV/HHV8 on the expression of the described chemokines [113]. Also, the KSHV/HHV8 genome encodes miR-K3 [115], a miRNA that downregulates the expression of G protein-coupled receptor kinase 2 (GRK2), involved in the downregulation of CXCR2 activity upon activation of this receptor. Downregulation of GRK2 expression by miR-K3 increases the activation of CXCR2, the receptor for CXCL1 and CXCL8/IL-8. The CXCL1-CXCR2 axis is important in KSHV/HHV8 infection and in tumorigenic processes in Kaposi's sarcoma. CXCL1 is crucial for the survival of endothelial cells that are infected with KSHV/HHV8 [114]. Also, CXCR2-Akt/PKB signaling is important in KSHV/HHV8 latency [115]. As CXCL1 and CXCL8/IL-8 are angiogenic factors, they play an important role in the early stages of Kaposi's sarcoma, particularly in the development of its angiogenic phenotype [113].
Lung Cancer
Lung cancer causes the highest number of deaths among all cancers. It is estimated that in 2020 it caused nearly 1.8 million deaths, which accounted for 18% of deaths caused by all cancers [28]. Also, nearly 2.21 million new cases of lung cancer are diagnosed annually, which accounts for 11.4% of all new cancer cases each year [28]. Lung cancer can be divided into non-small-cell lung cancer (NSCLC) and small-cell lung cancer (SCLC) [121]; the former comprises about 85% of all lung cancer cases. NSCLC can be further divided into lung adenocarcinomas (the most common) and lung squamous cell carcinomas. The most significant risk factor for lung cancer is smoking [122], including passive smoking [121,123]. It is estimated that this factor is responsible for about 85% of lung cancer cases [122]. Another significant factor that increases the risk of lung cancer is air pollution, such as that from burning fuel in diesel engines [124] and fossil fuels such as coal [125].
CXCL1 expression is elevated in tumors of various types of lung cancer, including atypical lung cancer, lung adenocarcinoma [126], and NSCLC [126][127][128]. On the other hand, other available studies have shown that CXCL1 expression is decreased in NSCLC tumors [129,130]. Serum levels of CXCL1 are also increased in lung adenocarcinoma patients relative to healthy subjects [126], although another study shows that patients with early-stage NSCLC have lower levels of circulating CXCL1 than healthy subjects [131].
CXCL1 may be involved in the onset of lung cancer. Expression of this chemokine is increased by compounds that constitute air pollutants. For example, the expression of this chemokine in the lung, including in lung fibroblasts, is increased by benzo[a]pyrene diol epoxide [132], a carcinogen from cigarette smoke. CXCL1 expression is increased in BEAS-2B bronchial epithelial cells by 1-nitropyrene (1-NP) but not by ultrafine carbon black (ufCB) particles [133], which increase the expression of another CXCR2 ligand, CXCL8/IL-8. 1-NP is an air pollutant from the combustion of fuel in diesel engines and is suspected of having carcinogenic properties [134]. Chronic inflammation leads to tumorigenesis. This is important with chronic exposure to carcinogens, such as living in an environment with high air pollution from carcinogens or smoking cigarettes for many years.
CXCL1 expression occurs in lung adenocarcinoma cells [135] and can be increased by interactions with other cells in the tumor niche, as shown by experiments on mouse cells [136]. Therefore, high CXCL1 expression in a tumor cell depends on the action of secretory factors such as basic fibroblast growth factor (bFGF) (lung adenocarcinoma [137,138] and squamous cell carcinoma [138]), IL-17 (lung adenocarcinoma and squamous cell carcinoma) [139], VEGF (lung adenocarcinoma) [137], and thrombin (NSCLC) [137,140]. Also, doses of ionizing radiation, such as 4 Gray (Gy) during radiation therapy, can cause an increase in CXCL1 expression in lung adenocarcinoma cells, which may contribute to treatment ineffectiveness [141].
CXCL1 expression may also be associated with changes in the extracellular matrix (ECM) in non-small-cell lung carcinoma tumors. In lung cancer tumors, myofibroblasts cause the unfolding of the type III domains of fibronectin [142]; the modified fibronectin increases CXCL1 expression in lung fibroblasts.
Another factor that can affect CXCL1 expression is hypoxia. Nevertheless, experiments on the A549 and SPC-A1 lines show that hypoxia does not alter CXCL1 expression [143].
Among the factors that reduce CXCL1 expression and function in lung adenocarcinoma cells, dachshund family transcription factor 1 (DACH1) can also be mentioned [126]. In NSCLC cells, CXCL1 expression is also regulated by miR-141 [144].
Also, high CXCL1 expression in lung cancer cells may result from epigenetic changes. In lung adenocarcinoma, there is decreased expression of the histone H3 lysine 36 methyltransferase SET-domain-containing 2 (SETD2) [145]. This enzyme causes methylation of the region 2.0 kbp to 1.5 kbp upstream of the transcription start point of the CXCL1 gene, resulting in decreased expression of this gene. This means that a decrease in SETD2 expression in lung adenocarcinoma tumors leads to an increase in CXCL1 expression [145].
The action of CXCL1 in lung adenocarcinoma tumors is regulated not only by changes in the expression of this chemokine but also by alterations in the function of its receptor, CXCR2. Atypical chemokine receptor 1 (ACKR1)/Duffy antigen receptor for chemokines (DARC) may diminish the activity of CXCR2. This receptor is atypical for CXCL1 and other chemokines [146]. When both ACKR1/DARC and CXCR2 are expressed in a single cell, ACKR1/DARC reduces the activity of chemokines that activate CXCR2. This mechanism may occur in lung adenocarcinoma [146].
CXCL1 is produced in lung tumors not only by cancer cells but also by fibroblasts. These cells start producing CXCL1 under the influence of lung cancer cells, as shown by experiments on these cells cultured with NSCLC cells [147]. At the same time, fibroblasts secrete other factors, including VEGF, GM-CSF, IL-6, CXCL6, CXCL8/IL-8, and CCL5.
CXCL1 can also be produced by NK cells after contact with the immunosuppressive lung tumor microenvironment. This is related, among other things, to the secretion of extracellular vesicles by lung adenocarcinoma cells, which contain miR-150 [148]. This microRNA reduces the expression of cluster of differentiation 226 (CD226)/DNAX accessory molecule-1 (DNAM-1), an adhesion protein important in the cytotoxic functions of lymphocytes [149]. NK cells with reduced anti-cancer functions in the immunosuppressive lung tumor microenvironment acquire pro-cancer properties and begin to produce and secrete VEGF and CXCR2 ligands such as CXCL1, CXCL2, and CXCL8/IL-8, as well as IL-6, CCL2, matrix metalloproteinases (MMP), and many others [148].
CXCL1 plays a crucial role in tumorigenesis in lung cancer. It has been demonstrated to enhance the proliferation of lung cancer cells across various lung adenocarcinoma cell lines [135]. Although the effect is modest, this chemokine can increase proliferation by 10%. Significantly, this effect may operate in an autocrine manner, wherein CXCL1 is produced by cancer cells and subsequently acts on the same cells.
CXCL1 may also be important in the function of lung cancer stem cells, cells that divide infrequently and have high expression of DNA repair enzymes and transporters that excrete xenobiotics, particularly anticancer drugs, outside the cell [150].These cells show resistance to radiotherapy and chemotherapy.After cancer therapy, cancer stem cells are responsible for tumor recurrence.In this process, insulin-like growth factor-I (IGF-I) causes self-renewal of NSCLC cancer stem cells [151], followed by an increase in CXCL1 and placental growth factor (PlGF) expression in these cells, leading to angiogenesis and recurrence of the tumor.
CXCL1 also causes the migration of lung adenocarcinoma cancer cells [141] and is important in lung cancer metastasis.
The production of large amounts of CXCL1 in a lung adenocarcinoma tumor leads to the recruitment of G-MDSC to the lymph node at an early stage of metastasis [152]; these cells contribute to lymph node metastasis, which is related to the secretion of TGF-β1 by these cells.
CXCL1 is important in the function of tumor-associated cells.CXCL1 causes the recruitment of neutrophils [153] into the tumor niche; these cells express myeloperoxidase (MPO) and Fas ligand (FasL), which inhibits the anti-tumor effect of lymphocytes.CXCL1 can also induce regulatory T cell (T reg ) recruitment to a malignant pleural effusion [144].These cells reduce the antitumor response of the immune system, which is an important part of the tumorigenic processes in NSCLC.
CXCL1 also acts on endothelial cells, causing angiogenesis in lung adenocarcinoma and squamous cell carcinoma [139,151].CXCL1 may be important in resistance to radiotherapy.When lung adenocarcinoma cells are exposed to a dose of 4 Gy of radiation, there is an activation of NF-κB and an increase in CXCL1 expression in these cells [141].This chemokine caused cancer cell migration in that model, which may have contributed to metastasis as a side effect.
CXCL1 expression in NSCLC tumors is positively correlated with the TNM stage and lymph node metastasis, but not with tumor size and carcinoembryonic antigen (CEA) levels [134].Similar results were obtained for lung adenocarcinoma [126].Higher CXCL1 expression in NSCLC tumors [126][127][128]154], lung squamous cell carcinoma [126], and lung adenocarcinoma [126] is associated with a worse prognosis for patients (Table 4).In stage I and II NSCLC, CXCL1 may not affect the prognosis for patients [130] (Figure 3).
Table 4. Impact of CXCL1 expression level on survival of patients with lung cancer.
Cancer | Effect | Group size | Notes | References
Lung squamous cell carcinoma | Decreased survival | Meta-analysis | OS, but not PFS; meta-analysis of GEO databases | [126]
Lung squamous cell carcinoma | No effect | 482 | OS, DFS; GEPIA dataset | [43]
DFS - disease-free survival; OS - overall survival; PFS - progression-free survival.
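Prognosis associations like those in Table 4 are typically derived by dichotomizing tumors at the median CXCL1 expression and comparing survival curves. The following is a minimal R sketch of that standard analysis, not the code behind the cited studies; the data frame and column names (expr, CXCL1, os_months, death) are hypothetical.
```r
library(survival)  # Kaplan-Meier estimation and log-rank test

# expr: hypothetical data frame, one row per tumor, with CXCL1
# expression plus overall-survival follow-up (months) and death (0/1).
expr$cxcl1_group <- ifelse(expr$CXCL1 > median(expr$CXCL1), "high", "low")

# Kaplan-Meier curves for CXCL1-high vs. CXCL1-low tumors.
km <- survfit(Surv(os_months, death) ~ cxcl1_group, data = expr)
plot(km, col = c("red", "blue"), xlab = "Months", ylab = "Overall survival")

# Log-rank test: is high CXCL1 expression associated with worse survival?
survdiff(Surv(os_months, death) ~ cxcl1_group, data = expr)
```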
Osteosarcoma
Osteosarcoma is a rare tumor that arises from mesenchymal cells [155]. The primary sites of this cancer are the long bones and pelvis. It is estimated that the incidence of this cancer is about 0.2 per 100 thousand population per year; it occurs mainly in adolescents, in whom the incidence is about 1 case per 100 thousand population per year [155]. Osteosarcoma often gives rise to lung metastasis [155].
CXCL1 expression is higher in osteosarcoma tumors than in healthy tissue [156], and it increases with each tumor stage [156].
CXCL1 expression in osteosarcoma cancer cells depends on factors highly expressed in most tumors of this cancer [157]. Osteosarcoma cancer cells secrete extracellular vesicles that act on cells in the bone, in particular on osteoblasts and osteoclasts [158]. This causes an increase in the expression of CXCL1 in these cells, as well as of other CXCR2 ligands. This pathway is also responsible for an increase in the expression of other cytokines, such as receptor activator of NF-kappaB ligand (RANKL), interleukin-1β (IL-1β), IL-6, lipocalin 2 (LCN2), CCL2, and CCL5 [158]. Another factor causing an increase in CXCL1 expression is low pH. Osteosarcoma tumors have a lower pH, as do many other cancers; this causes an increase in the expression of many chemokines and cytokines, including CXCL1, in MSCs [159].
The CXCL1-CXCR2 axis is important in lung metastasis of osteosarcoma. Human pulmonary artery endothelial cells secrete CXCL1 [160], which activates the CXCR2-focal adhesion kinase (FAK)-PI3K-PKB-NF-κB pathway in osteosarcoma circulating tumor cells [156,160], causing an increase in vascular cell adhesion molecule-1 (VCAM-1) expression in these cells [156,160] and thereby increasing their adhesion to blood vessel walls in the lungs. This adhesion protein is also important in the transendothelial migration of osteosarcoma cells and in the formation of metastasis in the lung [160].
Renal Cancer
Renal cell carcinoma is a cancer located in the kidney [161]. It is estimated that 430 thousand new cases of renal cell carcinoma were diagnosed in 2020 alone, accounting for 2.2% of all cancers [28]. Also, there were nearly 180 thousand deaths caused by this cancer, accounting for 1.8% of deaths caused by all cancers [28]. Risk factors for renal cell carcinoma include cigarette smoking, hypertension, and obesity [161]. Genetic factors also increase the likelihood of developing this cancer. An example is people with von Hippel-Lindau syndrome, which involves a defect in the VHL gene. This gene encodes the von Hippel-Lindau protein (pVHL), which causes ubiquitylation of hypoxia-inducible factor (HIF)-1α and HIF-2α [162], leading to the proteasomal degradation of these proteins. Reduced pVHL activity induces an increase in the levels of HIF-1α and HIF-2α and an increase in the transcriptional activity of HIF-1 and HIF-2. In cells without mutations in the VHL gene, these factors are activated by hypoxia and are a very significant part of tumorigenesis. It was found that 93% of renal cell carcinomas have a mutation in the VHL gene [163]. This is a characteristic feature of this cancer [161].
CXCL1 may be involved in the appearance of renal cell carcinoma, as shown in experiments on mice. Damaged kidney tubular epithelial cells release CXCL1 [164], which activates fibroblasts responsible for the infiltration of the kidney by neutrophils. This leads to inflammatory reactions in this organ, which facilitates the formation of renal cell carcinoma.
CXCL1 expression is increased relative to healthy tissue in renal cell carcinoma [165]. Also, CXCL1 levels in the plasma of patients with this cancer are elevated relative to healthy individuals [166], which indicates that this chemokine may be involved in tumorigenesis.
CXCL1 in renal cell carcinoma is produced by cancer cells [167]. A pro-inflammatory environment, particularly IL-1β, is responsible for CXCL1 expression in renal cell carcinoma [168]. At the same time, elevated IL-1β levels may depend on high basal NF-κB activation. Studies on clear-cell renal cell carcinoma have shown that ubiquitin-specific peptidase 53 (USP53) expression is downregulated in this tumor [167]. This causes an increase in NF-κB activation, which leads to an increase in IL-1β and CXCL1 expression. In adrenocortical carcinomas, CXCL1 may also be produced by mast cells [169].
CXCL1 causes the recruitment of G-MDSCs to the tumor niche in renal cell carcinoma [168]. Also, CXCL1 induces angiogenesis in renal cell carcinoma [166]. CXCL1, along with other CXCR2 ligands, can cause renal cell carcinoma metastasis to the lung [166].
CXCL1 expression is positively correlated with the pathological stage of renal cell carcinoma [165]. Also, higher expression of CXCL1 in renal cell carcinoma tumors is associated with a worse prognosis for the patient (Table 5) [43,165,168,170].
DFS - disease-free survival; OS - overall survival.
Rhabdomyosarcoma
Rhabdomyosarcoma is a rare cancer found in children and the most common among sarcomas [171]. Cancer cells of this tumor resemble skeletal myoblasts. In the US alone, only 350 cases of this cancer are diagnosed annually. The incidence in North American and European countries is 0.45 cases per 100,000 people under the age of 20. Children with sarcomas have increased blood CXCL1 levels compared to healthy individuals, although a much greater increase occurs in CXCL8/IL-8 [172], whose high blood levels are associated with a poorer prognosis for patients with sarcomas. CXCL1 levels in the blood are not associated with prognosis for patients with sarcomas, which indicates a greater significance of CXCL8/IL-8 than of CXCL1 in tumor processes in sarcomas.
Skin Cancer
Malignant Melanoma
Malignant melanoma is a cancer originating from melanocytes [173]. It is estimated that nearly 325 thousand new cases of malignant melanoma were diagnosed in 2020 alone, accounting for 1.7% of all cancers [28]. Also in 2020, there were 57 thousand deaths caused by this cancer, which accounted for 0.6% of all deaths by cancer [28]. The incidence of this cancer is not uniform worldwide. The highest incidence is in Australia and New Zealand, where about 35 cases of malignant melanoma per 100,000 population per year are recorded [173,174]. By contrast, in Europe and North America, the incidence of malignant melanoma is about 10 cases per 100 thousand people per year, twice the figure from 1975 [173]. The lowest incidences of malignant melanoma are observed in southern Asia and northern Africa, at less than 1 case per 100 thousand people per year. Risk factors for malignant melanoma primarily include excessive exposure to UV light from sunbathing or using tanning beds [173]. UV light has a mutagenic effect that leads to genetic changes resulting in tumorigenesis, especially in people with fair skin [174]. While genetic factors alone can cause malignant melanoma, this is not the main cause of this cancer.
CXCL1 expression is higher in malignant melanoma compared to healthy skin [175,176]. At the same time, CXCL1 expression is higher in primary malignant melanoma than in metastatic malignant melanoma [175]. Very high CXCL1 expression is found in malignant melanoma cancer cells [177-179]. Approximately 70% of cell lines derived from this cancer show CXCL1 expression [180], compared to a lack of CXCL1 expression in normal melanocytes [181]. The high expression of CXCL1 in malignant melanoma may be due to mutations. Individuals with a duplicated region on chromosome 4q13 have an increased predisposition to developing various cancers, including melanoma [182]; this is the locus of the CXCL1 gene and other CXCR2 ligands.
CXCL1 is also important in photo-carcinogenesis. Ultraviolet B (UVB) rays cause the production of CXCR2 ligands in epidermal keratinocytes and dermal fibroblasts [183], which leads to the infiltration of neutrophils into UV-burned skin. Neutrophils participate in inflammatory reactions; they secrete ROS and RNS, which have mutagenic effects and can initiate malignant melanoma.
High basal NF-κB activity is another reason for increased CXCL1 expression in malignant melanoma [7,184,185]. It results from increased expression of NF-κB-inducing kinase (NIK), which directly activates the inhibitor of NF-κB kinase (IKK) [185] and can indirectly activate NF-κB through the activation of extracellular signal-regulated kinase (ERK) and mitogen-activated protein kinase (MAPK), which phosphorylate NF-κB [185]. CXCL1 expression in malignant melanoma cells also depends on CXCL1 itself inducing CXCR2 activation, which results in increased NF-κB activity [177,179,186] in a mechanism dependent on Ras and p38 MAPK, thus further increasing CXCL1 expression. In addition to the described mechanism, CXCL1 expression in malignant melanoma tumors depends on endothelin-1 (ET-1), which activates its endothelin receptor B (ETB) [187]; in normal melanocytes, ET-1 does not increase CXCL1 secretion. Another factor that increases CXCL1 expression in malignant melanoma tumors is the microphthalmia-associated transcription factor (MITF) [12], which directly binds to the CXCL1 promoter, thus increasing CXCL1 expression.
Malignant melanoma cells secrete CXCL1 [177,179,180]. They also secrete extracellular vesicles that increase CXCL1 expression in various cells in the tumor niche. An example is extracellular vesicles that contain CXCL1 mRNA [188]. Extracellular vesicles secreted by malignant melanoma cancer cells under hypoxia also contain heat shock protein 90 (Hsp90) and phosphorylated IKKα/β [189], a complex that, upon entering the cell, increases NF-κB activation and thus CXCL1 expression. This mechanism has been observed in elevated CXCL1 expression in CAFs under the influence of malignant melanoma cancer cells [189].
CXCL1 induces the proliferation of malignant melanoma cancer cells [179]. One of the first names given to CXCL1 was derived from this effect: MGSA, melanoma growth-stimulatory activity [179]. This effect was seen to be autocrine [177,179]. Once CXCL1 is secreted, it activates its receptor CXCR2, which further increases CXCL1 expression in malignant melanoma cells, as well as induces tumor cell proliferation. In particular, CXCL1 causes an increase in the expression of Ras proteins, such as M-Ras, K-Ras, and N-Ras [190]. This autocrine loop makes the growth of malignant melanoma cells independent of other growth factors [191].
CXCL1 increases the migration and invasion of uveal melanoma cells [192]. Melanoma is a tumor that gives rise to metastasis early and often. One such example is metastasis to the liver by uveal melanoma. CXCL1 inhibits the development of metastasis but not the formation of metastasis [193]. CXCL1 secreted from the primary malignant melanoma tumor increases E-cadherin expression and reduces matrix metalloproteinase 2 (MMP2) expression, which inhibits the development of metastasis. However, after surgical removal of the primary malignant melanoma tumor, there is a sudden development of metastasis due to a decrease in CXCL1 levels and in the inhibitory effect of this chemokine. With that said, more work is needed on the effect of CXCL1 on the proliferation of malignant melanoma cells.
In vivo experiments have confirmed the important role of CXCL1 in malignant melanoma tumor growth [194]. In particular, this is associated with CXCL1 causing angiogenesis [194,195].
Malignant melanoma cancer cells increase CXCL1 expression in CAFs [196,197]. At the same time, this effect is cell line-dependent; some lines may not cause this process. Increased CXCL1 expression in CAFs may be caused by extracellular vesicles containing Hsp90 and phosphorylated IKKα/β [189]. Such extracellular vesicles are secreted by malignant melanoma cancer cells under hypoxia. They cause NF-κB activation in CAFs, which leads to an increase in CXCL1 expression in these cells. Also, extracellular vesicles can contain CXCL1 mRNA, which increases CXCL1 expression [188].
CXCL1 from malignant melanoma cells also acts on keratinocytes, together with bFGF, CXCL8/IL-8, and VEGF-A. CXCL1 causes an increase in keratin 14 expression in keratinocytes [198]. This leads to the formation of an epidermis surrounding nodular melanoma, a structure that occurs in about 90% of nodular melanoma cases.
CXCL1 may not only be important in mechanisms within malignant melanoma tumors but may also affect the whole body of a patient with malignant melanoma. In particular, CXCL1 may decrease the overall immunity of the whole body [199]. This is because malignant melanoma tumors secrete transforming growth factor β (TGF-β) into the bloodstream. This cytokine increases the expression of CXCR2 ligands in the liver, which causes MDSCs to infiltrate this organ. This decreases the liver's contribution to immune system function.
The important action of CXCL1 in tumorigenic processes in malignant melanoma enables the development of anticancer drugs. In vitro experiments have shown that dual CXCR1/CXCR2 inhibitors, such as SCH-479833 and SCH-527123, have an anti-tumor effect on malignant melanoma [200,201]. In vivo experiments using the dual CXCR1/CXCR2 inhibitor Ladarixin (DF2156A) have confirmed this [202]. CXCL1 also causes resistance to chemotherapy; therefore, high CXCL1 expression in the tumor is unfavorable. The chemotherapeutic agent paclitaxel has been found to elevate CXCL1 expression in malignant melanoma cells [185,186]. This effect may stem from the cytotoxic action of anticancer drugs, as other medications such as topoisomerase inhibitors [203], 5-fluorouracil [204], epidoxorubicin [39], as well as paclitaxel and carboplatin [205], have also been shown to increase CXCL1 expression in cancer cells across different types of tumors. This increases resistance to chemotherapy following the first treatment, as well as contributing to the development of metastasis. For this reason, the use of CXCR2 inhibitors together with standard chemotherapeutics may yield better results. Some studies of malignant melanoma have not shown a correlation between high CXCL1 expression and tumor characteristics. CXCL1 was not associated with local recurrence, distant metastasis [206], or patient prognosis (Table 6) [43,175,206].
DFS - disease-free survival; OS - overall survival.
Non-Melanoma Skin Cancer
The most common non-melanoma skin cancers are basal cell carcinoma (BCC) and cutaneous squamous cell carcinoma (cSCC) [207]. Also included in this group are rare skin cancers such as Merkel cell carcinoma, dermatofibrosarcoma protuberans, and atypical fibroxanthoma, each with an incidence of less than 1 case per 100,000 population per year in the US [207].
Globally, the incidence of BCC is nearly 49 cases per 100,000 population per year, or 3.95 million new cases [208]. In contrast, the incidence of cSCC worldwide is 30 cases per 100,000 population per year, or 2.40 million per year [208]. This is more than the number of diagnosed cases of breast cancer (2.26 million), considered the most common type of cancer [28]. The incidence rates of BCC and cSCC are not the same in all regions of the world. The highest incidence of BCC is in Australia, where about 2000 new cases per 100,000 population per year are diagnosed [208,209]. The lowest incidences are reported in sub-Saharan Africa, at 2-4 cases per 100,000 population per year [208], and in Southeast Asia, at 1.63 cases of BCC per 100,000 population. In North America, the incidence of BCC is nearly 781 cases per 100,000 population per year. A similar distribution of cSCC incidence is found worldwide. The lowest incidence is in sub-Saharan Africa and South Asia (less than 1 case per 100,000 population per year), and the highest is in North America (324 cases per 100,000 population per year) [208]. In 2020, there were nearly 64,000 deaths resulting from non-melanoma skin cancer, which accounted for 0.6% of deaths from all cancers worldwide [28]. Data on the incidence of non-melanoma skin cancer indicate that a light skin complexion combined with excessive exposure to UV light is the major risk factor. Other risk factors are xeroderma pigmentosum and albinism [207].
In cSCC, there is elevated CXCL1 expression in the tumor compared to healthy skin [210-212]. CXCL1 expression also occurs in migrating tumor cells. In half of the cases, CXCL1 expression is also found in blood vessels in the tumor [211]. In contrast, most cases of BCC do not show CXCL1 expression in the tumor [211].
CD147/Basigin may be responsible for the increased expression of CXCL1 in cSCC cells [212]. CD147/Basigin activates AP-1, which directly attaches to the CXCL1 promoter, increasing the expression of CXCL1. CXCL1 may participate in the formation of non-melanoma skin cancer in a process called photo-carcinogenesis. Animal studies have shown that UV light increases the expression of CXCR2 ligands, such as KC, in skin cells [183]. This leads to the infiltration of burned skin by neutrophils, cells that participate in inflammatory reactions. Neutrophils secrete reactive oxygen species (ROS) and reactive nitrogen species (RNS), which damage the DNA of skin cells, thus leading to the formation of skin cancer.
CXCL1 is also involved in tumorigenesis in already-formed non-melanoma skin cancer. CXCL1 and CXCL8/IL-8 are important in the autocrine proliferation of non-melanoma skin cancer cells, as shown by experiments on the following cSCC lines: A431, SCC-12, and KB [210,213]. This means that CXCL1 is produced and secreted by the cancer cell and then increases the proliferation of the same cell.
Another source of CXCL1 in cSCC may be CAFs, which secrete small amounts of CXCL1 [214]. At the same time, normal keratinocytes and cSCC cells increase CXCL1 expression in fibroblasts [214], an effect consistent with their effect on CXCL8/IL-8 when co-cultured for 5 weeks. CXCL1 is also important in tumor nest formation in cSCC [210]; tumor nests are small clusters of tumor cells in the immediate vicinity of a tumor that are indicative of tumor cell migration and tumor growth.
CXCL1 also acts on non-cancerous cells, for example, causing the recruitment of MDSCs to cSCC tumors [212].
Thyroid Cancer
Thyroid cancer is a tumor arising from thyroid cells [215]. The most common subtype of this cancer is papillary thyroid cancer. In 2020 alone, more than 586 thousand new cases of thyroid cancer were diagnosed, accounting for 3.0% of all cancers [28]. However, this cancer has a good prognosis compared to other cancers. There were more than 43 thousand deaths caused by this cancer in 2020 alone, which accounted for 0.3% of deaths caused by all cancers [28]. The role of CXCL1 in tumorigenesis in this cancer is low [216] compared to the significance of CXCL8/IL-8.
CXCL1 expression is found in thyroid carcinoma cells [217]. At the same time, the level of expression varies depending on the cell line studied. In the 8505C cell line, CXCL1 expression is high, while in the HTh74 and SW1736 lines it is about 50-fold lower. Another source of CXCL1 in thyroid cancer tumors may be mast cells [218].
CXCL1 is involved in tumorigenesis in thyroid cancer. However, due to the higher expression of CXCR1 relative to CXCR2, CXCL8/IL-8 is more significant in this tumor's development and function [216]. CXCL1 can induce the proliferation and migration of cancer cells, although this effect is weaker than that of CXCL8/IL-8 [216]. At the same time, CXCL1 is not necessary for sphere formation, stemness, or self-renewal of cancer stem cells in thyroid tumors [216]; this role is performed by CXCL8/IL-8.
CXCL1 may be important in the formation or function of brain metastasis in thyroid cancer. Brain metastasis is rare in patients with thyroid cancer, estimated to affect about 0.15-1.3% of patients [219]. In brain metastatic papillary thyroid carcinoma tumors, there is higher expression of CXCL1 than in non-brain metastatic papillary thyroid carcinoma tumors or primary brain tumors [219], which indicates some association between this chemokine and brain metastasis in thyroid cancer.
Conclusions
The role of CXCL1 in tumors has been thoroughly investigated, as demonstrated by the numerous available experimental studies. However, there has been no comprehensive overview summarizing the complete understanding of CXCL1 in tumor processes. This gap is addressed by our series of reviews on CXCL1, which aims to make it easier to grasp the significance of CXCL1 in tumor processes. Furthermore, this series of works has the potential to stimulate interest in research on the role of CXCL1 in tumor processes.
Figure 1. CXCL1 in bladder cancer. In a bladder cancer tumor, CXCL1 is generated by cancer cells, tumor-associated macrophages (TAM), and cancer-associated fibroblasts (CAF). Consequently, the levels of this chemokine are elevated in bladder cancer compared to healthy tissue. Notably, CXCL1 is released from the tumor, leading to its presence in the urine of patients with bladder cancer, offering a diagnostic marker for this particular cancer type. CXCL1 plays a crucial role in recruiting neutrophils and myeloid-derived suppressor cells (MDSCs) into the tumor microenvironment while also inducing angiogenesis. Additionally, CXCL1 exerts its influence on cancer cells, TAMs, and fibroblasts. It promotes the proliferation and migration of cancer cells by inducing epithelial-mesenchymal transition (EMT). Through its interaction with TAMs, CXCL1 enhances the M2 polarization of these cells. Furthermore, CXCL1 acts on fibroblasts, driving their transformation into cancer-associated fibroblasts (CAF).
Figure 2. CXCL1 in glioblastoma. In glioblastoma tumors, CXCL1 originates from cancer cells, exerting a significant impact on their behavior. It precipitates heightened proliferation and promotes the expression of MMP2, fostering cancer cell migration. Additionally, CXCL1 contributes to elevated PD-L1 expression, facilitating immune evasion by glioblastoma cells. The effects extend to glioblastoma cancer stem cells, where CXCL1 induces increased proliferation and self-renewal. Beyond its direct influence on cancer cells, CXCL1 extends its reach to non-cancerous cells within the glioblastoma tumor microenvironment. Notably, it induces the recruitment of tumor-associated macrophages (TAM) and myeloid-derived suppressor cells (MDSC), triggering the production of S100A9. This molecule confers a pro-survival advantage to tumor cells, leading to resistance against
Figure 3. CXCL1 in lung cancer. The primary sources of CXCL1 within lung cancer tumors are cancer cells and cancer-associated fibroblasts (CAF). Notably, CAFs exhibit heightened production of CXCL1 under the influence of cancer cells. Once generated, CXCL1 exerts its effects on cancer cells, promoting their proliferation and inducing migration. Furthermore, CXCL1 plays a crucial role in lung cancer angiogenesis and the recruitment of neutrophils into the tumor microenvironment. Additionally, it facilitates the recruitment of regulatory T cells (Treg) to pleural effusion and granulocytic myeloid-derived suppressor cells (G-MDSCs) to the lymph node. Within the lymph node, G-MDSCs contribute to lymph node metastasis by producing transforming growth factor β1 (TGF-β1).
Table 1. Impact of CXCL1 expression level on survival of bladder cancer patients.
Table 2. Impact of CXCL1 expression level on survival of glioma patients.
Table 3. Impact of CXCL1 expression level on survival of patients with AML.
Table 4. Impact of CXCL1 expression level on survival of patients with lung cancer.
Table 5. Impact of CXCL1 expression level on survival of patients with renal cancer.
Table 6. Impact of CXCL1 expression level on survival of patients with malignant melanoma.
The role of surgery on primary site in metastatic upper urinary tract urothelial carcinoma and a nomogram for predicting the survival of patients with metastatic upper urinary tract urothelial carcinoma
Abstract Metastatic upper urinary tract urothelial carcinoma (mUTUC) is a relatively rare urothelial carcinoma, and little attention has been given to it. Our study established a nomogram, by analyzing the prognostic factors of mUTUC, to predict the survival of patients, and revealed the role of surgery at the primary tumor site. We extracted our data (2010–2016) from the Surveillance, Epidemiology, and End Results (SEER) database, and 628 patients with distant metastasis were identified. Propensity score matching (PSM) was used to balance the clinical variable bias in a 1:1 ratio. After PSM, we enrolled 502 patients in our study cohort. Univariate and multivariate Cox regression analyses and Kaplan–Meier curves showed that T stage, N stage, hepatic metastasis, surgery, and chemotherapy were prognostic factors for mUTUC before and after PSM. Based on these findings, a nomogram was constructed to predict the 12-month survival of patients with distant metastasis. Subgroup analyses of T stage, N stage, and different metastatic sites demonstrated that the survival of patients with T1/T2, N0/N1/N2/N3, metastasis including liver, and metastasis including bone could be improved by a combination of surgery and chemotherapy, while for patients with T3/T4/TX, NX, metastasis including lung, and metastasis including distant lymph nodes, chemotherapy alone was the better choice to improve their overall survival. Radiotherapy was shown to be useful for patients with N1/N2/N3 stage. We have provided more precise treatment strategies for stage IV patients. Our research affirms the role of surgery on the primary site in UTUC patients with distant metastasis and the significance of classifying patients into subgroups by integrating variables including T stage, N stage, and different metastatic sites to select the optimal treatment method.
| INTRODUCTION
Urothelial carcinoma (UC) is derived from the lower (bladder and urethra) and upper (renal pelvis and ureter) urinary tracts. 1 Tumors of the ureter and renal pelvis are usually grouped together as upper urinary tract urothelial carcinoma (UTUC). It is an aggressive tumor characterized by invasive growth and a high incidence of variant histology. As the most common tumor of the urinary system, bladder tumors account for 90%-95% of UCs, while UTUCs are uncommon and account for only 5%-10% of UCs. 2 Metastatic UTUC (mUTUC) is even rarer, accounting for only 10% of UTUCs, and its diagnosis implies a median survival of only 6 months. Moreover, there has been a prominent increase in the incidence of metastatic UTUC from 0.1 to 0.4 per 100,000 person-years over the past 30 years. 3 For patients with high-risk nonmetastatic UTUC, the standard treatment is open radical nephroureterectomy (RNU) with bladder cuff excision, regardless of tumor location. 4 However, for patients with metastatic UTUC (mUTUC), RNU is recommended for palliative treatment in the current guidelines, and platinum-based combination chemotherapy, especially using cisplatin, has long had a central role in the management of mUTUC. However, not all patients can receive chemotherapy because of impaired renal function and chemotherapy-related toxicity, particularly the nephrotoxicity of platinum derivatives, which may also significantly reduce the survival of patients. 5 This means that managing patients with mUTUC can pose a clinical dilemma. In recent years, some observational studies have suggested that surgery on the primary site may have an impact on mUTUC outcomes, but the conclusions have been limited by small patient numbers and inhomogeneity of the study populations with respect to patient selection, pathologic evaluation, and treatment. 4,6,7,8,9,10,11,12,13 Besides, very few studies have developed a model that can predict the prognosis of patients with metastatic UTUC. Consequently, it is necessary to clarify the role of surgery in patients with mUTUC and determine the independent prognostic factors, which were then used to establish a model to predict the survival of the patients (at 4, 8, and 12 months). Finally, through further stratified analysis, better treatment strategies for stage IV patients with different T/N stages and patients with different metastatic sites were revealed.
| Study cohort
We carried out a retrospective study using data from the Surveillance, Epidemiology, and End Results (SEER) database. Our inclusion criteria were as follows: (1) patients were diagnosed with neoplasms of the renal pelvis and ureter between 2010 and 2016, and (2) carcinoma of the renal pelvis and ureter was their first primary malignancy. Patients were excluded if information about age at diagnosis, race, sex, histologic type, grade, T stage, N stage, M stage, surgery on the primary site, surgery for regional lymph nodes, radiotherapy, chemotherapy, the scope of regional lymph node surgery, metastatic information, survival months, or current status was unavailable. It is worth mentioning that the surgery group included patients with surgery at the primary site for codes 30-80, including partial or subtotal nephrectomy (kidney or renal pelvis) or partial ureterectomy (ureter) (e.g., segmental resection, wedge resection), complete/total/simple nephrectomy (kidney parenchyma) or nephroureterectomy (including the bladder cuff for the renal pelvis or ureter), radical nephrectomy (including removal of a portion of the vena cava, the adrenal gland(s), Gerota's fascia, perinephric fat, or partial/total ureter), and any nephrectomy in continuity with the resection of other organ(s) (colon, bladder).
| Variable definitions and study endpoints
The variables extracted from the SEER database were age at diagnosis, race, sex, pure or variant histology, grade, the AJCC Staging Manual tumor stage (7th edition) including TX/T0/Ta/Tis/T1/T2/T3/T4, NX/N0/N1/N2/N3, and M0/M1, metastatic information, and treatment modalities including surgery on the primary site, surgery on the regional lymph nodes, radiotherapy, chemotherapy, and the scope of regional lymph node surgery. Metastatic information included the distant metastatic sites identified at the time of diagnosis and the number of metastatic sites. The study endpoint was overall survival (OS). According to the European Association of Urology Guidelines on Upper Urinary Tract Urothelial Cell Carcinoma (2020 Update), upper urinary tract tumors with variant histology (UTVH) are one of the high-risk factors. 6 Consequently, the patients in our study were divided into two groups by their histology: one group was UTVH, including squamous cell carcinoma (SCC), adenocarcinoma, neuroendocrine carcinoma, and other kinds of UTVH, 15 and the other was pure upper urinary tract urothelial cell carcinoma (PUC).
| Statistical analysis
We used Kaplan-Meier methods and log-rank tests to compare survival times. Univariate and multivariate Cox regression analyses identified the independent prognostic variables related to OS. Before the univariate and multivariate Cox regression analyses, a proportional hazards (PH) assumption test was performed to filter the variables that could be included in the regression model. We used a propensity score matching (PSM) method with a 1:1 ratio, implemented with the 'MatchIt' R package, to reduce bias. The chi-squared test and Fisher's exact test were used to compare categorical variables. A p value <0.05 was considered statistically significant in all analyses. We used R version 4.0.3, IBM SPSS Statistics software version 24, and GraphPad Prism version 8 to perform all statistical analyses.
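The paper names its tools (the 'MatchIt' R package for PSM, plus standard survival methods) but does not publish code. The following is a minimal R sketch of such a pipeline, not the authors' actual script; the data frame and column names (d, surgery, os_months, status, and the matching covariates) are hypothetical.
```r
library(MatchIt)   # propensity score matching
library(survival)  # Cox regression, Kaplan-Meier, log-rank

# d: hypothetical data frame, one row per patient; surgery is the
# treatment indicator (1 = surgery on primary site), os_months the
# follow-up time, status the death indicator (1 = dead).
# 1:1 nearest-neighbor matching on the propensity score.
m <- matchit(surgery ~ age + sex + grade + t_stage + n_stage + liver_met,
             data = d, method = "nearest", ratio = 1)
md <- match.data(m)  # matched cohort used for subsequent analyses

# Multivariate Cox model on the matched cohort; exponentiated
# coefficients are the hazard ratios reported in such studies.
fit <- coxph(Surv(os_months, status) ~ surgery + chemotherapy +
               t_stage + n_stage + liver_met, data = md)
summary(fit)

# Proportional hazards (PH) assumption check; a significant p-value
# for a covariate flags a PH violation for that variable.
cox.zph(fit)
```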
| Patient characteristics and a poor prognosis predicted by distant metastasis
We enrolled 6724 patients diagnosed with upper urinary tract urothelial carcinoma. Among the final cohort, 628 (9.34%) were recorded as having distant metastasis at the time of diagnosis. The demographic and clinicopathological characteristics of the metastatic and nonmetastatic groups are shown in Table S1. Compared to the nonmetastatic group, the metastatic group tended to have a higher T stage (T4/TX), a higher N stage (N1, N2, N3, NX), and a higher grade (poorly differentiated; Grade III). From the perspective of treatment, patients with distant metastasis had a higher probability of receiving chemotherapy and radiotherapy instead of surgery at the primary site. Additionally, they were less likely to undergo surgery of the regional lymph nodes, favoring biopsy, in comparison with those without distant metastasis.
Furthermore, univariate and multivariate Cox regression analyses showed that distant metastasis was an independent prognostic factor for OS (Table S2). The K-M survival analysis also showed that the survival of patients with distant metastasis was much worse than that of patients without distant metastasis (p < 0.0001, Figure 1). Then, we performed the univariate and multivariate Cox regression analyses on the metastatic group, which revealed that T stage, metastasis including liver, surgery, and chemotherapy were independent prognostic factors for mUTUC (Table S3).
| The sites of distant metastasis
Lung metastasis was the most common site (264, 42.04%), followed by bone (203, 32.32%), liver (195, 31.05%), distant lymph nodes (159, 25.32%), and brain (13, 0.16%); the brain was not included in this study because the sample was too small (Figure 2). For the remaining four sites, the Kaplan-Meier curves demonstrated that the sites of distant metastasis were associated with the prognosis (p = 0.0136, Figure S1A). Patients with liver metastasis had a worse prognosis than those without liver metastasis (p < 0.0001, Figure 3), but no detectable difference was found for the other three sites (Figure S1B-D). Kaplan-Meier analysis was also performed on metastatic patients with only one site. The prognosis of liver-only metastasis was worse than that of other one-site metastases (p < 0.0001, Figure S1E), and no detectable difference was found among the other three sites (Figure S1F-H). It follows that liver metastasis was closely correlated with a worse OS.
Through Kaplan-Meier curves, we found that the OS of patients varied with the number of metastatic sites (p = 0.0005, Figure 4A). We then further analyzed patients with different numbers of metastatic sites. In all patients with distant metastasis, the OS with a single metastatic site was not significantly different from that with two metastatic sites (Figure S2A), but it was better than that with three or four metastatic sites (p = 0.0068, p = 0.0443, Figure S2B,C). Moreover, there was no difference in prognosis between those with three and four metastatic sites (Figure S2D). Our K-M survival curves confirmed that the prognosis of patients with more than two sites of metastasis was much worse than that of those with one or two sites (p = 0.0032, Figure 4B).
F I G U R E 1 Kaplan-Meier survival curves for upper urinary tract urothelial carcinoma with and without distant metastasis (p < 0.0001)
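Each of the group comparisons above reduces to a Kaplan-Meier fit plus a log-rank test. A hedged sketch, reusing the hypothetical data frame from the previous example and assuming an n_met_sites column (number of metastatic sites, 1-4):
```r
library(survival)

# Overall comparison across 1-4 metastatic sites (Figure 4A analogue).
km <- survfit(Surv(os_months, status) ~ n_met_sites, data = d)
plot(km, col = 1:4, xlab = "Months", ylab = "Overall survival")
survdiff(Surv(os_months, status) ~ n_met_sites, data = d)  # log-rank

# Pairwise comparison, e.g. one vs. three metastatic sites
# (Figure S2B analogue).
sub <- subset(d, n_met_sites %in% c(1, 3))
survdiff(Surv(os_months, status) ~ n_met_sites, data = sub)
```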
| The role of surgery at the primary site in mUTUC after PSM and the nomogram established from the prognostic factors
To more accurately evaluate the effect of surgery at the primary site in metastatic cancers of the renal pelvis and ureter, PSM was used to unify the backgrounds of the surgery group and non-surgery group in a 1:1 ratio, and 502 patients were eventually enrolled in our study. After PSM, the differences in age at diagnosis (years), metastasis including bone, metastasis including liver, and the number of metastatic sites were balanced, while the differences in grade, T stage, and the scope of regional lymph node surgery were still significant. The clinicopathological features of the two groups before and after PSM are shown in Table 1.
Univariate and multivariate Cox regression analyses suggested that OS was associated with T stage, N stage, metastasis including liver, and treatment patterns (surgery and chemotherapy) after PSM (Table 2). Based on these five independent prognostic factors, a nomogram was established to predict the 12-month survival of patients with distant metastasis (Figure 5). The AUC of this nomogram for predicting 4-, 8-, and 12-month OS was 0.798, 0.775, and 0.752, respectively (Figure 6A-C). We also verified the accuracy of this model with calibration plots (Figure 7A-C).
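Such Cox-based nomograms and their calibration plots are commonly built in R with the rms package. A minimal sketch, mirroring the five prognostic factors; the column names, data frame, and exact settings are assumptions, not the authors' code:
```r
library(rms)
library(survival)

# md: hypothetical matched data frame from the PSM step.
dd <- datadist(md); options(datadist = "dd")

# Refit the Cox model with rms::cph so nomogram()/calibrate() work;
# time.inc = 12 sets the prediction horizon (months).
f <- cph(Surv(os_months, status) ~ t_stage + n_stage + liver_met +
           surgery + chemotherapy,
         data = md, x = TRUE, y = TRUE, surv = TRUE, time.inc = 12)

# Map the model's linear predictor to a 12-month survival probability.
surv_fun <- Survival(f)
nom <- nomogram(f, fun = function(lp) surv_fun(12, lp),
                funlabel = "12-month OS probability")
plot(nom)

# Bootstrap calibration at 12 months: predicted vs. observed survival.
cal <- calibrate(f, u = 12, cmethod = "KM", m = 100, B = 200)
plot(cal)
```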
| Stratified OS analysis of T stage after PSM.
Because the sample sizes of T1 and T2 were both relatively small and the Kaplan-Meier analysis suggested no difference in OS between them (Figure S3), we combined T1 and T2 into one group to perform univariate and multivariate Cox analyses, which showed that surgery, chemotherapy, and metastasis including liver were independent prognostic features in T1/2 stage patients (Table S4). Combining surgery with chemotherapy to treat metastatic cancers significantly improved OS compared with the other therapy modes (p = 0.0430, Figure S4). In the T3 and TX groups, patients with metastasis including liver, or those without chemotherapy, had a much worse prognosis than those without liver metastasis or those undergoing chemotherapy (Tables S5 and S7), while chemotherapy was the only method that benefited patients with T4 disease (Table S6). As described above, unlike the T1/2 stage group, patients with higher T stages could only benefit from chemotherapy rather than from the combination of surgery and chemotherapy.
| Stratified OS analysis of N stage after PSM.
Univariate and multivariate Cox analyses showed that surgery, chemotherapy, and histological type were independent prognostic factors in N0 stage patients (Table S8). Furthermore, Kaplan-Meier survival analysis demonstrated that the combination of surgery and chemotherapy to treat metastatic cancers could improve the prognosis significantly compared with other therapy modes (p < 0.0001, Figure S5). In addition, in terms of histological types, the OS of UTVH decreased distinctly compared with that of PUC (p = 0.0027, Figure S6). In the N1/N2/N3 group, T stage, metastasis including liver, and therapy modes (surgery, chemotherapy, and radiotherapy) were independent prognostic factors for these patients (Table S9). Kaplan-Meier curves revealed that radiotherapy could improve the prognosis of these patients (p = 0.0196, Figure S7), and in terms of surgery and chemotherapy, their combination suggested a better prognosis compared to surgery alone, chemotherapy alone, and no therapy (p < 0.0001, Figure S8). According to the comprehensive K-M analysis of these three methods, the prognosis of patients given a combination of surgery, chemotherapy, and radiotherapy was better than that with surgery alone, chemotherapy alone, or radiotherapy alone (p = 0.0079, Figure S9). For patients with NX, the only independent prognostic factor to show a benefit was chemotherapy (Table S10).
| Stratified OS analysis of different metastatic sites after PSM.
First, we performed univariate and multivariate Cox analyses on the liver metastasis group, which indicated that the prognostic indicators for this group were surgery and chemotherapy (Table S11). The Kaplan-Meier curves further revealed that the combination of surgery and chemotherapy was beneficial to the OS of patients with liver metastasis (p < 0.0001, Figure S10). In patients with bone metastasis, surgery, chemotherapy, and the number of metastatic sites influenced OS (Table S12). Similar to liver metastasis, surgery combined with chemotherapy was the best choice for these patients (p < 0.0001, Figure S11). For the metastasis including lung group, chemotherapy and liver metastasis were clearly related to OS (Table S13), and for the metastasis including distant lymph node group, chemotherapy, T stage, and metastasis including bone were related to OS (Table S14).
| Survival analysis of treatment modalities for stage IV patients with different T/N stages.
According to the AJCC Staging Manual tumor stage (7th edition), stage IV refers to any T/any N/M1, which means that all of the patients with mUTUC were in this range. Therefore, we grouped the stage IV patients according to the previous results and generated Kaplan-Meier curves for the new subgroup combinations. Because the sample size of patients who underwent radiotherapy in each subgroup after stratification was too small, the treatment modalities considered included only surgery and chemotherapy. Since fewer than two patients with each of the stagings T1/2/NX, TX/N1/2/3, and TX/N0 underwent surgery, K-M analysis could not be performed for these groups. The treatment strategies of stage IV patients with different T/N stages are shown in Table 3. The survival analyses for metastatic patients with T3/N0, T3/N1/2/3, T4/N0, and T4/N1/2/3 are shown in Figures S12 (p = 0.0185), S13, S14, and S15, respectively.
| DISCUSSION
Metastatic upper urinary tract urothelial carcinoma (mUTUC) accounts for only 10% of UTUCs, but its incidence has been increasing in the past 30 years. 3 As mentioned in a literature review, the role of metastasis in predicting the outcome of urological cancers, such as prostate cancer, 16 bladder cancer, 14,17 and kidney cancer, 18 has been increasingly noted. Therefore, it is necessary to investigate the independent prognostic factors affecting metastatic UTUC and to develop models to predict the OS and select individualized treatment strategies accordingly.
T A B L E 3 Treatment strategies of stage IV patients with different T/N stages
T stage | N stage | Suggested treatment | Evidence
T3 | NX | Chemotherapy | Table S5/Table S10
T4 | N0/1/2/3 | No difference between surgery + chemotherapy and chemotherapy | Figure S14/Figure S15
T4 | NX | Chemotherapy | Table S6/Table S10
TX | N0/1/2/3 | - | -
TX | NX | Chemotherapy | Table S7/Table S10
Many studies have shown that distant metastasis is associated with much worse OS in UTUC and that patients with distant metastasis have a higher probability of receiving chemotherapy instead of surgery at the primary site, which was revealed by our results as well. 6,11,12,13,16,17,18 In terms of the distribution of distant metastasis sites, the study by Tanaka N. et al. showed that the predominant site of distant metastasis was the lung, which was more common than the liver and bone. 19 The results of that study are consistent with ours. Moreover, our study demonstrated that as long as liver metastasis occurred, regardless of whether there were simultaneous metastases to other sites, the OS was very poor. Dong F. et al. also observed a similar phenomenon, in which liver metastases predicted a worse OS independent of the number of metastases, in their studies of metastatic bladder cancer and metastatic UTUC. 10,17 This may be because liver metastases are more likely to cause liver failure, making it difficult for patients to tolerate chemotherapy-related toxicity and other treatments. Consequently, remaining alert to the appearance of liver metastases through regular reviews or follow-up, and determining whether surgical resection of metastases at distant sites is beneficial, are subjects we need to investigate further.
According to the univariate and multivariate Cox regression analyses after PSM, we found that T stage, N stage, hepatic metastasis, surgery, and chemotherapy were independent prognostic factors for mUTUC. The studies of both Lughezzani G. et al. and Margulis V. et al. demonstrated that T stage was indeed a significant prognostic factor related to oncologic outcomes. 4,20 A higher T stage predicts a worse OS in patients with metastatic UTUC, and patients with TX stage had the worst prognosis. In addition, Burger M. et al. confirmed that N stage was also related to OS in locally advanced UTUC. 21 Our findings agree that for patients with distant metastasis, N stage was also related to OS. Metastatic patients with N0 stage disease have a better prognosis than N1/N2/N3 and NX patients, while NX is a sign of the worst prognosis.
For patients with metastatic UTUC (mUTUC), chemotherapy has been the optimal choice for a long time, and RNU is recommended for palliative treatment, with the aim of controlling the symptoms of the disease. However, our analysis revealed that both surgery and chemotherapy are treatment methods that can significantly improve the prognosis of patients with mUTUC. It is noteworthy that some studies suggested these benefits may be limited to those with only one metastatic site, which was not confirmed by our study. 12,22 Further analysis of subgroups of T stage, N stage, and different metastatic sites revealed that the survival of patients with T1/T2, N0/N1/N2/N3, metastasis including liver, and metastasis including bone could be improved by a combination of surgery and chemotherapy, while for patients with T3/T4/TX, NX, metastasis including lung, and metastasis including distant lymph nodes, chemotherapy alone was the better choice to improve their OS. This result suggests that surgery could have an important role in the treatment strategies of patients with many types of distant metastases. 11 Radiotherapy was shown to be useful for patients with N1/N2/N3 disease; to our knowledge, this is the first study to suggest a role for radiotherapy in the treatment of UTUC, but more samples and studies are needed to confirm this conclusion. Stage IV refers to any T/any N/M1 according to the AJCC Staging Manual tumor stage (7th edition). Therefore, a more precise treatment strategy was established for stage IV patients based on our stratified analysis of T/N stage. For patients with T1/2&N0/1/2/3 and T3&N0, the prognosis could be significantly improved by the combination of surgery and chemotherapy, while for patients with T3/4/X&NX, the prognosis could only be improved by chemotherapy alone. We hope this will provide more personalized options for patients with mUTUC.
Although this research was performed rigorously, it still has many limitations. First, we cannot know the sequence of chemotherapy and surgery because this information is not recorded in the SEER database. Some studies have shown that neoadjuvant chemotherapy before radical nephroureterectomy might provide better survival outcomes for patients with locally advanced UTUC. 23,24 Second, the SEER database lacks other important information about smoking status 25 and a history of using aristolochic acid, which are both closely related to urothelial carcinomas. 26 Third, our experiment did not eliminate all variable bias between the surgery and non-surgery groups after PSM.
| CONCLUSION
Our SEER database analysis was a retrospective study investigating the influence of surgery on the primary site in metastatic renal pelvis and ureter cancers. Distant metastasis is a factor suggesting a poor OS in renal pelvis and ureter cancer. For patients with mUTUC, there were five independent prognostic factors: T stage, N stage, hepatic metastasis, surgery, and chemotherapy. A nomogram was established to predict the 4-, 8-, and 12-month survival of patients with distant metastasis. Although chemotherapy alone can improve the OS of patients with distant metastasis, it was less effective than combining surgery and chemotherapy for patients with T1/T2, N0/N1/N2/N3 stage, and metastasis including the liver and bone. However, for patients with T3/T4/TX, NX, and metastasis including lung and distant lymph nodes, chemotherapy alone was still the best choice to improve the prognosis. It is worth mentioning that radiotherapy was shown to be useful for patients with N1/N2/N3 disease. Moreover, more precise treatment strategies were provided for stage IV patients with different T/N stages.
Family and the Risky Behaviors of High School Students
Background: Family plays an important role in helping adolescents acquire skills or strengthen their characters. Objectives: We aimed to evaluate the influences of family factors, risky and protective, on adolescent health-risk behavior (HRB). Patients and Methods: In this cross-sectional study, students of high schools in Kerman, Iran at all levels participated, from November 2011 to December 2011. The research sample included 1024 students (588 females and 436 males) aged 15 to 19 years. The CTC (Communities That Care Youth Survey) questionnaire was used to collect the profile of the students' risky behaviors. A stratified cluster sampling method was used to collect the data. Results: Using logistic regression, 7 variables were retained in the model; 4 of them were risk factors and 3 were protective factors. The risk factors were age (linear effect, ORa = 1.20, P = 0.001), boys versus girls (ORa = 2.33, P = 0.001), family history of antisocial behavior (ORa = 2.29, P = 0.001), and parental attitudes favorable toward antisocial behavior (ORa = 1.72, P = 0.03). The protective factors were family religiosity (ORa = 0.65, P = 0.001), father's education (linear effect, ORa = 0.48, P = 0.001), and family attachment (ORa = 0.78, P = 0.001). Conclusions: Our findings showed that family has a very significant role in protecting students against risky behaviors. The education level of the father, family religiosity, and family attachment were the most important factors.
Background
Adolescents are valuable resources in human societies who encounter risk factors because of their age and developmental features (1). Sometimes, adolescent risk factors may persist into adulthood and become harmful for them and others (2). Risky and protective factors can affect children at different stages of their lives. At each stage, risky events occur that can be prevented through intervention measures. Early-childhood risks, like aggressive behavior, can be changed or prevented by family, school, and community. These interventions focus on helping children develop appropriate and positive behaviors. If negative behaviors are not addressed properly, they can lead to worse situations such as academic failure and social difficulties, which put children at further risk, such as drug abuse (3). When a child enters adolescence, his or her family communications change drastically and attain a new form (4). Adolescents' ongoing attempts to achieve autonomy can result in increased parent-child conflict at the beginning of this stage and negative feelings during this period.
These conflicts mainly happen because of different expectations of suitable behavior from both parents and children, as well as conflicting understandings of responsibility, independence, and duties (5). The family is the fundamental factor in supporting adolescents emotionally and economically and in providing them with an identity and a feeling of belonging (6). Any kind of positive or negative change in the family has a direct effect on the larger human society. Family stability or instability directly affects the society. Thus, in societies where family values are unstable, moral values are considered irrelevant. Although adolescents are susceptible to risky behaviors, there are factors such as religious activities, a good relationship with parents, and parental support that might buffer against the adolescent's tendency towards high-risk behaviors (7,8).
The concept of health-risk behavior can be defined as any activity undertaken by people with a frequency or intensity that increases their risk of disease or injury, such as substance abuse, risky driving, violence or suicidal tendencies, and antisocial behavior (9,10). There is evidence that health-risk behaviors tend to cluster together, with similar risk factors underlying many risk behaviors (11,12). Inquiries into risk behaviors and protective factors among adolescents are prominent in the social, behavioral, and health sciences, and include the study of particular risk factors (13). The significant role of the family and its environment in adolescents' tendency towards high-risk behaviors, and the increasing rate of this problem among Iranian adolescents, have led many scholars to focus on this important social issue (14,15).
Iran is an Islamic country that respects the family and is currently undergoing a transition towards a modern society. It has a special cultural condition, which emphasizes Islamic values. Because Iran is in transition from a traditional society to a modern and industrial one, damage to family roles and relations is an important problem (16). Moreover, in developing countries, the key role of the family in educating adolescents, and its effect on juvenile delinquency, seems more important than in western countries. Severe behavioral controls, which are imposed on adolescents by various organizations, also make the range and conditions of adolescent behaviors very different from those in western countries. On the other hand, the cultural issues of every society must be considered when discussing risky behaviors; the values and norms of every society shape the pattern of these behaviors (17). Olds' reviews revealed that social norms are the strongest factor in participation in risky behaviors (18). Although there are some studies in this field in Iran, most published papers explored only the frequency of risk behaviors among Iranian students. This paper, however, comprehensively presents the results of an analytical study exploring the relationship between family factors and the profile of students' behavior.
Objectives
We aimed to evaluate the family and its effects on risky behaviors among Iranian high school students in the southeast of Iran.
Participants
The present research is a cross-sectional study carried out among high school students in one of the main cities in the southeast of Iran (Kerman), with a population of more than 650 000, from November to December 2011. The research sample included 1024 students aged 15-19 years, representing all levels of high school (first to third grades and pre-university). Eligible schools included any high school in Kerman with students in the first to third grades and pre-university. Students who were permanently transferring from another city to a Kerman high school during the study period were not included in the research. In addition, high schools without all grades were excluded from our sample. Around 90% of schools were included in our sample. After receiving the permission of the Education Department's Counseling Center in Kerman, we selected our subjects using a stratified cluster sampling technique.
First, we classified high schools based on their gender, location (north, west, east, and south), and type, either governmental or private. Then, in proportion to size, we selected schools randomly, while students were selected from different grades within their classes. All participants were informed about the goals of the survey and received the guidelines and instructions to fill out the questionnaire. Participants signed written informed consent and then completed the questionnaires anonymously.
Instrument
We used "Communities That Care Youth Survey" to assess a broad set of risk and protective factors across the domains of family, school, community, peer, and individual as well as health-risk behavior outcomes. This questionnaire was prepared by Hawkins and Catalano (19,20). In this study, we only used a part of this questionnaire, which measured family domain. The questionnaire consisted of an index of problem behaviors, including 14 items (which measured their frequency during the previous months or year) such as smoking, aggression, fighting, weapons carrying, and suspension from school. An index of protective factors assessing by family rewards for prosocial involvement included 3 components. Assessment of family attachment had 3 components. Family religiosity was assessed by 4 components and family opportunities for prosocial involvement by 4 components. Risk factors consisted of poor family management with 8 components, family conflict with 3 components, family history of antisocial behavior with 7 components and parental attitudes favorable toward antisocial behavior with 4 components.
The validity of the questionnaire was ensured through 3 stages: scale translation, face validity, and content validity. For scale translation, we used the forward-backward translation procedure. The translation was then revised by four health education experts and panel members, who were asked to review each item and evaluate the appropriateness of the translated items for face validity; in other words, whether they would be understandable to the research target group. The content validity of the CTC questionnaire was investigated both quantitatively and qualitatively by the same experts. We asked the experts to evaluate the quality and quantity of each item of the CTC questionnaire. The necessity, relevancy, simplicity, and clarity of each item were assessed using a 5-point Likert-type scale. An open question was also asked to elicit the experts' opinions concerning each item. The content validity index (CVI) was computed on the basis of the simplicity, clarity, and relevancy of each item; a CVI score higher than 0.75 was considered reasonable. Content validity ratio (CVR) scores were calculated based on the necessity of each item; a CVR score equal to or higher than 0.59 was considered by the 10 experts to indicate good content validity. The mean CVI and CVR were 0.87 and 0.78, respectively, signifying good content validity for the CTC questionnaire.
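The CVR and CVI figures above can be reproduced along the following lines (a minimal sketch of ours; the ratings are hypothetical, and the formulas shown are the Lawshe-style ones commonly used for such expert panels):

```python
# Content validity ratio (CVR) and item-level content validity index (CVI)
# for a 10-expert panel. All ratings below are hypothetical examples.

def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), with n_e the number of experts
    rating the item as necessary/essential."""
    half = n_experts / 2
    return (n_essential - half) / half

def cvi(n_relevant: int, n_experts: int) -> float:
    """Item-level CVI: proportion of experts rating the item as relevant."""
    return n_relevant / n_experts

# Example: 9 of 10 experts rate an item as essential and relevant.
print(f"CVR = {cvr(9, 10):.2f}")  # 0.80, above the 0.59 cutoff used here
print(f"CVI = {cvi(9, 10):.2f}")  # 0.90, above the 0.75 cutoff used here
```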
Using the test-retest technique, 40 students (20 girls, 20 boys) responded twice, with a gap of 10 days between the two assessments. The consistency between their scores, computed by the Pearson correlation coefficient, was 0.75. Additionally, we computed the Cronbach α value for all participants after the data collection, which was 0.78. The questions of each risky behavior style were measured by 5 items rated on a 5-point Likert scale, ranging from never to more than 10 times in the last 30 days or the last year. The presence of floor and ceiling effects may influence the reliability, validity, and responsiveness of an instrument. In order to determine floor and ceiling effects, we calculated the percentage of students with very low and very high scores. The rates of the floor effect and ceiling effect were calculated for each scale in all questionnaires and were considered suitable when they were below 15%, because there is no consensus on how to define floor and ceiling effects mathematically.
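Both reliability figures reported above can be computed from raw scores roughly as follows (a sketch of ours with made-up placeholder data):

```python
# Test-retest reliability (Pearson r) and internal consistency
# (Cronbach's alpha). All numbers are made-up placeholders.
import numpy as np

# Test-retest: total scores of the same students at two time points.
scores_t1 = np.array([12, 18, 9, 22, 15, 11, 20, 14])
scores_t2 = np.array([13, 17, 10, 21, 16, 10, 19, 15])
r = np.corrcoef(scores_t1, scores_t2)[0, 1]
print(f"Pearson r = {r:.2f}")

# Cronbach's alpha: rows are students, columns are items of one scale.
items = np.array([[3, 4, 2, 3],
                  [1, 2, 1, 2],
                  [4, 4, 3, 4],
                  [2, 3, 2, 2]])
k = items.shape[1]
item_var_sum = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```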
Procedure
Students in the first to third grades and pre-university who were enrolled in high schools in Kerman were targeted during the study period. We collected consent forms from the students and their parents separately. Students who, together with their parents, provided written consent to participate were identified by the school manager. After checking with the schools, students were approached in their classes, but they answered the questions in a private environment, and their responses were collected without any identifiers. In order to assess the associations between HRB and family factors, we estimated the sample size using the formula for comparing two means. In this calculation, α and β were set at 5% and 10%, with a minimum effect size of 0.5 of the standard deviation and a design effect of 1.5. Based on these assumptions, the estimated sample size was 1050.
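The core term of such a two-means sample-size calculation can be sketched as follows (our own illustration of the standard formula; the published total of 1050 evidently also folds in adjustments beyond the design effect that are not detailed in the text):

```python
# Sample size for comparing two means: two-sided alpha = 0.05,
# power = 0.90 (beta = 0.10), minimum effect size d = 0.5 SD,
# inflated by a design effect of 1.5 for cluster sampling.
import math
from scipy.stats import norm

alpha, beta, d, deff = 0.05, 0.10, 0.5, 1.5
z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(1 - beta)        # ~1.28

n_per_group = 2 * (z_a + z_b) ** 2 / d ** 2   # ~85 per group
n_adjusted = math.ceil(n_per_group * deff)    # ~128 per group after deff
print(n_per_group, n_adjusted)
```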
Data Analyses
The data were computerized and analyzed using the Statistical Package for the Social Sciences (SPSS) version 18, and before data entry, all completed questionnaires were evaluated by the main investigator. Then, the distribution of the responses was assessed and the main variables were described. In the next step, students were divided into two groups based on their risky behaviors: a low-risk group, if the subject reported smoking or aggressive behaviors fewer than three times per month, and weapon carrying, fighting, or suspension from school fewer than three times per year; and a high-risk group, if the subject reported exposure to the above-mentioned items more than three times.
In this analysis, the main independent variables were age (in years), gender, grade (first to third grades and pre-university), and risk and protective factors in the family (in 8 subscales, each with a score between 0 and 4); the dependent variable was having risky behaviors. Using logistic regression, crude and adjusted ORs between having risky behaviors and the independent variables (sociodemographic variables, and risk and protective factors in the family) were computed. In the final multivariate model, only the variables significant in the crude models were entered.
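A minimal sketch of the crude and adjusted OR computation described above, using statsmodels in place of SPSS (the data frame and variable names are hypothetical placeholders):

```python
# Crude and adjusted odds ratios via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1024
df = pd.DataFrame({                      # hypothetical stand-in data
    "high_risk": rng.integers(0, 2, n),  # binary outcome
    "age": rng.integers(15, 20, n),
    "male": rng.integers(0, 2, n),
    "family_attachment": rng.integers(0, 5, n),
})

# Crude OR: one predictor at a time.
crude = smf.logit("high_risk ~ age", data=df).fit(disp=False)
print("ORc(age) =", np.exp(crude.params["age"]))

# Adjusted OR: all retained predictors in one multivariate model.
adj = smf.logit("high_risk ~ age + male + family_attachment",
                data=df).fit(disp=False)
print(np.exp(adj.params))  # adjusted ORs
```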
Results
A total of 1024 students between 15 and 19 years of age (57.4% females; mean age = 16.4, SD = 1.5 years) completed the questionnaires. The percentages (numbers) of students in the first to third grades and pre-university were 28% (287), 26.6% (272), 26.4% (270), and 19% (195), respectively (Table 1). The number of students who stated that they did not practice any risky behaviors was 443 (41.7%): 307 (52.3%) of girls and 136 (31.2%) of boys. Conversely, 13 (2.2%) of girls and 29 (6.7%) of boys experienced more than 6 instances of HRB (Table 2). A significant positive association was found between age and the frequency of HRB (crude odds ratio (ORc) = 1.23, P = 0.01; adjusted odds ratio (ORa) = 1.20, P = 0.001). The results of the logistic regression model showed that boys had more HRB (ORc = 2.40, P = 0.001; ORa = 2.33, P = 0.001). Although the association between grade level and HRB was significant in the univariate analysis (ORc = 1.16, P = 0.001), it was not significant in the multivariate model (ORa = 0.90, P = 0.42). The number of siblings in the family (ORc = 1.28, P = 0.001) had a significant positive association with HRB, but the association was not significant in the multivariate model (ORa = 0.98, P = 0.78). Mothers' education levels had a significant positive association only in the univariate analysis (ORc = 1.29, P = 0.001; ORa = 1.02, P = 0.58). On the other hand, fathers' education levels showed a negative association as a predictor of HRB in both models (ORc = 0.60, P = 0.01; ORa = 0.48, P = 0.001). Among the risk factors, "family history of antisocial behavior" had a very strong positive association with HRB in both models (ORc = 3.11, P = 0.04; ORa = 2.29, P = 0.001). Whereas "poor family management" showed a significant association with HRB only in the univariate model (ORc = 1.96, P = 0.01; ORa = 1.23, P = 0.12), "parental attitudes favorable toward antisocial behavior" had a positive association with HRB in both the univariate and multivariate models (ORc = 3.35, P = 0.001; ORa = 1.27, P = 0.003). "Family conflict" was another variable with a statistically significant positive association with HRB in the univariate analysis (ORc = 1.15, P = 0.001); however, in the multivariate model, the association was absent (ORa = 0.98, P = 0.83).
Discussion
The present results provide a broad picture of the effect of family risk and protective factors on adolescents' health-risk behaviors. We found that family attachment, father's education, and family religiosity were protective factors. On the other hand, age, male gender, a family history of risky behaviors, and parental attitudes favorable toward antisocial behavior were risk factors that could result in increased risky behaviors. In recent decades, dealing with the adolescent population has become an international concern, and this problem is important in Iran. According to the traditional system in Iran, the family plays an important role in training and guiding adolescents. The presence of various competing institutions, like schools, peers, the Internet, and satellite networks, which have potentially deep differences in terms of values and ideals, has changed the dynamics of the family and challenged family performance.
In studying the risk factors within families, it is better to pay attention to the combination of factors and relationships, and to take effective steps to prevent and treat them. The results of the present study reveal that as adolescents grow older, they become more involved in risky behaviors. Other studies have shown similar results (21,22). This variable can be considered a suitable indicator of adolescent high-risk behaviors. Therefore, initial preventive programs must begin at preadolescent ages, in the form of informative and warning programs, and must focus more on adolescents who are at higher risk. Another individual risk factor is the role of gender in high-risk behaviors; i.e., boys are at higher risk than girls. This can be attributed to cultural features; namely, cultural and educational conditions limit girls while allowing boys more freedom. Studies have taken sex differences into account and noted that different cultures treat girls and boys differently, which subsequently affects their socialization and various behaviors. Studies carried out by Huebner et al. and Kapungu et al. revealed similar results (23,24).
One unique contribution of the present study was identifying the most important family risk factor predicting adolescents' health-risk behaviors: a family history of risky behavior. Adolescents in families that excuse them for breaking the law are more likely to develop problems with risky behavior. Adolescents whose parents engage in risky behavior inside or outside the home are at greater risk of exhibiting risky behavior themselves. Adolescents whose parents abuse drugs have higher tendencies towards risky behaviors, because they watch their parents' behavior every day and, under the influence of observational learning, tend to select them as their models in life and act accordingly. Similar studies support the results obtained in this research (20,25). Parental attitudes favorable toward antisocial behavior were another risk factor. Parental attitudes do appear to be influential in their own right; for example, children whose parents behave aggressively or violently at home are more likely to become aggressive and violent adolescents (26,27). However, the independent significance of this risk factor may be most relevant to drug use. A number of US studies have linked 'parental modeling' of favorable attitudes towards the use of alcohol, tobacco, and illegal drugs at home to the chances of children becoming users and abusers (20).
One of the most protective elements in the family was attachment and intimacy between family members, especially parents. This factor was shown to have a significant effect on health-risk behaviors. In his research, Wisner (2004) concluded that poor attachment between family members, lack of parental empathy, and the absence of parents at home are predictors of high-risk behaviors in children (28). Conversely, a warm and intimate relationship between parents and children is the basis of emotional security in adolescents and results in strong bonds between parents and adolescents; this attachment improves self-esteem in adolescents, makes them spend most of their free time with their families, and thus reduces high-risk behaviors (29-31).
The second protective factor was the role of religious beliefs and religious practices within the family. One of the most important factors in reducing high-risk behaviors is religious beliefs and practices, together with encouraging adolescents to practice religion. Religious beliefs play an important role in maintaining health, especially mental health. Risky behaviors are strongly influenced by religious values and beliefs; for example, a study of 299 American adolescents showed that religious constraints and a sense of belonging to a Muslim heritage prevented adolescents from drinking alcohol (27,32). It has also been shown that religious beliefs prevent high-risk behaviors (33).
Surprisingly, we found a positive association between the level of the mother's education and risky behavior. This type of association was also reported by another study in Iran (34). However, most studies worldwide have shown negative associations between mothers' education levels and risky behaviors. Higher education is associated with a higher probability of having a job and being busy; working mothers spend less time on emotional support, continuous supervision, and encouraging and helping with school activities, and this might result in the development of riskier behaviors among their children (35). Similar to other findings, our results showed that the family has a key role in shaping students' behavior in Iran. A warm family with strong support and religious practices has a very significant role in training students. However, we should note that well-educated mothers might have less influence on the risky behaviors of their children, most probably due to the lack of time these mothers spend with them.
Limitation
This study could not clearly determine which behavioral factor would result in other behaviors. Moreover, these findings were obtained only from students who attended school; thus, school dropouts, students who had failed academically, those who could not enter high school, and those who studied at night schools were not included. Additionally, because of the sensitivity of some subjects, like smoking, students might underreport their behaviors, although, by using different techniques, we attempted to encourage students to respond with minimal barriers.
Human Subjects Approval Statement
Based on the study proposal, the Medical Research Ethics Committee of the University of Kerman approved the conduct of the survey among high school students in Kerman, and the informed consent forms were submitted to Kerman University of Medical Sciences. In this regard, two different written consent forms were obtained: the first involved permission to do the study; the second, from the Ministry of Education, covered participation in the study. After we had identified the classrooms in a school, enough parental permission forms were delivered to the principal for each selected student. The code and date of ethical approval were K/89/70-2011. We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
Product fixed points in ordered metric spaces
All product fixed point results in ordered metric spaces based on linear contractive conditions are but a vectorial form of the fixed point statement due to Nieto and Rodriguez-Lopez [Order, 22 (2005), 223-239], under the lines in Matkowski [Bull. Acad. Pol. Sci. (Ser. Sci. Math. Astronom. Phys.), 21 (1973), 323-324].
Introduction
Let (X, d; ≤) be a partially ordered metric space; and T : X → X be a selfmap of X, with
(a01) X(T, ≤) := {x ∈ X; x ≤ T x} is nonempty
(a02) T is increasing (x ≤ y implies T x ≤ T y).
We say that x ∈ X(T, ≤) is a Picard point (modulo (d, ≤; T)) if i) (T^n x; n ≥ 0) is d-convergent, ii) z := lim_n T^n x belongs to fix(T) (in the sense: z = T z). If this happens for each x ∈ X(T, ≤), then T is referred to as a Picard operator (modulo (d, ≤)); moreover, if these conditions hold for each x ∈ X, and iii) fix(T) is a singleton, then T is called a strong Picard operator (modulo (d, ≤)); cf. Rus [12, Ch 2, Sect 2.2]. Sufficient conditions for such properties are obtainable under metrical contractive requirements. Namely, call T (d, ≤; α)-contractive (where α > 0), if
(a03) d(T x, T y) ≤ α d(x, y), for all x, y ∈ X, x ≤ y.
Let (x_n; n ≥ 0) be a sequence in X; call it (≤)-ascending (descending), if x_n ≤ x_m (x_n ≥ x_m), provided n ≤ m. Further, let us say that u ∈ X is an upper (lower) bound of this sequence, when x_n ≤ u (x_n ≥ u), for all n; if such elements exist, we say that (x_n; n ≥ 0) is bounded above (below). Finally, call (≤) d-self-closed when the d-limit of each ascending sequence is an upper bound of it.
According to certain authors (cf. [8] and the references therein), these two results are the first extension of the Banach contraction mapping principle to the realm of (partially) ordered metric spaces. However, the assertion is not entirely true: some early statements of this type were obtained in 1986 by Turinici [13, Sect 2], in the context of ordered metrizable uniform spaces. Now, these fixed point results have found some useful applications to matrix and differential/integral equations theory; see the quoted papers for details. As a consequence, Theorem 1 was the subject of many extensions. Among these, we mention the coupled and tripled fixed point results in product ordered metric spaces, constructed under the lines in Bhaskar and Lakshmikantham [3]. It is our aim in the following to show that, for all such results based on "linear" contractive conditions, a reduction to Theorem 1 is possible; we refer to Section 4 and Section 5 for details. The basic tool is the concept of normal matrix due to Matkowski [6] (cf. Section 2). Further aspects will be delineated elsewhere.
Normal matrices
Let R^n denote the usual n-dimensional vector space, R^n_+ the standard positive cone in R^n, and (≤) the induced ordering. Also, let (R^0_+)^n denote the interior of R^n_+ and (<) the strict (irreflexive transitive) ordering induced by it, in the sense: x < y iff x_i < y_i, i ∈ {1, ..., n}. We shall indicate by L(R^n) the (linear) space of all (real) n × n matrices A = (a_ij) and by L_+(R^n) the positive cone of L(R^n) consisting of all matrices A = (a_ij) with a_ij ≥ 0, i, j ∈ {1, ..., n}. For each A ∈ L_+(R^n), let us put ν(A) = inf{λ ≥ 0; Az ≤ λz, for some z > 0}; and call A normal if ν(A) < 1; or, equivalently, when the system of inequalities (2.1): Az < z has a solution z = (ζ_1, ..., ζ_n) > 0. Concerning the problem of characterizing this class of matrices, the following result obtained by Matkowski [6] must be taken into consideration. Denote (for 1 ≤ i, j ≤ n) the iterated entries a^(k)_ij appearing in conditions (b02) and (b03).
Proof. Necessity. Assume that (2.1) has a solution z = (ζ_1, ..., ζ_n) > 0; that is, a^(1)_11, ..., a^(1)_nn > 0; hence, in particular, (b03) is fulfilled for i = 1. Further, let us multiply the first inequality of (2.2) by the factor a^(1)_i1/a^(1)_11 ≥ 0 and add it to the i-th relation of the same system, for i ∈ {2, ..., n}; if we take into account (b02) (for k = 1), we must have (by these conditions) a^(2)_22, ..., a^(2)_nn > 0; wherefrom (b03) is fulfilled for i ∈ {1, 2}. Now, if we multiply the second inequality of (2.4) by the factor a^(2)_i2/a^(2)_22 ≥ 0 and add it to the i-th relation of the same system, for i ∈ {3, ..., n}, one obtains that (b03) will be fulfilled for i ∈ {1, 2, 3}; and so on. Continuing in this way, it is clear that, after n steps, (b03) will be entirely satisfied.
A useful variant of Matkowski's condition (b03) may now be depicted as follows. Letting I denote the unit matrix in L(R^n), indicate by ∆_1, ..., ∆_n the successive "diagonal" minors of I − A; that is, ∆_i = the determinant of the upper-left i × i block of I − A, i ∈ {1, ..., n}. By the transformations we used in passing from (2.6) to (2.7), and from this to the next one, etc., one gets that each ∆_i is expressible through the iterated entries a^(k)_kk; hence (b03) is equivalent with
(b06) ∆_i > 0, for all i ∈ {1, ..., n}.
After Perov's terminology [9], a matrix A ∈ L_+(R^n) satisfying (b06) will be termed admissible (or, equivalently: an a-matrix). We therefore proved
Proposition 2. Over the subclass L_+(R^n), we have: normal ⇐⇒ admissible.
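For orientation, here is a minimal 2 × 2 illustration of the admissibility test; the example is ours and not taken from the cited sources. For
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in L_+(R^2): \qquad \Delta_1 = 1 - a, \qquad \Delta_2 = \det(I - A) = (1-a)(1-d) - bc, \]
so A is admissible iff a < 1 and (1-a)(1-d) > bc. Taking a = 1/2, b = 1/4, c = 0, d = 1/3 gives \(\Delta_1 = 1/2 > 0\) and \(\Delta_2 = (1/2)(2/3) = 1/3 > 0\); and indeed z = (1, 1) satisfies Az = (3/4, 1/3) < z, so A is normal, in agreement with Proposition 2.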
Having these precise, call A ∈ L(R^n) asymptotic provided A^p → 0 (in the matrix norm ‖·‖_*) as p → ∞; or, equivalently (see (2.11)), if it fulfills one of the properties listed there. The following simple result will be in effect for us: if A ∈ L_+(R^n) is asymptotic, then the series I + A + A^2 + ... converges in L(R^n). In such a case, the sum of this series is (I − A)^{-1}; hence, I − A is invertible in L(R^n) and its inverse belongs to L_+(R^n).
Proof. Let the matrix A be asymptotic. If x ∈ R^n satisfies (I − A)x = 0 then, by repeatedly applying A to the equivalent equality x = Ax, we get x = A^p x, for all p ∈ N; wherefrom x = 0 (if one takes the limit as p → ∞); proving that (I − A)^{-1} exists as an element of L(R^n). Moreover, in view of I − A^p = (I − A)(I + A + ... + A^{p-1}), p > 1, one gets (again by a limit process) I = (I − A)(I + A + A^2 + ...); which ends the argument.
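As a quick numerical check of this (ours, reusing the matrix from the previous example):
\[ A = \begin{pmatrix} 1/2 & 1/4 \\ 0 & 1/3 \end{pmatrix}, \qquad I - A = \begin{pmatrix} 1/2 & -1/4 \\ 0 & 2/3 \end{pmatrix}, \qquad (I - A)^{-1} = \begin{pmatrix} 2 & 3/4 \\ 0 & 3/2 \end{pmatrix}, \]
whose entries are all nonnegative, as asserted; summing the geometric series I + A + A^2 + ... entrywise reproduces the same inverse (for instance, the (1,1) entry is \(\sum_{p \ge 0} (1/2)^p = 2\)).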
As before, we may ask which relationships exist between this class of matrices and the preceding ones. To do this, the following renorming statement involving normal matrices will be useful.
We may now give an appropriate answer to the above posed problem.
Proposition 3. For each matrix of L + (R n ), we have normal ⇐⇒ asymptotic.
We cannot close these developments without giving another characterization of asymptotic (or normal) matrices in terms of the spectral radius; this fact, of marginal importance for the next section, is, however, sufficiently interesting by itself to be added here. Let A ∈ L(R^n) be a matrix. Under the natural immersion of R^n in C^n, let us call the number λ ∈ C an eigenvalue of A provided Az = λz, for some nonzero vector z ∈ C^n (called, in this case, an eigenvector of A); the class of all these numbers will be denoted σ(A) (the spectrum of A). Define ρ(A) = sup{|λ|; λ ∈ σ(A)} (the spectral radius of A).
Proposition 4. For each matrix of L_+(R^n), we have: asymptotic ⇐⇒ ρ(A) < 1.
Proof. Suppose A is asymptotic. For each eigenvalue λ ∈ σ(A), let z = z(λ) ∈ C^n be the corresponding eigenvector of A. We have Az = λz; and this gives A^p z = λ^p z, for all p ∈ N. By the choice of A plus z ≠ 0, we must have λ^p → 0 as p → ∞, which cannot happen unless |λ| < 1; hence ρ(A) < 1. Conversely, assume that the matrix A = (a_ij) in L_+(R^n) satisfies ρ(A) < 1; and put A^(ε) = (a_ij + ε). We have ρ(A^(ε)) < 1 when ε > 0 is small enough (one may follow a direct argument based on the obvious fact that ρ depends continuously on the entries of the matrix; we do not give further details). Now, as A^(ε) is a matrix over R^0_+ (in the sense: a^(ε)_ij > 0, i, j ∈ {1, ..., n}), we have, by the Perron-Frobenius theorem (see, e.g., Bushell [4]), that for a sufficiently small ε > 0, A^(ε) has a positive eigenvalue µ = µ(ε) > 0 (which, in view of ρ(A^(ε)) < 1, must satisfy µ < 1), as well as a corresponding eigenvector z = z(ε) > 0. But then, Az ≤ A^(ε)z = µz < z; hence, A is normal. This, along with Proposition 3, completes the argument.
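Continuing the same illustrative 2 × 2 example (ours): the matrix A above is upper triangular, so
\[ \sigma(A) = \{1/2, \; 1/3\}, \qquad \rho(A) = 1/2 < 1, \]
hence A is asymptotic (A^p → 0), in agreement with Proposition 3 and the spectral characterization just proved.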
Extension of Theorem 1
Let (X, d) be a metric space; and (≤) be a quasi-order (i.e.: reflexive and transitive relation) over X. For each x, y ∈ X, denote: x <> y iff either x ≤ y or y ≤ x (i.e.: x and y are comparable). This relation is reflexive and symmetric; but not in general transitive. Given x, y ∈ X, any subset {z_1, ..., z_k} (for k ≥ 2) in X with z_1 = x, z_k = y, and [z_i <> z_{i+1}, i ∈ {1, ..., k − 1}] will be referred to as a <>-chain between x and y; the class of all these will be denoted as C(x, y; <>). Let (∼) stand for the relation (over X): x ∼ y iff C(x, y; <>) is nonempty. Clearly, (∼) is reflexive and symmetric, because so is <>. Moreover, (∼) is transitive; hence, it is an equivalence over X. Call d (≤)-complete when each ascending d-Cauchy sequence is d-convergent. Finally, let T : X → X be a selfmap of X satisfying (a01)-(a03), with α < 1.
Theorem 2. Suppose that d is (≤)-complete and that
(c01) either T is (d, ≤)-continuous, or (≤) is d-self-closed.
Then, T is a Picard operator (modulo (d, ≤)). Moreover, if (in addition to (c01))
(c02) (∼) = X × X [C(x, y; <>) is nonempty, for each x, y ∈ X],
then, T is a strong Picard operator (modulo (d, ≤)).
Proof. I) Let x ∈ X(T, ≤) be arbitrary fixed; and put x_n = T^n x, n ∈ N. By (a02) and (a03), d(x_{n+1}, x_{n+2}) ≤ α d(x_n, x_{n+1}), for all n. This yields d(x_n, x_{n+1}) ≤ α^n d(x_0, x_1), for all n; so that, as the series Σ_n α^n converges, (x_n; n ≥ 0) is an ascending d-Cauchy sequence. Combining with the (≤)-completeness of d, it results that x_n → x^* (modulo d), for some x^* ∈ X. Now, if the first half of (c01) holds, we have x_{n+1} = T x_n → T x^* (modulo d); so that (as d is a metric), x^* ∈ fix(T). Suppose that the second half of (c01) is valid; note that, as a consequence, x_n ≤ x^*, for all n. By the contractive condition, we derive d(x_{n+1}, T x^*) ≤ α d(x_n, x^*), for all n; so that, by the obtained convergence property, x_{n+1} = T x_n → T x^* (modulo d); wherefrom (see above) x^* ∈ fix(T).
II) Take a, b ∈ X, a ≤ b. By the contractive condition, d(T^n a, T^n b) ≤ α^n d(a, b), for all n; whence lim_n d(T^n a, T^n b) = 0. From the properties of the metric, one gets lim_n d(T^n a, T^n b) = 0 if a <> b; as well as (by the definition of (∼)) lim_n d(T^n a, T^n b) = 0 if a ∼ b. This, along with (c02), gives the desired conclusion.
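The d-Cauchy property invoked in part I follows from the usual geometric-series estimate, spelled out here for completeness (a routine step, supplied by us): for all n ≥ 0 and p ≥ 1,
\[ d(x_n, x_{n+p}) \le \sum_{k=n}^{n+p-1} d(x_k, x_{k+1}) \le \Big( \sum_{k=n}^{n+p-1} \alpha^k \Big) d(x_0, x_1) \le \frac{\alpha^n}{1 - \alpha}\, d(x_0, x_1) \longrightarrow 0 \quad (n \to \infty). \]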
Vector linear contractions
Let X be an abstract set; and q ≥ 1 be a positive integer. In the following, the notion of R^q-valued metric on X will be used to designate any function ∆ : X^2 → R^q_+ supposed to be reflexive sufficient [∆(x, y) = 0 iff x = y], triangular [∆(x, z) ≤ ∆(x, y) + ∆(y, z), for all x, y, z ∈ X], and symmetric [∆(x, y) = ∆(y, x), for all x, y ∈ X]. In this case, the couple (X, ∆) will be termed an R^q-valued metric space. Fix in the following such an object; as well as the usual norm ‖·‖ := ‖·‖_1 over R^q. Note that, in such a case, the map
(d01) (d : X^2 → R_+): d(x, y) = ‖∆(x, y)‖, x, y ∈ X
is a (standard) metric on X. Let also (≼) be a quasi-ordering over X.
Define a ∆-convergence property over X as:
(4.1) x_n → x (modulo ∆) iff ∆(x_n, x) → 0 (in R^q) as n → ∞.
The set of all such x will be denoted lim_n(x_n); when it is nonempty (hence, a singleton), (x_n) will be termed ∆-convergent. Further, call (x_n) ∆-Cauchy provided ∆(x_i, x_j) → 0 as i, j → ∞. Clearly, each ∆-convergent sequence is ∆-Cauchy; but the converse is not in general valid. Note that, in terms of the associated metric d,
(4.2) (x_n) is ∆-convergent iff it is d-convergent; and (x_n) is ∆-Cauchy iff it is d-Cauchy.
Call ∆ (≼)-complete when each ascending ∆-Cauchy sequence is ∆-convergent. Likewise, call (≼) ∆-self-closed when the ∆-limit of each ascending sequence is an upper bound of it. By (4.1) and (4.2) we have the global properties
(4.4) ∆ is (≼)-complete iff d is (≼)-complete; and (≼) is ∆-self-closed iff it is d-self-closed.
Finally, take a selfmap T : X → X, according to
(d02) X(T, ≼) := {x ∈ X; x ≼ T x} is nonempty
(d03) T is increasing (x ≼ y implies T x ≼ T y).
We say that x ∈ X(T, ≼) is a Picard point (modulo (∆, ≼; T)) if j) (T^n x; n ≥ 0) is ∆-convergent, jj) z := lim_n(T^n x) belongs to fix(T). If this happens for each x ∈ X(T, ≼), then T is referred to as a Picard operator (modulo (∆, ≼)). Sufficient conditions for such properties are to be obtained under vectorial contractive requirements. Given A ∈ L_+(R^q), let us say that T is (∆, ≼; A)-contractive, provided
(d04) ∆(T x, T y) ≤ A ∆(x, y), for all x, y ∈ X, x ≼ y.
Further, let us say that T is (∆, ≼)-continuous when [(x_n) is ascending and x_n → x (modulo ∆)] imply T x_n → T x (modulo ∆). As before, in terms of the metric d associated via (d01), we have (by means of (4.1) and (4.2) above)
(4.5) T is (∆, ≼)-continuous iff T is (d, ≼)-continuous.
The following answer to the posed question is available.
In particular, when (≼) = X × X (the trivial quasi-order on X), the corresponding version of Theorem 3 is just the statement in Perov [9].
Product fixed points
Let {(X_i, d_i; ≤_i); 1 ≤ i ≤ q} be a system of quasi-ordered metric spaces. Denote by X the Cartesian product of the ambient sets {X_i; 1 ≤ i ≤ q}; and put, for x = (x_1, ..., x_q) and y = (y_1, ..., y_q) in X,
(e01) ∆(x, y) = (d_1(x_1, y_1), ..., d_q(x_q, y_q)),
(e02) x ≼ y iff x_i ≤_i y_i, i ∈ {1, ..., q}.
Clearly, ∆ is an R^q-valued metric on X; and (≼) acts as a quasi-ordering over the same. As a consequence of this, we may now introduce all conventions in Section 4. Note that, by the very definitions above, we have, for each sequence (x^n = (x^n_1, ..., x^n_q); n ≥ 0) in X and each point x = (x_1, ..., x_q) in X: x^n → x (modulo ∆) iff x^n_i → x_i (modulo d_i), for each i ∈ {1, ..., q}. We are now passing to our effective part. Let (T_i : X → X_i; 1 ≤ i ≤ q) be a system of maps; it generates an associated selfmap (of X), T x = (T_1 x, ..., T_q x), x ∈ X. Suppose that the system satisfies the componentwise analogues of (d02) and (d03); note that, as a consequence, (d02) and (d03) hold. For i ∈ {1, ..., q}, call T_i (∆, ≼)-continuous, when: [(x^n) is ascending and x^n → x (modulo ∆)] imply T_i x^n → T_i x (modulo d_i). The following implication is evident: if each T_i is (∆, ≼)-continuous, then so is T. Putting these together, we have (via Theorem 3 above):
Theorem 4. Suppose that T is (∆, ≼; A)-contractive for some normal A ∈ L_+(R^q), that each d_i is (≤_i)-complete, and that either each T_i is (∆, ≼)-continuous or (≼) is ∆-self-closed. Then, the associated selfmap T is a Picard one (modulo (∆, ≼)).
(II) By definition, any fixed point of the associated selfmap T will be referred to as a product fixed point of the original system (T_1, ..., T_q). To see its usefulness, it will suffice to note that, by an appropriate choice of our data, one gets (concrete) coupled and tripled fixed point results in the area, obtainable via "linear" type contractive conditions. The most elaborated one, due to Berinde and Borcut [2], will be discussed below.
If, in addition, either [F is continuous] or [both (≤) and (≥) are d-self-closed], then F has at least one tripled fixed point.
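To make the reduction concrete, here is a sketch of the simplest (coupled, q = 2) case, in the Bhaskar-Lakshmikantham setting; the computation is ours and only illustrative. Given a mixed-monotone F : X^2 → X with
\[ d(F(x, y), F(u, v)) \le \frac{k}{2}\, [\, d(x, u) + d(y, v) \,], \qquad x \ge u, \; y \le v, \; 0 \le k < 1, \]
take X_1 = X_2 = X, T_1(x, y) = F(x, y), T_2(x, y) = F(y, x), with the product order (x, y) ≼ (u, v) iff x ≤ u and y ≥ v. Then the associated selfmap T of X^2 is (∆, ≼; A)-contractive with
\[ A = \frac{k}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \Delta_1 = 1 - \frac{k}{2} > 0, \qquad \Delta_2 = \det(I - A) = 1 - k > 0, \]
so A is admissible (hence normal, with ρ(A) = k < 1), and the product fixed point machinery applies; a fixed point of T is precisely a coupled fixed point of F.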
Further aspects will be delineated elsewhere.
Flavor and CP violating physics from new supersymmetric thresholds
Treating the MSSM as an effective theory, we study the implications of having dimension five operators in the superpotential for flavor and CP-violating processes, exploiting the linear decoupling of observable effects with respect to the new threshold scale \Lambda. We show that the assumption of weak scale supersymmetry, when combined with the stringent limits on electric dipole moments and lepton flavor-violating processes, provides sensitivity to \Lambda as high as 10^7-10^9 GeV, while the next generation of experiments could directly probe the high-energy scales suggested by neutrino physics.
Weak-scale supersymmetry (SUSY) is a theoretical framework that helps to soften the so-called gauge hierarchy problem by removing the power-like ultraviolet sensitivity of the dimensionful parameters in the Higgs potential. It also has other advantages, notably an improvement in gauge coupling unification and a natural dark matter candidate, which have made it the standard paradigm for physics beyond the Standard Model (SM). However, the simplest scenario, the minimal supersymmetric standard model (MSSM), suffers from a number of well-known tuning problems, due in part to the large array of possible parameters responsible for soft SUSY breaking [1], and consequently the possibility of catastrophically large flavor- and CP-violating amplitudes. The absence of new flavor structures and order-one sources of CP violation in the soft-breaking sector, as evidenced respectively by the perfect accord of the observed K and B meson mixing and decay with the predictions of the SM [2] and the null results of electric dipole moment (EDM) searches [3-5], motivates continuing work on the specifics of SUSY breaking.
In the present Letter we will instead ask: given a solution to the flavor and CP problems in the soft-breaking sector, what sensitivity do we have to new high-scale sources of flavor and CP violation? Such effects would arise through SUSY-preserving higher-dimensional operators generated at a new threshold Λ ≫ M_W. Such thresholds are indeed expected in various completions of the MSSM, e.g. via mechanisms for SUSY breaking and mediation, the breaking of flavor symmetries, and moreover via the physics generating neutrino masses and mixings. Intermediate scales are also suggested by the axion solution to the strong CP problem, SUSY leptogenesis scenarios, and more entertainingly as a lowered GUT/string scale arising from large compactification radii of extra dimensions. In contrast to nonuniversal or complex soft-breaking terms, the flavor- and CP-violating observables induced by such operators will scale as (Λ m_susy)^-1, and thus the constraints on nonminimal flavor or CP translate directly into sensitivity to Λ far above the scale of the superpartner masses, m_susy.
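The practical force of this linear decoupling is easily quantified (an illustrative estimate of ours, not a result of this Letter): for a dimension-five-induced observable O_5 ∝ (Λ m_susy)^-1 and a dimension-six one O_6 ∝ Λ'^-2, improving an experimental bound by a factor N extends the reach in Λ by the full factor N, but the reach in Λ' only by N^{1/2}; e.g.
\[ N = 10^4: \qquad \Lambda \to 10^4 \, \Lambda, \qquad \Lambda' \to 10^2 \, \Lambda'. \]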
At dimension five there are several well-known R-parity conserving operators associated with neutrino masses, H_u L H_u L, and baryon number violation, UUDE, QQQL [7]. The constraints on proton decay put severe restrictions on the size of baryon-number violating operators, Λ_b > 10^24 GeV, where 1/Λ_b is the overall normalization scale for these operators. The "super-seesaw" operator H_u L H_u L is a welcome addition to the MSSM superpotential, as it generates Majorana masses and mixing for neutrinos, which imply Λ_ν ∼ (10^14 - 10^16) GeV. Note that in the seesaw scenario, the actual scale of right-handed neutrinos, M_R, is lower than Λ_ν, since Λ_ν^-1 scales as the square of the neutrino Yukawa coupling over M_R. In what follows, we analyze in detail the remaining operators allowed in the R-parity conserving MSSM at the dimension-five level [7]. We write the superpotential as
W^(5) = (y_h/Λ_h) (H_u H_d)(H_u H_d) + (Y_qe/Λ_qe) QULE + (1/Λ_qq) [Y_qq (QU)(QD) + Ỹ_qq (QD)(QU)],  (1)
where y_h, Y_qe, Y_qq and Ỹ_qq are dimensionless coefficients, the latter three being tensors in flavor space. The parentheses in (1) denote a contraction of colour indices. Note that since we will only consider supersymmetric thresholds, the superfield equations of motion can be used to eliminate all dimension-five corrections to the Kähler potential, e.g. K^(5) = c_u QU H_d^†, absorbing them in W^(5) and the Yukawa terms, and slightly modifying the soft-breaking sector. A renormalizable realization of (1) can easily be obtained, e.g. the MSSM extended by a singlet N (the NMSSM) or an extra pair of heavy Higgses.
The full Lagrangian descending from (1) is rather cumbersome, and we will focus our attention here on those dimension-five operators which are of potential phenomenological interest, specifically those that involve two SM fermions and two sfermions. We then proceed to integrate out the sfermions to obtain operators composed from the SM fields (or more precisely those of a type II two-Higgs doublet model). We will impose the requirements of flavor triviality and CP conservation in the soft-breaking sector. Thus all dimension ≤ 4 coefficients in the Higgs potential, trilinear terms A_i, gaugino masses M_i, and the µ-parameter, will be taken real. We will also make the simplifying assumption of universal sfermion masses, denoted m_sq, m_sl, which we will take, along with µ, M_i, to be somewhat larger than M_W. Deferring the full details [8], we quote the relevant results below.
Correction to the SM fermion masses: The SM operators of lowest dimension that are of phenomenological interest are the fermion mass operators. From the diagrams of Fig. 1a, we obtain the corrections (2) to the down-type mass matrix M_d, with a similar correction to M_u. The notation implies summation over the repeated flavor indices; we have defined the combination appearing there, while M^(0) denotes the unperturbed mass matrices arising from dimension-four terms in the superpotential. Note that the corrections proportional to A_u directly break SUSY, while those proportional to µ arise from corrections to the Kähler potential.
Dipole operators: At dimension five, dipole operators first arise at two-loop order, as in Fig. 1b. In the charged lepton sector they result in the dipole amplitudes (3), where we treated LR squark mixing as a mass insertion, and used P_L = (1 - γ_5)/2 and (Fσ) = F_µν σ^µν. In the quark sector the corresponding results are more cumbersome, due to a large number of possible diagrams.
Jumping an additional dimension, we now consider the dimension-six four-fermion operators generated by various terms in (1). Two representative diagrams are shown in Figs. 1c and 1d.
Semileptonic operators: Integrating out gauginos and sfermions as in Fig. 1c, we find the semileptonic operators (4), sourced by QULE. Here m_susy^-1 denotes a combination of superpartner masses folded with a loop function F, normalized so that F(1) = 1 (see [10] for the unequal-mass case). In (4) we have retained only the gluino-squark contribution, which is expected to dominate unless there are additional hierarchies between the masses of sleptons and squarks.
Four-quark operators: Integrating out gluinos and squarks as in Fig. 1c, we arrive at the four-quark effective operators (5), where the summation over flavor is carried out exactly as in (1). The largest down-type ∆F = 2 operator, (6), arises instead from Fig. 1d, and inevitably contains additional Yukawa suppression originating from the Higgsino-fermion-sfermion vertices.
Here m_susy is a combination of SUSY masses as in (4) and (5), with M_3 replaced by µ.
We will now turn to the phenomenological consequences and the sensitivity to Λ_qe and Λ_qq in various experimental channels. Of course, one of the most important issues is the flavour structure of the new coupling constants, Y_qe, Y_qq and Ỹ_qq. We will assume that these coefficients are of order one and do not factorize. With this assumption, we should first determine the natural scale for Λ such that the corrections to SM fermion masses do not exceed their measured values.
Particle masses and θ-term: Using the mass corrections (2), and assuming a maximal Y^qe_3311 ∼ O(1), we arrive at the estimate (7). Eq. (7) clearly implies that the natural scale for new physics encoded in the semileptonic operators in the superpotential is Λ_qe ∼ 10^7 GeV, while the corresponding scale in the quark sector is slightly lower. A strikingly high naturalness scale emerges from consideration of the effective shift of θ due to the mass corrections (2). Assuming uncorrelated phases between Y_qq and the eigenvalues of Y_u and Y_d, we find the estimate (8). Eq. (8) translates directly into an extremely strong bound on Λ_qq in scenarios where θ ≃ 0 is engineered by hand, either by using discrete symmetries at high energies [11] or by imposing an approximate global U(1) symmetry at tree level to ensure m_u^(0) = 0. In these cases, the experimental bound on the neutron EDM, |d_n| < 6 × 10^-26 e cm [5] (soon to be updated [6]), combined with standard estimates for d_n(θ) [12], implies remarkable sensitivity to scales Λ_qq ∼ 10^17 GeV. Future progress in EDM searches (both for neutrons and heavy atoms) can bring this up to the Planck scale and beyond. In contrast, no constraints from (8) ensue within the axion scenario.
Electric dipole moments from four-fermion operators: Electric dipole moments (EDMs) of neutrons and of heavy atoms and molecules are the primary probes for sources of flavor-neutral CP violation [12]. In addition to d_n, the strongest constraints on CP-violating parameters arise from the atomic EDMs of thallium, |d_Tl| < 9 × 10^-25 e cm [3], and mercury, |d_Hg| < 2 × 10^-28 e cm [4].
Assuming that θ is removed by an appropriate symmetry, EDMs are mediated by higher-dimensional operators, and both (4) and (5) are capable of inducing atomic/nuclear EDMs if the overall coefficients contain an extra phase relative to the quark masses. Restricting Eq. (4) to the first generation, we find the CP-odd operators (9) (with real m_e, m_u). Accounting for QCD running from the SUSY scale to 1 GeV, and using the hadronic matrix elements over nucleon states, <N|(ūu + d̄d)/2|N> ≃ 4 N̄N and <n|ū iγ_5 u|n> ≃ -0.4 (m_N/m_u) n̄ iγ_5 n, we determine the induced corrections (10) to the CP-odd electron-nucleon Lagrangian, L = C_S N̄N ē iγ_5 e + C_P N̄ iγ_5 N ē e, using maximal Im Y_qe and taking m_susy = 300 GeV.
Comparing (10) to the limits on C_S and C_P deduced from the Tl and Hg EDM bounds [12], we obtain the following sensitivity:
Λ_qe ≳ 3 × 10^8 GeV from the Tl EDM, (11)
Λ_qe ≳ 1.5 × 10^8 GeV from the Hg EDM, (12)
Λ_qq ≳ 3 × 10^7 GeV from the Hg EDM. (13)
The last relation results from the sensitivity to the CP-violating operators (d̄ iγ_5 d)(ūu) from (5), leading to the Schiff nuclear moment and the Hg EDM. These are remarkably large scales, and indeed not far below the scales suggested by neutrino physics. In fact, the next generation of atomic/molecular EDM experiments [13] may reach sensitivities sufficient to push Λ_qe into regions close to the suggested scale of right-handed neutrinos. Semileptonic operators involving heavy-quark superfields are in turn strongly constrained via the two-loop corrections (3) to the dipole amplitudes. The bound on d_Tl implies |d_e| < 1.6 × 10^-27 e cm, which for maximal Im Y^qe_1133 implies the sensitivity (14). Results analogous to (3) apply for the quark EDMs and color EDMs, furnishing a similar sensitivity to Λ_qq.
Lepton flavour violation: Searches for lepton-flavour violation (LFV), such as µ → eγ decay and µ → e conversion in nuclei, have resulted in stringent upper bounds on the corresponding branching ratio, Br(µ → eγ) < 1.2 × 10^-11 [14], and on the rate of conversion normalized to the capture rate, R(µ → e on Ti) < 4.3 × 10^-12 [15], with further improvement anticipated. The latter bound implies a particularly high sensitivity to the semileptonic operators in (1). The conversion is mediated by (ūu) ē iγ_5 µ and (ūu) ē µ, and involves the same matrix elements as C_S. Using bounds on such scalar operators derived elsewhere (see e.g. [16]), we conclude that µ → e conversion probes energy scales as high as those in (15). The constraint on µ → eγ probes similar, but slightly lower, scales, as it requires a two-loop diagram as in Fig. 1b. Disregarding an O(1) factor between (11) and (15), we conclude that searches for EDMs and LFV probe these extensions of the MSSM up to comparable energy scales of ∼ 10^8 GeV.
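For orientation, the electron EDM limit quoted above can be checked against the thallium bound using the standard atomic enhancement factor d_Tl ≈ -585 d_e (a commonly used value; this back-of-envelope step is ours):
\[ |d_e| \lesssim \frac{|d_{Tl}|}{585} \approx \frac{9 \times 10^{-25}\; e\,\mathrm{cm}}{585} \approx 1.5 \times 10^{-27}\; e\,\mathrm{cm}, \]
consistent with the |d_e| < 1.6 × 10^-27 e cm used in deriving (14).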
Hadronic flavor constraints: Often, the most constraining piece of experimental information comes from the contribution of new physics to the mixing of the neutral mesons, K and B. However, in the present case, there is necessarily a significant loop and Yukawa suppression arising from (6), and the sensitivity is correspondingly weakened. Taking (∆m_K)_exp ≃ 3.5 × 10^-6 eV [17], we find Λ_qq ≳ (tan β/50) × 200 GeV [8]. ∆m_B exhibits a similar sensitivity, while ε_K is about three orders of magnitude more sensitive, but still well below the scales probed by EDMs and LFV. In contrast, it is clear that these observables provide much better sensitivity to SUSY dimension-six operators, which impose no additional suppression factors. Denoting the corresponding scale as Λ', we find Λ' ≳ 8 × 10^6 GeV, while ε_K is sensitive to scales ∼ 10^8 GeV. Two-loop contributions to b → sγ (as in Fig. 1b) are not Yukawa suppressed and, with the current precision ∆Br(B → X_s γ) ∼ 10^-4 [17], are somewhat more sensitive. We find Λ_qq ≳ 10^3 - 10^4 GeV (for Y^qq_3233 ∼ 1), still well below the sensitivity in other channels.
Constraints on the Higgs operator: The high sensitivity to QULE and QUQD arises primarily because they can flip the light fermion chirality without Yukawa suppression. It would then come as no surprise if H_u H_d H_u H_d were to have little implication for CP- and flavor-violating observables; the operator will of course provide corrections to the sfermion and neutralino mass matrices, and can induce CP-odd mixing between A and h, H, but these effects do not lead to high sensitivity to Λ_h.
Remarkably enough, it turns out that EDMs do exhibit a high sensitivity to H_u H_d H_u H_d at large tan β, through corrections to the Higgs potential and, in particular, the effective shift (16) of the m^2_12 parameter. Expanding to leading order in 1/Λ_h, using (16), and imposing the present limit on d_e discussed earlier, one finds the impressive sensitivity (17) for large tan β. In conclusion, we have examined new flavor- and CP-violating effects mediated by dimension-five superpotential operators, and shown that the sensitivity to these operators extends far beyond the weak scale (as summarized in Table 1). The semileptonic operators that mediate flavor violation in the leptonic sector and/or break CP could be detectable even if the scale of new physics is as high as 10^9 GeV, well above the naturalness scale. Our results can be translated into constraints on CP and flavor violation in specific models leading to (1), e.g. the NMSSM or the MSSM with an extra pair of Higgses. Moreover, the sensitivity quoted in (11) and (15) is robust, having only a mild dependence on the SUSY threshold. Finally, since these effects decouple linearly, an increase in sensitivity by just two orders of magnitude would already start probing scales relevant for neutrino physics. Our results motivate further searches for EDMs and LFV in the SUSY framework even if the soft-breaking sector provides no new sources, as happens e.g. in models with low-scale SUSY breaking.
Serotonin Receptor 2C and Insulin Secretion
Type 2 diabetes mellitus (T2DM) describes a group of metabolic disorders characterized by defects in insulin secretion and insulin sensitivity. Insulin secretion from pancreatic β-cells is an important factor in the etiology of T2DM, though the complex regulation and mechanisms of insulin secretion from β-cells remains to be fully elucidated. High plasma levels of serotonin (5-hydroxytryptamine; 5-HT) have been reported in T2DM patients, though the potential effect on insulin secretion is unclear. However, it is known that the 5-HT receptor 2C (5-HT2CR) agonist, mCPP, decreases plasma insulin concentration in mice. As such, we aimed to investigate the expression of the 5-HT2CR in pancreatic islets of diabetic mice and the role of 5-HT2CR signaling in insulin secretion from pancreatic β-cells. We found that 5-HT2CR expression was significantly increased in pancreatic islets of db/db mice. Furthermore, treatment with a 5-HT2CR antagonist (SB242084) increased insulin secretion from pancreatic islets isolated from db/db mice in a dose-dependent manner, but had no effect in islets from control mice. The effect of a 5-HT2CR agonist (mCPP) and antagonist (SB242084) were further studied in isolated pancreatic islets from mice and Min-6 cells. We found that mCPP significantly inhibited insulin secretion in Min-6 cells and isolated islets in a dose-dependent manner, which could be reversed by SB242084 or RNA interference against 5-HT2CR. We also treated Min-6 cells with palmitic acid for 24 h, and found that the expression of 5-HT2CR increased in a dose-dependent manner; furthermore, the inhibition of insulin secretion in Min-6 cells induced by palmitic acid could be reversed by SB242084 or RNA interference against 5-HT2CR. Taken together, our data suggests that increased expression of 5-HT2CR in pancreatic β-cells might inhibit insulin secretion. This unique observation increases our understanding of T2DM and suggests new avenues for potential treatment.
Introduction
Type 2 diabetes mellitus (T2DM) is a chronic metabolic syndrome caused by insulin deficiency [1]. T2DM patients usually display loss of insulin sensitivity in white adipose tissue, skeletal muscle and liver, accompanied by disorders of insulin secretion [2]. Insulin is secreted from pancreatic β-cells under the control of the blood glucose level; a low blood glucose level induces basal insulin secretion, whereas high blood glucose levels, as encountered postprandially, will increase insulin secretion five to ten times [3]. In addition to blood glucose, lipids, cytokines and hormones can all affect insulin secretion; as such, the mechanisms underlying the dysfunction of pancreatic β-cells in T2DM patients are complex and far from being fully understood [2].
As far as we know, the pancreatic islet is highly innervated by parasympathetic and sympathetic neurons [4]. Recent studies have demonstrated that the nervous system can regulate insulin secretion from pancreatic β-cells through neurotransmitters and their respective receptors expressed in β-cells [5]. For example, the cholinergic nerves [6] and the sympathoadrenal axis [7] can modulate insulin secretion from pancreatic β-cells, demonstrating that the nervous system can regulate insulin secretion directly, in addition to its indirect effects on metabolism through the regulation of food intake, body temperature, sleep and activity [8]. There is evidence that the pancreas receives serotonergic nervous inputs from the vagus and the enteric nervous system [9]. The serotonin (5-hydroxytryptamine; 5-HT) secreted from these intrapancreatic nerves may act as a stimulator or inhibitor of pancreatic exocrine secretion, depending on the expression of different receptor subtypes [9], but whether the 5-HT system could play a role in pancreatic endocrine function is largely unknown [10,11]. Furthermore, it has been suggested that pancreatic β-cells can secrete 5-HT themselves, which could represent a form of autocrine regulation [12].
5-HT is a biogenic amine that is synthesized in the enteric nervous system and the central nervous system [13]; it has a wide variety of physiological functions, including regulating body temperature, cardiovascular function, mood, bodyweight, and cognitive functions [14]. Recent studies have demonstrated that the 5-HT system could be involved in glucose and lipid metabolism [15,16], as well as adipocyte differentiation [17]. The 5-HT receptors, excluding 5-HT receptor 3, are all members of the GPCR superfamily of signal-transducing receptors [18]. The 5-HT receptor 2C (5-HT2CR) is thought to be the most critical 5-HT receptor for regulating energy homeostasis [19,20,21]. The predominant function of signaling through the 5-HT2CR is thought to be driving the anorexic effect in the hypothalamus [22]; however, there are also reports that 5-HT is increased in the plasma and brains of diabetic patients and hyperphagic people [23,24]. Moreover, chronic hyperphagia of Ay mice increases the expression of 5-HT2CR in the hypothalamus, demonstrating that the function of the 5-HT system could be enhanced in diabetic and obese individuals [25]. Interestingly, Zhou et al. found that after 14 days of treatment with mCPP (a 5-HT2CR agonist), mice with diet-induced obesity exhibited reduced circulating insulin concentrations, while blood glucose, body weight, and feeding remained unchanged [26]. This data suggests a direct effect of mCPP in decreasing insulin secretion through activating the 5-HT2CR.
It is known that T2DM patients usually display a delayed postprandial rise in the plasma concentration of insulin [27]; this is when the vagus and enteric neurons are highly activated, resulting in increased circulating levels of 5-HT [28]. As such, our first aim was to study the expression of 5-HT2CR in pancreatic islets from diabetic mice, and then to study its effect on insulin secretion. Given that patients with T2DM usually have hyperlipidemia, which can cause insulin resistance and decreased insulin secretion [29], our second aim was to investigate the effect of palmitic acid on the expression of 5-HT2CR in pancreatic β-cells.
Cell Culture
Min-6 cells (passage 20 to 30) were grown in DMEM medium containing 15% FBS, 25 mmol/l glucose, 50 µmol/l 2-mercaptoethanol, 100 U/ml penicillin and 100 µg/ml streptomycin. The cells were cultured at 37 °C in a humidified atmosphere containing 95% air and 5% CO2. For all compounds prepared in DMSO and ethanol, the final concentration in the culture medium was kept at less than 0.2%.
Islet Purification and Culturing
All animal studies were performed according to guidelines established by the Research Animal Care Committee of Nanjing Medical University. Male db/db mice (17 weeks of age; C57BL/KsJ; n = 6 per group) and control C57BL/KsJ mice were purchased from the Shanghai Institute of Materia Medica, Chinese Academy of Sciences. Male ICR mice (23 to 25 g body weight) were purchased from Nanjing Medical University Laboratory Animal Centre, Nanjing, China. Islet isolation and culturing techniques have been described previously [30]. Freshly isolated islets were transferred to sterile six-well dishes and cultured in RPMI-1640 medium containing 11.1 mmol/l glucose supplemented with 10% FBS, 10 mmol/l HEPES, 100 U/ml penicillin and 100 µg/ml streptomycin. The islets were allowed to equilibrate for 3 h, after which they were counted and re-picked into six-well plates (400 islets per well for RNA or protein extraction) or 48-well plates (8 islets per well for glucose-stimulated insulin secretion [GSIS]) and cultured overnight at 37 °C for further analysis.
Real-Time Reverse Transcription-Polymerase Chain Reaction
Total RNA of primary mouse islets and Min-6 cells was extracted with TRIzol reagent (Invitrogen, Life Technologies Co.), according to the manufacturer's protocol. After quantification by spectrophotometry, 1 µg of total RNA was used for reverse transcription in a final volume of 20 µl with AMV Reverse Transcriptase (Promega, Madison, WI, USA), according to the manufacturer's instructions. Aliquots of cDNA corresponding to equal amounts of RNA were used for the quantification of mRNA by real-time PCR using the ABI Prism 7000 Sequence Detection System (Applied Biosystems, Life Technologies Co.). The reaction mixture contained the corresponding cDNA, forward and reverse primers, and SYBR Green PCR Master Mix (Applied Biosystems, Life Technologies Co.). Relative expression of the different gene transcripts was calculated by the 2^-ΔΔCt method. The specific primers were as follows
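The relative-expression step can be sketched as follows (our own illustration; the Ct values are hypothetical, not data from this study):

```python
# Relative expression by the 2^(-ΔΔCt) method.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene versus a reference gene,
    normalized to a calibrator (e.g., control) sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)

# Example: 5-HT2CR vs. a housekeeping gene in db/db islets,
# relative to control islets (all Ct values made up):
fold = relative_expression(ct_target=26.0, ct_ref=18.0,
                           ct_target_cal=28.5, ct_ref_cal=18.2)
print(f"Fold change: {fold:.1f}")  # ~4.9-fold in this toy example
```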
GSIS Assay
Isolated mouse islets were seeded into 250 µl of RPMI-1640 medium with 11.1 mmol/l glucose in 48-well dishes at 8 islets/well; Min-6 cells were seeded into 250 µl RPMI-1640 medium with 11.1 mmol/l glucose at 1×10⁵ cells/well in 48-well dishes, then cultured and treated under several conditions. Following preincubation for 1 h in KRB buffer containing 3.3 mmol/l glucose, the islets were treated for 1 h in KRB buffer and drug solutions with low (3.3 mmol/l glucose) or stimulatory (16.7 mmol/l glucose) concentrations of glucose. The supernatants were then obtained from each reaction well and frozen at −70°C for subsequent determination of insulin concentration. The insulin levels were measured using a radioimmunoassay as described previously [31].
MTT Assay
Cell viability was determined using an MTT assay (Sigma-Aldrich Co.). Briefly, the cells were seeded in 96-well dishes at 1×10⁴ to 2×10⁴ cells per well, and treated with or without mCPP for 12 h. Each well was then supplemented with 10 µl MTT and incubated for 4 h at 37°C. The medium was then removed and 150 µl of DMSO (Sigma-Aldrich Co.) was added to solubilize the MTT formazan. For quantification, the optical density was read at 490 nm.
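For illustration, optical densities from such an assay are typically converted to percent viability relative to untreated control wells. The short Python sketch below shows this conversion with hypothetical OD values; the blank-correction step is an assumption of common practice, not a detail reported in this study.

```python
# Hedged sketch: convert MTT optical densities (OD at 490 nm) to percent
# viability of control; all OD values here are made-up illustration data.
import numpy as np

od_blank   = 0.05                          # medium-only well
od_control = np.array([0.82, 0.85, 0.80])  # untreated wells
od_treated = np.array([0.79, 0.83, 0.81])  # mCPP-treated wells

viability = (od_treated.mean() - od_blank) / (od_control.mean() - od_blank) * 100
print(f"viability = {viability:.1f}% of control")
```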
Statistical Analysis
Statistical analysis was performed with SPSS 11.0 software (SPSS Inc., Chicago, IL). Comparisons were performed using Student's t test between two groups, or ANOVA for multiple groups. Results are presented as means ± SEM. A P-value <0.05 was considered to indicate a statistically significant difference.
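As a minimal sketch of these comparisons, the following snippet applies a Student's t-test to two groups and a one-way ANOVA to three groups using SciPy; the well measurements are invented illustration data, not results from this study.

```python
# Two-group t-test and multi-group one-way ANOVA, as described above;
# the insulin-secretion values below are hypothetical.
from scipy import stats

ctrl   = [1.02, 0.95, 1.10, 0.98]   # e.g. vehicle-treated wells
dose_a = [0.80, 0.76, 0.85, 0.79]   # e.g. a low agonist dose
dose_b = [0.55, 0.60, 0.52, 0.58]   # e.g. a higher agonist dose

t, p_two_groups = stats.ttest_ind(ctrl, dose_a)         # two-group comparison
f, p_multi      = stats.f_oneway(ctrl, dose_a, dose_b)  # multi-group comparison

print(f"t-test P = {p_two_groups:.4f}; ANOVA P = {p_multi:.4f}")
# P < 0.05 is treated as statistically significant, as in the paper.
```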
Results

Expression of 5-HT2CR in Pancreatic Islets of db/db Mice
The db/db mouse line is a widely used animal model of T2DM, showing insulin resistance and impaired insulin secretion. In order to study the expression of 5-HT2CR in pancreatic islets of diabetic mice, we used 17-week-old male db/db mice. Our data indicate that db/db mice have a higher body weight (Fig. S1A) and a higher random blood glucose level (Fig. S1B) than control mice. We isolated pancreatic islets from db/db mice and control mice, and after overnight culture the total RNA and protein were extracted for analysis. Results from real-time quantitative PCR demonstrated that the mRNA level of 5-HT2CR was much higher in pancreatic islets isolated from db/db mice compared to control mice (Fig. 1A). This result was confirmed by Western blot, which showed that the protein level of 5-HT2CR was also higher in pancreatic islets of db/db mice compared to control mice (Fig. 1B). Next, we investigated whether the higher expression level of 5-HT2CR could affect insulin secretion from pancreatic islets. We used 1, 5 and 10 µmol/l of the 5-HT2CR antagonist SB242084 to treat isolated pancreatic islets of db/db mice and control mice for 12 h, and then performed a GSIS test. We found that SB242084 treatment could improve insulin secretion from pancreatic islets isolated from db/db mice in a dose-dependent manner (Fig. 2A), but SB242084 had no significant effect on pancreatic islets from control mice (Fig. 2B).
mCPP Decreased Insulin Secretion of Min-6 Cells and Mouse Pancreatic Islets, which could be Reversed by SB242084

One limitation to using db/db mice is that they are genetically modified animals, which might not fully represent normal physiology. In order to further study the effect of 5-HT2CR signaling on insulin secretion from pancreatic β-cells, we used 1, 5, 15, 25, 50 and 100 µmol/l doses of the 5-HT2CR agonist mCPP to treat pancreatic islets isolated from ICR mice, as well as the pancreatic β-cell line Min-6, for 12 h followed by a GSIS test. The results demonstrated that mCPP could decrease insulin secretion from both Min-6 cells (Fig. 3A) and isolated mouse pancreatic islets (Fig. 3B) in a dose-dependent manner. Five micromolar mCPP began to inhibit insulin secretion of pancreatic β-cells, and at 25 µmol/l mCPP was close to reaching its maximum effect. The inhibitory effect of mCPP on insulin secretion of pancreatic β-cells in our study is in accordance with published in vivo data [26]. Considering the potential effect of mCPP on cell viability, we used an MTT assay to study whether mCPP had an effect on the viability of Min-6 cells. After treatment with 1 to 100 µmol/l of mCPP for 12 h, Min-6 cells were assayed with MTT. The results showed that mCPP did not affect the viability of Min-6 cells at any concentration (Fig. S2), which suggests a direct effect of mCPP on insulin secretion. In order to confirm that the effect of mCPP on insulin secretion was through activation of 5-HT2CR, we co-administered 5, 15, or 25 µmol/l mCPP with 5 µmol/l SB242084 in Min-6 cells and isolated mouse pancreatic islets for 12 h before carrying out a GSIS test. The results showed that SB242084 could significantly reverse the inhibitory effect of mCPP on insulin secretion both in Min-6 cells (Fig. 4A) and in isolated mouse pancreatic islets (Fig. 4B).
RNA Interference of 5-HT2CR in Min-6 Cells Reversed the Effect of mCPP on Insulin Secretion
To further investigate the role of 5-HT2CR and the inhibitory effect of mCPP on insulin secretion from pancreatic β-cells, we used RNA interference against 5-HT2CR in Min-6 cells. Transfection of Min-6 cells with si5-HT2CR-1, si5-HT2CR-2, si5-HT2CR-3 and control siRNA was carried out separately.
After culturing for 48 h in the presence of the siRNA, we analyzed the expression of 5-HT2CR in each group. We found that si5-HT2CR-3 could significantly decrease the expression of 5-HT2CR at the mRNA (Fig. 5A) and protein level (Fig. 5B). As such, further experiments were carried out with this siRNA. After transfection of Min-6 cells with si5-HT2CR-3 and 36 h in culture, we added 5, 15, or 25 µmol/l mCPP to treat the cells for 12 h, and then performed a GSIS test. The results showed that RNA interference of 5-HT2CR could reverse the inhibitory effect of mCPP on insulin secretion in Min-6 cells (Fig. 6), which was in accordance with the effect of SB242084.
Palmitic Acid Increases the Expression of 5-HT2CR in Min-6 Cells
Patients with T2DM and obese people usually have hyperlipidemia, which can cause insulin resistance and impaired insulin secretion [32]. There is evidence that excess free lipids in the plasma could generate neural toxicity, inducing altered neurotransmitter levels and/or abnormal function of their receptors, resulting in neural dysfunction, such as Alzheimer disease [33]. Furthermore, excess free lipids in plasma have also been shown to stimulate thrombocytes to secrete more 5-HT, leading to a high plasma concentration of 5-HT [34]. Considering the possible effect of elevated free lipids on the 5-HT system of pancreatic islets, we investigated the effect of palmitic acid on the expression of 5-HT2CR in pancreatic β-cells. We used 0.1, 0.2 and 0.3 mM concentrations of palmitic acid to treat Min-6 cells for 24 h, and then analyzed the effect on the expression of 5-HT2CR. We found that both the mRNA level (Fig. 7A) and the protein level (Fig. 7B) of 5-HT2CR were increased.
SB242084 and RNA Interference of 5-HT2CR Improved Insulin Secretion of Min-6 Cells Treated with Palmitic Acid
Next we investigated whether the increased expression of 5-HT2CR stimulated by palmitic acid in Min-6 cells was related to the deleterious effects of palmitic acid on insulin secretion. We used 0.1, 0.2 and 0.3 mM concentrations of palmitic acid to treat Min-6 cells for 12 h, and then added 5 µmol/l SB242084 for a further 12 h before performing a GSIS test. Our results show that palmitic acid had a deleterious effect on insulin secretion from Min-6 cells, with an approximately 65% decrease in insulin secretion at a concentration of 0.3 mM palmitic acid; however, the addition of 5 µmol/l SB242084 led to a higher level of insulin secretion than in the control groups (Fig. 8A), demonstrating an improvement in the function of Min-6 cells even in such cytotoxic circumstances. RNA interference of 5-HT2CR expression in Min-6 cells was next used to study their ability to secrete insulin after treatment with palmitic acid. We transfected Min-6 cells with si5-HT2CR-3 and cultured them for 24 h. Next we added 0.1, 0.2, or 0.3 mM palmitic acid for a further 24 h, and then performed a GSIS test. We found that RNA interference of 5-HT2CR could improve insulin secretion from Min-6 cells after treatment with palmitic acid (Fig. 8B), in accordance with the results from our experiments with SB242084.
Expression of TPH in Pancreatic Islets of db/db Mice and Min-6 Cells Treated with Palmitic Acid
5-HT is synthesized in two steps from the essential amino acid tryptophan, which is acquired in the diet. Tryptophan is first hydroxylated at the 5 position of the indole ring by tryptophan hydroxylase (TPH), yielding 5-hydroxytryptophan; this product is then decarboxylated by aromatic L-amino acid decarboxylase, yielding 5-HT. Tryptophan hydroxylase is the rate-limiting enzyme in 5-HT synthesis [35]. There are two isoforms of tryptophan hydroxylase in pancreatic β-cells: TPH1 and TPH2 [36]. It has been reported that the expression of both TPH isoforms increases in β-cells in some circumstances, such as pregnancy, leading to a high content of 5-HT [36]. We therefore investigated the expression of TPH1 and TPH2 in pancreatic islets of db/db mice and in Min-6 cells treated with palmitic acid. We found that in islets of db/db mice, the mRNA level of TPH1 increased significantly (Fig. 9G), but the change at the protein level did not reach statistical significance (Fig. 9A). Both the mRNA and protein levels of TPH2 were significantly increased compared with control mice (Fig. 9H; Fig. 9A). We then used 0.1, 0.2 and 0.3 mM concentrations of palmitic acid to treat Min-6 cells for 24 h and analyzed the effect on the expression of TPH1 and TPH2. Interestingly, the expression of TPH1 (Fig. 9B; Fig. 9I) and TPH2 (Fig. 9B; Fig. 9J) did not change in Min-6 cells after treatment with palmitic acid.
Discussion
T2DM is a metabolic disease characterized by elevated blood glucose levels; both environmental and genetic factors can lead to the development of diabetes [37]. Owing to changes in diet and lifestyle, the global incidence of T2DM has risen sharply [38]. In addition to the insulin resistance and impaired function of pancreatic β-cells seen in patients with T2DM [39,40], this study focused on the effect of 5-HT2CR on insulin secretion, given reports suggesting that an abnormal 5-HT system could also affect the regulation of energy metabolism [41].
In our study, we found that the expression of 5-HT2CR was much higher in pancreatic islets of db/db mice than in control mice, in accordance with the higher 5-HT2CR level reported in the hypothalamus of obese Ay mice [42]. We used the 5-HT2CR antagonist SB242084 to study whether the inhibition of 5-HT2CR could affect insulin secretion from pancreatic islets, finding that after treatment with SB242084, pancreatic islets isolated from db/db mice had improved insulin secretion in an SB242084 dose-dependent manner. Interestingly, pancreatic islets from control mice were also weakly affected by SB242084, though without statistical significance. In order to further study the effect of 5-HT2CR on the function of pancreatic β-cells, we used the 5-HT2CR agonist mCPP to treat Min-6 cells and isolated pancreatic islets for 12 h, finding that insulin secretion from both Min-6 cells and isolated pancreatic islets was significantly decreased in an mCPP dose-dependent manner, with no effect on cell viability or insulin content (Fig. S3). This inhibitory effect of mCPP on insulin secretion from pancreatic β-cells could be reversed by treatment with SB242084 or by RNA interference of 5-HT2CR, which demonstrates that 5-HT2CR plays an inhibitory role in insulin secretion from pancreatic β-cells. We also took other 5-HT receptors into consideration: in our preliminary experiments, we examined the expression of 5-HT receptors 2A, 2B, 2C, 1A and 1B in mouse islets and Min-6 cells with RT-PCR, but we could not detect any 5-HT receptor other than 5-HT2CR (Fig. S4). We also examined whether 1 µmol/l SB242084 could reverse the inhibitory effect of 5 to 25 µmol/l mCPP on insulin secretion in Min-6 cells, but found that 1 µmol/l SB242084 did not work well when the concentration of mCPP was 25 µmol/l (Fig. S5), suggesting an insufficient dose of SB242084. It has been reported that activation of 5-HT2CR can inhibit the firing rate of dopaminergic neurons and reduce dopamine release [43,44,45]. The firing rate is also critical for pancreatic β-cells to secrete insulin [46]. The 5-HT2CR may decrease insulin secretion by inhibiting the firing rate of β-cells, and could also affect β-cell membrane capacitance, voltage-gated calcium currents, docked granule pools, the SNARE complex, etc. All of these potential mechanisms, and the question of which secretion phase is affected, require further research.
Considering the negative effect of excess free lipids on the function of pancreatic β-cells, we investigated whether 5-HT2CR facilitated the deleterious effect of palmitic acid on Min-6 cells. We found that after treatment with palmitic acid for 24 h, Min-6 cells expressed higher levels of 5-HT2CR in a dose-dependent manner, suggesting a possible role for 5-HT2CR in the deleterious effect of palmitic acid on insulin secretion from Min-6 cells. Subsequent experiments demonstrated that the increased expression of 5-HT2CR partly mediated the inhibitory effects of palmitic acid on insulin secretion from Min-6 cells, which could be reversed by SB242084 or by RNA interference of 5-HT2CR.
Finally, we investigated the expression of both TPH isoforms in islets of db/db mice and in Min-6 cells treated with palmitic acid. The higher expression of TPH2 in islets of db/db mice may reflect a higher content of 5-HT. Interestingly, we did not observe any change in the expression of TPH1 or TPH2 in Min-6 cells after treatment with different concentrations of palmitic acid. However, it has been reported that unsaturated fatty acids can induce 5-HT release from platelets [47] and that fish oil affects 5-HT turnover in the hypothalamus [48]; the effect of fatty acids on the 5-HT system may therefore rely largely on 5-HT release. Because serotonin biosynthesis and release were not measured in islets or Min-6 cells, it remains unclear whether 5-HT2CR mediates the effect of serotonin released by neuronal synapses and/or by β-cells in vivo, and the effect of high glucose combined with high fatty acids on TPH expression in pancreatic β-cells should be examined in future studies.
Our results might help to explain the delayed elevation of the postprandial plasma insulin level in T2DM patients. Although this phenomenon is closely related to incretins [49], we believe the action of the incretins does not entirely mediate this effect. In our working model, T2DM patients have high expression levels of 5-HT2CR in pancreatic β-cells; thus, after meal ingestion the intrapancreatic serotonergic nerves secrete 5-HT around pancreatic islets, which could activate the 5-HT2CR of pancreatic β-cells, resulting in decreased insulin secretion. Alternatively, such a phenomenon could be seen as the result of a protective physiological change in T2DM patients or obese people. That is to say, when the body has excess energy storage, it could up-regulate the expression of 5-HT2CR in the hypothalamus to drive the anorexic effect [50], and could also up-regulate the expression of 5-HT2CR in pancreatic β-cells, resulting in less insulin secretion after meals and hence less energy intake from food ingestion. Increased expression of 5-HT2CR in both the hypothalamus and β-cells could mediate this protective strategy to prevent excess energy intake. Moreover, in evolutionary terms, the presence of 5-HT synthesis in plants [51] as well as in all branches of metazoan life [52,53] demonstrates that the 5-HT system arose relatively early in the evolution of life. This indicates that the 5-HT system evolved before the plant-animal evolutionary divergence, estimated to have occurred 1.5 billion years ago. Functioning as a trophic factor in plants, 5-HT signaled in even the most primitive nervous systems to regulate primitive energy metabolism [52,53]. Considering that pancreatic islet cells and neurons share common functions and similar ontogenies [54], it is not surprising that the serotonergic nervous system might regulate pancreatic islet function, forming an intricate energy-metabolism regulatory system together with its effect on the hypothalamus. Our data strongly suggest that the 5-HT system is important for metabolic control, though much remains to be understood about its function in energy metabolism, including the specific roles of each of the 5-HT receptor subtypes and the nuances of the effector pathways. In summary, our data demonstrate that 5-HT2CR might play a role in the dysfunction of pancreatic β-cells in T2DM patients. This novel finding brings a new understanding of T2DM etiology and may provide new avenues to treat this disease.
Figure S1. Body weight and random blood glucose of db/db mice and control mice. A: Body weight of db/db mice was higher than in control mice. B: Random blood glucose of db/db mice was higher than in control mice (n = 6; **P<0.01).
The Effect of Informal Cooperative Activity through Online Learning on the Understanding of Physics Concept
Abstract Educators believe that online learning offers many opportunities to provide a conducive learning environment relevant to student characteristics. This study explores the impact of cooperative online learning on student learning outcomes. To reduce the influence of other variables, students' learning interest and numerical ability were used as covariates. The type of cooperative learning model used in this study was Think Pair Share (TPS), which was integrated into Schoology as a learning management system (LMS). This learning strategy was applied to eleventh-grade students for the elasticity topic. The research method was a pretest-posttest controlled group design. Test techniques (10 essay items) were used to measure learning achievement. Questionnaires with a Likert scale were used to measure learning interest using the ARCS (attention, relevance, confidence, and satisfaction) model (34 items). Numerical ability data were collected from secondary data (a standard psychological test). The statistical techniques used in this study were Pearson's product-moment correlation for item validity, Cronbach's alpha for item reliability, and ANCOVA to analyze the differences among group means. The results showed that the group of students taught by the proposed strategy had higher learning outcomes than the other group, and that learning interest and numerical ability can be used as predictors for improving student learning outcomes. This means that online cooperative learning using a specific LMS can be applied to promote an increase in students' learning performance.
Introduction
As technology develops, all aspects of social life come under digital influence, including the field of education. Mobile learning has become one of the trends in education, utilizing cellular devices as learning aids or media [1,2]. Using cellular devices enables learning to be more exciting and interactive than other approaches, and hence the term Learning Management System (LMS) has appeared.
The growing number of features available on cellular phones and smartphones encourages educators to think about their use in learning. Because of this, online education, e-learning, and m-learning have developed. A further advance is the growth of MOOCs (Massive Open Online Courses), which make knowledge accessible to many parties [3,4]. Along with this development, learning requires a learning management system that can meet the various needs of learning interactions. Teachers need an LMS that is simple yet sufficient for learning needs.
The learning process can utilize several types of LMS, such as Schoology, LearnBoost, Edmodo, and Moodle. Among these, Schoology is a website that combines a social network with an LMS; it is a social web platform that offers classroom-like learning for free [5,6]. Schoology has excellent, easy-to-use features and visualization. Besides being easy to use and openly accessible for students involved in the learning process, Schoology also has various features that make it easier for students to study, which can encourage interest and increase students' learning achievement.
Schoology is a website that combines e-learning and a social network. Its concept is similar to Moodle's, but e-learning with Schoology has several advantages over Moodle because it does not need hosting and its management is more user-friendly. Even though its features are not as complete as Moodle's, they are adequate for e-learning at school [7]. Advantages of Schoology include the attendance facility used to check students' attendance and the analytics facility for observing all student activity in each course, assignment, discussion, and other exercises prepared for students [8]. In addition, creating more effective learning requires easy and efficient learning strategies that can liven up the classroom and encourage students' learning interest. Informal cooperative learning is suitable for this, since it is a simple method with many benefits. Simple strategies matter in daily teaching practice because they help teachers develop more creative techniques and broaden the learning process [9][10][11][12].
One such strategy is Think Pair Share (TPS). With this strategy, students are expected to enjoy and engage in learning, since learning runs effectively when students find it fun, comfortable, and enjoyable [13,14]. Fun learning appears when learning is meaningful for students [15,16]. Students' learning achievement is therefore highly influenced by a well-coordinated lesson plan that brings together the fundamental components of learning [17,18]. However, the learning process becomes less useful if students are not involved: they tend to be passive and keep silent even when they do not understand what the teacher is explaining, becoming passive listeners who do not ask questions because they are not interested in learning. The expectation, instead, is active student participation, so that students take part in learning and achieve a good learning process and achievement.
Research Design
This research is quasi-experimental with a pretest-posttest control group design. Its objective is to find out the influence of informal cooperative learning using Schoology as a learning management system on learning achievement, viewed from students' learning interest and numerical ability. The learning activity was applied in a public high school in Bima, West Nusa Tenggara. Through this research, we investigated whether the implementation of the Think Pair Share method using Schoology can improve students' learning interest and achievement in physics.
Population and Sample
The population comprised five classes of eleventh-grade students. We randomly selected two groups and assigned them as the control and experimental groups. Each group consisted of 38 students. One teacher conducted informal cooperative learning using the LMS in the experimental group (CL1). Another teacher implemented the usual teaching method in the control group (CL2). Both groups used the same learning material mandated by the Ministry of Education of the Republic of Indonesia, called BSE (Buku Sekolah Elektronik, an electronic schoolbook). See Fig. 1 for the textbook of the elasticity topic.
Variables
The independent variable is the physics learning strategy (IVAR). As mentioned before, the experimental group (CL1) was taught using the informal cooperative learning method supported by Schoology; the procedure of this strategy is described in the next section. The dependent variable is the students' learning achievement (DVAR), measured after all students had finished the learning activity of five sessions (90 minutes per session, twice per week).
As discussed in the introduction, students' learning interest (COV1) and numerical ability (COV2) are factors influencing learning achievement. We chose these variables as covariates to control for their effect on the learning achievement measurement.
Translation (Figure 2): In the laboratory practicum for finding the spring constant, we obtained the following data [table not reproduced]. If F is the force and ∆x is the extension of the spring, calculate the spring constant.
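As a worked illustration of this practicum problem, the following Python sketch estimates the spring constant k by a least-squares fit of Hooke's law F = k·∆x; since the original data table was not reproduced, the (∆x, F) pairs below are hypothetical.

```python
# Estimating the spring constant from (extension, force) pairs via
# Hooke's law F = k * dx; the data values are hypothetical.
import numpy as np

dx = np.array([0.02, 0.04, 0.06, 0.08])  # extension in metres
F  = np.array([4.0, 8.1, 11.9, 16.0])    # applied force in newtons

# Least-squares slope through the origin: k = sum(F*dx) / sum(dx^2)
k = np.dot(F, dx) / np.dot(dx, dx)
print(f"k = {k:.1f} N/m")                 # about 200 N/m for these numbers
```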
Instruments and Statistical Technique
Prior knowledge (pretest) and learning achievement (posttest) were measured using essay problems arranged to target the higher-order thinking skills of Bloom's taxonomy. There were ten items, which were analyzed for item validity, difficulty index, and discrimination index.
We used a weighted score to assess prior knowledge and learning achievement. The criteria were identification of the problem, interpretation of data or information, determination of a strategy, application of the strategy, and reflection or justification. The score for each criterion ranges from 1 (below expectation) to 4 (exceeds expectation); an example problem is shown in Figure 2. One simple instrument to measure learning interest is the ARCS model, and we used this standard instrument to measure COV1. The ARCS model consists of attention, relevance, confidence, and satisfaction factors representing student interest. The questionnaire contains 34 statement items on a Likert scale from 1 (SD: strongly disagree) to 5 (SA: strongly agree). Table 1 shows the structure of the questionnaire; for example, the satisfaction construct comprises items 7, 12, 14, 16, 18, 19, 32, 33, and 34. We took secondary data for the students' numerical ability: every fresh student is tested for intelligence quotient (IQ), including numerical ability, and we used these data to represent COV2 in this study.
The teacher gave a test at the end of the learning activity to measure the students' learning achievement (DVAR). This posttest assessed learning achievement after all materials had been completed.
The primary analysis technique was ANCOVA using SPSS. The independent variable (IVAR) consisted of CL1 and CL2, the dependent variable was DVAR, and the covariates were COV1 and COV2. The error margin was 0.05. We used a t-test to compare the pretest means to ensure that both groups were equivalent, and tested normality and homogeneity as prerequisites before applying ANCOVA. The linear regression technique was used to find the covariates' contribution to the dependent variable.
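The following Python sketch illustrates the same ANCOVA design with statsmodels rather than SPSS; the file name and column names are hypothetical placeholders for the study data.

```python
# Illustrative ANCOVA: posttest score (DVAR) explained by group (IVAR)
# while adjusting for learning interest (COV1) and numerical ability (COV2).
# "scores.csv" and its column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # columns: group (CL1/CL2), posttest, interest, numeric

model = smf.ols("posttest ~ C(group) + interest + numeric", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)  # test the group effect at the 0.05 error margin
```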
Learning Mechanism
The teacher used a cooperative learning strategy with the Think Pair Share (TPS) method, which helps students think individually about a topic or an answer to a question. In the first stage (Think), the teacher provides material in the LMS for students to read. The teacher needs to create an engaging learning atmosphere by greeting students online, and then directs them to study the material provided in the LMS.
In the second stage (Pair), the teacher encourages discussion. At this stage, the teacher must ensure that all students have studied the material; problems can also be given to be solved in groups.
In the third stage (Share), each group can convey the results of their work to students throughout the class and discuss the results with each other. Thus, this discussion teaches students to share ideas with classmates and build oral communication skills. This strategy also helps to focus attention and involve students in understanding reading material.
The learning objects are in video format, textbook form, and links to OER (open educational resources). The teacher provides various formats so that students have a variety of alternative learning resources. Students can also learn more from the example problems presented in the LMS.
These learning stages are carried out for the other topics until all are finished. At the end of learning, the teacher gives essay problems to be used as a measure of student learning outcomes.
In this learning activity, it is essential for the teacher to maintain intensive communication with students, providing feedback on each response made by students.
Learning Activity
Schoology as the learning management system can be accessed via the website https://app.Schoology.com/home or downloaded from the Play Store. As mentioned in the method section, the learning stages followed the TPS type of informal cooperative learning. Before students individually carry out the Think activity, the teacher ensures that students are present online, gives a greeting, and encourages fluid communication with students. Figure 3 shows events at the beginning of learning.
Think
In the next activity, the teacher provided teaching material uploaded to the LMS. In this study, the teaching material was taken from an electronic textbook, a standard book from the Indonesian Ministry of Education; its cover screenshot is shown in Figure 1. Apart from the textbook, the teacher also provided other learning resources so that students had alternatives. Figure 4 shows the LMS screenshot related to learning resources.
Pair
The teacher asked about some concepts related to the material in a post to find out whether the students had read the content. This posting was done to ensure the students' readiness for the discussion (Pair) to be carried out with a partner. After students studied the material independently, the teacher encouraged discussion activities with a partner. In this activity, the teacher gave a problem to be solved in pairs; the problems related the material to issues in daily life. The teacher also provided an example problem to improve understanding; Figure 5 shows one example.
Figure 5. An example elasticity problem requiring numerical manipulation. In part (a), the student must find the cross-sectional area before calculating the tensile stress; in part (b), the student must analyze before making a decision. In Bloom's taxonomy, this problem is at the C4 (analysis) level.
From this problem, we can see that students learn high-level thinking skills. These skills are competencies that must be promoted in learning in Indonesian schools, as stated in the physics competencies. A worked sketch of the part (a) calculation is given below.
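The sketch below works through the part (a) calculation: compute the cross-sectional area of the wire first, then the tensile stress F/A. The load and wire radius are hypothetical, since the numbers in Figure 5 are not reproduced here.

```python
# Tensile stress of a round wire: stress = F / A, with A = pi * r^2.
# F and r below are hypothetical illustration values.
import math

F = 50.0            # applied force in newtons
r = 0.5e-3          # wire radius in metres (0.5 mm)

A = math.pi * r**2  # cross-sectional area
stress = F / A      # tensile stress in pascals
print(f"A = {A:.3e} m^2, stress = {stress:.3e} Pa")
```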
Share
In the Share activity, all students discussed in the post, so every student participated in the solution of the problem. At this stage, the teacher actively encouraged students to communicate with other students; in this way, students develop their communication skills. If a student did not express an opinion, the teacher greeted them to encourage them to argue in the post.
The critical factor at this stage was the intensity of communication in the posting. Teachers needed to provide immediate feedback on posts made by students, giving appreciation for correct student comments and rectifying possible misconceptions.
Results

T-test
A t-test was conducted to find out whether there were significant differences between the pretest (prior knowledge) means of the control group (CL2) and the experimental group (CL1). This test was carried out to ensure that the two groups were equivalent. Table 2 shows the test results: at an error margin of 0.05, there was no significant difference between the two groups. Thus, the two groups were equivalent.
Descriptive data
After learning, students from both groups were given ten problems. The results of the assessment of these problems were used as the learning achievement (DVAR) measure. Table 3 and Table 4 show the descriptive statistics of the measurement results for DVAR, COV1, and COV2, respectively. T-tests for COV1 and COV2 were also carried out to ensure that the experimental and control groups were equivalent; the results showed that the two covariates did not differ significantly between the two groups. The data do show, however, that the standard deviation in the experimental group (CL1) was smaller than in the control group (CL2) for both learning interest (COV1) and numerical ability (COV2).
Analysis of Prerequisite Test
The prerequisite tests used in this research were the normality test and the homogeneity test.
Normality Test
The normality test used the Kolmogorov-Smirnov test. For the experimental group (CL1), the significance probability was 0.101 for learning achievement (DVAR), 0.101 for learning interest (COV1), and 0.118 for numerical ability (COV2). Since the significance probability of all three variables was higher than the significance level (0.05), the three variables had a normal distribution. For the control group (CL2), the significance probability was 0.116 for learning achievement (DVAR), 0.118 for learning interest (COV1), and 0.345 for numerical ability (COV2); these three variables therefore also have a normal distribution.
Homogeneity Test
Based on the Levene statistic, the homogeneity test showed that the three variables were homogeneous: the significance score of each variable was higher than the significance level of 0.05, meaning that the experimental group (CL1) and control group (CL2) were homogeneous.
ANCOVA
To find out whether the learning strategy influenced students' learning achievement on the posttest, with numerical ability and learning interest as covariates, ANCOVA was calculated; the results are shown in Table 5.
Calculation of Contribution
Predictor contributions (COV1 and COV2) can be expressed as relative contributions and effective contributions, both of which can be obtained from linear regression analysis. The sum of the effective contributions of the two covariates equals the R-square value (coefficient of determination), and the relative contribution is each predictor's share of that R-square. Table 6 shows the result of the linear regression analysis for the experimental group, and Table 7 shows the correlation coefficients of each variable with the others; we used this table to calculate the contribution of the covariates to the dependent variable. Table 8 shows that COV2 has a higher correlation with DVAR than COV1. The model summary of this regression was used to calculate the COV1 and COV2 contributions: the R-square was .6788, and this value was used to calculate the relative contributions. The final calculation of the contributions is in Table 8. The effective contribution in the experimental group was 34.64% for learning interest and 33.27% for numerical ability towards learning achievement; the corresponding relative contributions were 51% and 49%. A sketch of this calculation is given below.
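The following Python sketch reproduces the arithmetic of this contribution calculation: each predictor's effective contribution is its standardized beta multiplied by its zero-order correlation with the criterion, and the relative contribution rescales that by R-square. The beta and correlation values below are hypothetical, chosen only so the output lands near the figures reported above.

```python
# Effective contribution = standardized beta x correlation with DVAR (x 100%);
# relative contribution = effective contribution / R-square. Values are made up.
beta = {"interest": 0.45, "numeric": 0.42}  # standardized regression weights
r    = {"interest": 0.77, "numeric": 0.79}  # correlations with achievement

effective = {k: beta[k] * r[k] * 100 for k in beta}  # in percent
r_square  = sum(effective.values())                  # should match R^2 x 100
relative  = {k: v / r_square * 100 for k, v in effective.items()}

for k in beta:
    print(f"{k}: effective = {effective[k]:.2f}%, relative = {relative[k]:.0f}%")
```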
Discussion

The Influence of Students' Learning Interest towards Learning Achievement
The covariance and multiple linear regression analyses, calculated for each construct of the ARCS interest questionnaire, show that higher learning interest leads to higher learning achievement, and vice versa: the lower the learning interest, the lower the achievement [17]. Students with low achievement usually do not like the learning material, or the way the teacher explains it in class is not exciting and fun, so they tend to avoid studying at home or listening carefully to the teacher in class.
The calculations for the control and experimental classes show a similar pattern, namely that learning interest influences learning achievement, although the learning interest and achievement of the experimental group were on average higher than those of the control group. This difference can be attributed to the different treatments given to the two groups: the experimental group used informal cooperative learning with Think Pair Share through the Schoology application, while the control group used a lecture, or conventional, learning method.
Think Pair Share is a simple learning method with many benefits; among them, students can participate more and blend into the class, work in groups, discuss, and share what they understand with classmates [21]. Schoology is an application that is easy for students to use, with various features that make it easier for students to learn, to interact with the teacher and other classmates, and to obtain the materials taught by the teacher [8].
The Influence of Students' Numerical Ability towards Learning Achievement
The covariate analysis measuring the impact of students' numerical ability on learning achievement shows that this influence was positive. The numerical ability of the two groups did not differ significantly, although the mean numerical ability of the experimental group was slightly higher than that of the control group. Numerical ability thus appears to influence learning achievement: students with good numerical ability achieve more, while students with low numerical ability achieve less, because numerical ability is the ability to calculate and process numbers. Physics always involves concept understanding and calculation; without good concept understanding and calculation ability, it is hard to reach proper learning achievement. As has been argued, students who are good at calculating tend to have a passion for finishing every question given to them, while students who lack calculating ability tend to be lazy in doing problems and often depend on their friends. Therefore, good numerical ability is necessary for doing and understanding questions and for improving learning achievement [19].
The Significant Influence of Learning Achievement between Experimental and Control Class
In this research, the posttest learning achievement data of the experimental and control classes differed. The average score of the experimental group was higher than that of the control class, both because of the experimental group's higher learning interest and because of the different treatments: the experimental group used Think Pair Share with the Schoology application, while the control group used a lecture, or conventional, method. In addition, students' interest is also influenced by their numerical ability. This means that learning strategy, learning interest, numerical ability, and learning achievement are related. A good learning strategy is one that makes students comfortable and makes learning enjoyable and fun [13,14]. From liking and enjoying learning, students' interest in studying can grow; learning interest can also improve when students have good numerical ability, so that they are more enthusiastic in learning [20]. Thus, students' learning achievement increases strongly. From this explanation, it appears that there is a significant difference between the learning achievement of the experimental and control classes.
Effective and Relative Contribution of Learning Interest and Numerical Ability towards Learning Achievement
The implementation of informal cooperative learning supported by Schoology had a positive impact on learning achievement. Learning interest gave an effective contribution of 34.64% and a relative contribution of 51% towards learning achievement, while numerical ability gave an effective contribution of 33.27% and a relative contribution of 49%. The remaining 32.1% of the variance in learning achievement is due to other factors that were not measured. The high effective and relative contributions of learning interest imply that students accepted the informal cooperative learning strategy using Schoology. It is therefore recommended that teachers implement this learning strategy as one way to optimally influence learning achievement [19,20,22].
The Research Influence on Learning
This research used Schoology as a teaching aid with the Think Pair Share informal cooperative learning strategy to support the learning process. By using Schoology, the teacher and students experienced several conveniences in the learning process, such as [8]:
1. The teacher could post online assignments and evaluations for the class.
2. Students could check or download assignment documents.
3. Students could then submit assignments via the Schoology course dropbox.
4. The teacher could evaluate with comments and/or annotations on students' work and return it to students for review.
In addition, teachers could create a code for parents to follow the course and observe their children's progress, which makes Schoology interesting and easy to navigate and study. Moreover, almost all teachers could register for accounts, develop courses, collect resources, join one or more groups, and start to post online course materials within the first hour of professional development training. Students could also access class materials anywhere and anytime [7].
Conclusions
This study concluded that Schoology as an online teaching aid, combined with informal cooperative learning strategies, can encourage the achievement of student learning performance. The results also showed that the integration of ICT in teaching can facilitate the process of sharing and exchanging and help maintain active online classrooms. Further research should study the application of this strategy to different disciplines and apply it to larger numbers of students and classes, so that the impact of its utilization becomes more visible.
Microbleeds as a predictor of intracerebral haemorrhage and ischaemic stroke after a TIA or minor ischaemic stroke: a cohort study
Objectives We examined whether patients with cerebral microbleeds on MRI, who started and continued antithrombotic medication for years, have an increased risk of symptomatic intracerebral haemorrhage (ICH). Design Prospective cohort study. Settings Multicentre outpatient clinics in the Netherlands. Participants We followed 397 patients with newly diagnosed transient ischaemic attack (TIA) or minor ischaemic stroke receiving anticoagulants or antiplatelet drugs. 58% were men. The mean age was 65.3 years. 395 (99%) patients were white Europeans. MRI including a T2*-weighted gradient echo was performed within 3 months after the start of medication. 48 (12%) patients had one or more microbleeds. They were followed every 6 months by telephone for a mean of 3.8 years. Primary and secondary outcome measures The primary outcome was symptomatic ICH. Secondary outcomes were all strokes, ischaemic stroke, myocardial infarct, death from all vascular causes, death from non-vascular causes and death from all causes. Results Five patients (1%) suffered a symptomatic ICH. One ICH occurred in a patient with microbleeds at baseline (adjusted HR 2.6, 95% CI 0.3 to 27). The incidence of all strokes during follow-up was higher in patients with than without microbleeds (adjusted HR 2.3, 95% CI 1.0 to 5.3), with a dose-response relationship. The incidences of ischaemic stroke, vascular death, non-vascular death and death from all causes were higher in patients with microbleeds, but not statistically significantly so. Conclusions In our cohort of patients using antithrombotic drugs after a TIA or minor ischaemic stroke, we found that microbleeds on MRI are associated with an increased risk of future stroke in general, but we did not find an increased risk of symptomatic ICH.
INTRODUCTION
Secondary prophylaxis with antithrombotics in patients with a transient ischaemic attack (TIA) or ischaemic stroke caused by atherosclerosis is effective. 1 In patients with non-valvular atrial fibrillation, oral anticoagulants are the first choice, whereas in patients with sinus rhythm, antiplatelet drugs should be prescribed. 2 3 A feared side effect of antithrombotics is an intracerebral haemorrhage (ICH), which has an annual incidence of 0.3-2%, depending on the type of antithrombotic medication. 4 In patients with no or minor residual deficits after the initial stroke, this issue is of particular importance. Although the incidence of ICH is low, the less residual symptoms there are after the initial stroke, the larger the impact is of new neurological deficit due to ICH caused by prophylactic medication.
Microbleeds are small hypointense dots on T2*-weighted gradient echo MRI. Probably, they represent the remnants of small asymptomatic ICHs and are considered a sign of cerebral microangiopathy. 5 They have been reported in 5% of the healthy elderly and in 10-15% of patients with cerebral infarcts. 6 Cerebral microbleeds may be a marker of the tendency of the brain to bleed. Patients with these lesions have an increased risk for ICH. 6 In patients with normal haemostasis, these spontaneous intracerebral bleedings are thought to remain small and asymptomatic. However, under the influence of antithrombotic drugs, they might grow into a larger symptomatic bleeding. This mechanism might be the explanation for the higher risk of ICH in Asian patients using antithrombotics and the unexpectedly high proportion of ICH in a previous trial with anticoagulants in patients with nondisabling cerebral ischaemia of arterial origin. [7][8][9][10] We studied a prospectively collected cohort of European patients with no or minor residual deficits after the initial ischaemic stroke who were prescribed antithrombotic drugs, with the aim of calculating the risk for a future symptomatic ICH that is associated with the presence of microbleeds.

ARTICLE SUMMARY

Article focus
▪ Use of antithrombotic medication is associated with increased risk of intracerebral haemorrhage (ICH).
▪ Patients who use antithrombotic medication and have cerebral microbleeds on MRI may have an increased risk of ICH compared with patients without cerebral microbleeds.
▪ This finding is consistent with those in Asian studies of patients with microbleeds, but European studies showed conflicting results.

Key messages
▪ In our cohort of mainly white European patients using antithrombotic drugs after a TIA or minor ischaemic stroke, we did not find an increased risk of symptomatic ICH in patients with microbleeds on MRI.
▪ We found that the number of microbleeds on MRI in patients using antithrombotic drugs after a TIA or minor ischaemic stroke was associated with the risk of future stroke in general.
SUBJECTS AND METHODS
From June 2000 to January 2010, we prospectively included patients with a TIA or a minor ischaemic stroke (modified Rankin score of 3 or less) from 10 academic and non-academic hospitals in the Netherlands. This study started as a satellite study of the ESPRIT trial and continued on its own after completion of ESPRIT in December 2005. 11 Patients had to be started on anticoagulants or antiplatelet drugs because of the TIA or minor stroke within the previous 3 months. We excluded patients who were expected to die within months, with a low ability to understand or express themselves in the Dutch language, or those who were pregnant. We also excluded patients with a history of intracranial haemorrhage, brain concussion, or cerebral tumour and patients with a TIA or minor stroke caused by vasculitis. Follow-up was performed by a central trial office.
All patients underwent prespecified MRI including a T2*-weighted fast field echo (FFE) gradient echo within 3 months after the start of medication. We provided all centres with a protocol of MRI sequences (see online supplementary table S1). The MRI sequence parameters that were eventually used in most centres are provided in online supplementary table S2. Data and imaging were collected centrally. MRIs were read centrally by two independent investigators familiar with these imaging techniques. The presence of microbleeds was scored with the Microbleed Anatomical Rating Scale. 12 The κ of this rating was 0.36, which is within the limits of 0.33-0.95 that have been published before in studies of microbleeds. 6 White matter lesions were scored according to the Age-Related White Matter Changes scale. 13 The institutional medical ethics review boards of the participating hospitals approved the study protocol, and all patients provided written informed consent.
Patients were followed up every 6 months by telephone. At each contact, the occurrence of possible outcome events and hospital admissions was recorded. In case an event occurred, a report on the clinical details was requested. For all strokes during follow-up, we required a confirmation by CT scan, MRI or autopsy to determine the ischaemic or haemorrhagic nature of the stroke. For subtyping the ischaemic strokes, we used the Oxford classification. 14 Except for four patients who were lost to follow-up, all patients had a close-out contact between 1 February 2011 and 1 May 2011. All data were collected and checked at the central trial office and entered in a database. During the study, none of the investigators had any knowledge of the event rates according to the presence or absence of microbleeds.
The primary outcome event was an ICH. Secondary endpoints included all strokes, ischaemic stroke, myocardial infarct, death from all vascular causes, death from non-vascular causes and death from all causes.
Death from vascular causes included death caused by cerebral infarction, intracranial haemorrhage, unspecified stroke, myocardial infarction, heart failure, pulmonary embolism, arterial bleeding or sudden death. 11 If no information was available about the cause of death, we classified the reason as vascular other, according to a priori probabilities. 15 Non-fatal ischaemic stroke was diagnosed in case of sudden onset of a new or increasing neurological deficit that persisted for more than 24 h, resulting in an increase in handicap of at least one grade on the modified Rankin scale, and no signs of haemorrhage on the CT scan or MRI of the brain made within 2 weeks of the event. We used the same clinical criteria for the diagnosis of haemorrhagic stroke if a corresponding ICH was detected on CT scan or MRI of the brain. If no brain imaging or autopsy was performed and clinical evidence of stroke was present, we classified the event as stroke, unspecified. The outcome event of myocardial infarction required at least two of the following characteristics: a history of chest discomfort for at least half an hour, a level of specific cardiac enzymes more than twice the upper limit of normal, or the development of specific abnormalities (eg, Q waves) on the standard 12-lead ECG. Outcome events were reported to the central trial office where all relevant data, including a brain scan or ECG, were obtained from the physician in charge. The clinical report of the outcome event was presented to two investigators (VIHK, LJK); they independently classified the event. If the classifications differed, the outcome event was discussed by the investigators, who made a decision by consensus.

ARTICLE SUMMARY

Strengths and limitations of this study
▪ The strength of this study is its specific design for following patients with microbleeds. The follow-up is nearly complete and is the longest in this type of study to date. Owing to the multicentre design, the results can be extrapolated to a general neurology practice. The population of patients whom we studied is an important subgroup of stroke patients, since they had no or only minor neurological deficits after their event and had much to lose if they suffered a complication of their prophylactic treatment.
▪ The cohort is representative of a contemporary outpatient vascular clinic, with a prevalence of microbleeds, proportion of patients using oral anticoagulants, and overall risk for symptomatic ICH and ischaemic events similar to those in recent studies of the same population of patients.
▪ A limitation of this study is that it is underpowered owing to a lower incidence of ICH than we expected. Our estimate of the relationship between microbleeds and ICH might therefore be considered imprecise.
Statistical analysis
We assumed a prevalence of microbleeds of 15% and an annual risk of ICH in patients on antiplatelet or anticoagulant drugs of 0.31%. 11 With 1800 patient-years, we would be able to detect a risk of symptomatic ICH with a risk ratio (RR) of 7.6 with a 95% CI of 1.3 to 52.
We compared the risk of ICH between patients with and without microbleeds at baseline by means of Cox regression analysis. Resulting HRs were accompanied by corresponding 95% CI. In addition to crude HRs, we calculated age-adjusted and sex-adjusted HRs.
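A minimal sketch of such a Cox model in Python using the lifelines package is shown below; the data frame and column names are hypothetical, not the study's actual data set.

```python
# Hedged sketch of the Cox regression described above; "cohort.csv" and its
# column names are hypothetical placeholders, not the study data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")
# columns: years (follow-up time), ich (1 = symptomatic ICH), microbleeds
# (1 = microbleeds at baseline), age, male (1 = male)

cph = CoxPHFitter()
# Adjusted model: microbleeds plus age and sex; a crude model would drop
# the age and male columns.
cph.fit(df[["years", "ich", "microbleeds", "age", "male"]],
        duration_col="years", event_col="ich")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```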
RESULTS
We included 448 patients, of whom 51 did not have an FFE-gradient echo series during the MRI due to various reasons, leaving 397 analysable patients (figure 1). Patients who did not have an FFE-gradient echo MRI were, on average, 5 years younger, smoked more often (45% vs 27%) and had hyperlipidaemia (20% vs 42%) or atrial fibrillation (0% vs 8%) less frequently than patients who had an FFE-gradient echo MRI. There were no other differences between the two groups concerning sex, ethnicity, Rankin scores and other vascular risk factors (see online supplementary table S3).
Patient characteristics of the 397 analysable patients are given in table 1. At inclusion, 101 patients (25.4%) were using or had used antiplatelet agents some time in the past. One hundred and ninety-four (48.9%) had a TIA, 21 (5.3%) a transient monocular blindness and 183 (45.8%) a minor stroke. 392 (98.7%) had a Rankin score of 3 or less. MRI was performed within a median of 11 days (25th-75th centiles: 4-55 days) after the TIA or minor stroke. Of these patients, 19 (4.8%) were scanned with a 0.5 T MRI, 71 (17.9%) with a 1.0 T MRI and 307 (77.3%) with a 1.5 T MRI. Although the number of patients scanned with a 0.5 T MRI was small, there were no microbleeds detected with this type of scan. The proportion of patients detected to have microbleeds with a 1.0 or 1.5 T MRI did not differ significantly (16.9% and 11.7%, respectively).
Forty-eight (12%) patients had microbleeds, of whom 29 (7%) had one, 16 (4%) had between 2 and 10, and 3 (1%) had more than 10 microbleeds. Patients with microbleeds were older and had slightly worse Rankin scores. The mean duration of follow-up was 3.8 (SD 2.5) years, varying from 0.1 to 10.4 years, accumulating to 1509 patient-years. Four (1%) patients were lost to follow-up. During follow-up, 40 (10.1%) patients used oral anticoagulants, but all others used antiplatelet drugs. Data on adherence to the medication for patients who also participated in the ESPRIT trial were as follows: at 5 years, 66% of patients were on aspirin plus dipyridamole, 84% on aspirin alone and 68% on oral anticoagulation. 11 There were five (1.3%, overall incidence 0.3%/year (95% CI 0.1 to 0.8%/year)) patients who had a symptomatic ICH during follow-up; of these, only one occurred in a patient with microbleeds at baseline. In this patient, we detected 5 microbleeds. The corresponding crude HR was 1.8 (95% CI 0.2 to 16) and 2.6 (95% CI 0.3 to 27) after adjustment for age and sex (table 2).
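The quoted incidence and its CI can be verified with an exact Poisson (Garwood) interval; a short sketch, assuming only the 5 events and 1509 patient-years reported above:

```python
from scipy.stats import chi2

events = 5              # symptomatic ICHs during follow-up
patient_years = 1509

rate = events / patient_years
# Exact (Garwood) 95% CI for a Poisson rate
lower = chi2.ppf(0.025, 2 * events) / 2 / patient_years
upper = chi2.ppf(0.975, 2 * (events + 1)) / 2 / patient_years

print(f"incidence {100 * rate:.2f}%/year "
      f"(95% CI {100 * lower:.2f} to {100 * upper:.2f}%/year)")
# -> about 0.33%/year (95% CI 0.11 to 0.77), i.e. the 0.3 (0.1 to 0.8)
#    per year reported above, after rounding
```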
The incidence of all strokes was higher in patients with microbleeds (crude HR 2.6, 95% CI 1.1 to 6.2), also after correction for age and sex (HR 2.3, 95% CI 1.0 to 5.3; figure 2). The more microbleeds a patient had, the higher was the risk for future strokes (table 3). The incidence of ischaemic strokes was also higher in patients with microbleeds, but the HRs did not reach statistical significance. Interestingly, of the ischaemic strokes in patients with microbleeds, the proportion of lacunar infarcts (67%) was larger than that in patients without microbleeds (35%), but again this was not statistically significant (see online supplementary table S4). The incidence of myocardial infarction was low and did not occur at all in patients with microbleeds. Vascular deaths, non-vascular deaths and deaths of all causes were also more frequent in patients with microbleeds, but did not reach statistical significance.
Since white matter lesions are strongly associated with microbleeds, an additional analysis with white matter lesions was performed, but white matter lesions were not associated with either a higher risk of future ICH or ischaemic stroke (data not shown).
DISCUSSION
In our cohort, we did not observe an increased risk for future symptomatic ICHs in patients with microbleeds who used antithrombotic drugs after a TIA or minor ischaemic stroke. However, microbleeds were associated with an increased risk of future stroke in general, with a dose-response relationship: the higher the number of microbleeds, the higher the risk of stroke. These findings confirm that microbleeds should be considered as a biomarker that is associated with an increased risk of future cerebral vascular events. We included patients with no or minor deficits, because in these patients new neurological deficits would have a larger impact than in patients with severe residual deficit caused by the initial stroke.
Our cohort is representative of a contemporary outpatient vascular clinic. The low overall risk for symptomatic ICH and ischaemic events resembles that of prior recent studies in patients on antithrombotic agents for secondary prevention and concurs with our assumptions in the power calculation. 10 16 The prevalence of microbleeds was similar to that reported in patients with a TIA or ischaemic stroke. 6 Moreover, the proportion of patients who used oral anticoagulants equals that in other studies. 16 The risk for ICH in stroke patients treated with antithrombotic agents after an ischaemic stroke probably differs between Eastern and Western patients. In two Chinese studies, a higher risk of ICH was found in patients with microbleeds, with HRs of 7.95 and 7.3, respectively, but with large CIs (95% CI 2.56 to 24.66, and 0.8 to 63, respectively). 7 8 A Japanese study reported an RR of 50 (95% CI 16.7 to 150.9). 9 In a European hospital-based cohort of patients with a major stroke, a trend towards a higher risk of ICH was observed in patients with than in patients without microbleeds (adjusted HR 1.6, 95% CI 0.8 to 3.1). 17 In a Canadian cohort of stroke patients, only one ICH occurred in both groups of patients, with and without microbleeds. 18 In a large European case-control study of patients with ICH, a higher risk of a new ICH during the use of antiplatelet drugs was observed in patients with than in patients without microbleeds (OR 1.27, 95% CI 1.04 to 1.55). 19 In these publications, the population studied was either of a different ethnicity or had another type of cerebrovascular disease than our population, with inherently different baseline risks of ICH. A systematic review of published and unpublished cohorts, including a small European prospective cohort of ischaemic stroke or TIA patients using antithrombotic drugs, suggested that anticoagulants may be hazardous in patients with microbleeds. 20 The strength of our study is its specific design for following patients with microbleeds, with a nearly complete follow-up that is the longest to date. Ninety-nine per cent of patients could be followed up until the close-out visit. Second, owing to the multicentre design, the results can be extrapolated to a general neurology practice. Third, we think that the population of patients we studied is an important subgroup of stroke patients, since they had no or only minor neurological deficits after their event and had much to lose if they would have a complication of their prophylactic treatment.
There are several limitations to our study. Since the incidence of ICH was lower than we expected, the study is underpowered and our estimate of the relationship between microbleeds and ICH might be considered as imprecise. To reach a more accurate result, many more patient-years would be needed. A formal meta-analysis of all available studies might solve this problem. However, even when the risk turns out to be higher in patients with microbleeds, the absolute numbers of ICH are so low that, in our opinion, the efforts to screen patients for microbleeds to prevent a single ICH would not be cost-effective compared with the benefits of antithrombotic drugs to prevent ischaemic events. Our results do not support the need for MRI scanning in every patient in whom antithrombotic treatment should be prescribed.
An explanation for the low incidence of ICH is that most of our patients used aspirin instead of oral anticoagulants. The incidence of ICH was similar to that in studies on patients on antiplatelet drugs. For the few patients on oral anticoagulants, we cannot exclude that the presence of microbleeds predicts a higher risk of ICH. Further studies with European cohorts of patients using anticoagulants in the presence of microbleeds are underway.
A biological explanation for the low incidence of ICH is that the effects of microangiopathy in a white European population differ from those in an Eastern population. The incidence of ICH in Eastern populations is twice as high as in white Western people. 21 The interaction of genetic variations with environmental factors differs in Asian and Western patients. 21 For example, Asian carriers of the APOE genotype ɛ2 and ɛ4 have almost twice as large a risk of future ICH compared with European carriers. 22 A third possible explanation for the low incidence of ICH is that the presumed microhaemorrhages are in fact not small ICHs caused by rupture of small vessels, but merely leakage through the blood-brain barrier, a hypothesis in which breakdown of the blood-brain barrier would lead to both lacunar infarcts and microbleeds, as well as white matter lesions. 23 Such a hypothesis might explain the higher risk for both ICH and ischaemic strokes in patients with microbleeds, since microbleeds are in that case also a marker of the underlying disease leading to both ICH and lacunar infarcts. An important portion (40%) of our patients with microbleeds only had one microbleed. Radiological-pathological correlations of microbleeds are scarce. Although we excluded microbleed-mimics, a single microbleed might not be a sign of general microangiopathy. Also, not all centres had high-quality MRI scanners at the time of the study. Although a minority, some patients were scanned with a 0.5 T MRI, and in none of them did we find a microbleed. Therefore, we might have underestimated the number of microbleeds. 24 With newer high-field MR techniques, more microbleeds in more patients can be detected, perhaps revealing true microangiopathy. 25 In conclusion, we have demonstrated that in our cohort of mainly white European patients the number of microbleeds on gradient echo MRI is associated with the risk of future stroke in general, but we could not demonstrate an association of microbleeds with future symptomatic ICH. There is no need to withhold antiplatelet agents in these patients. The number of patients who used anticoagulants in this study was too small to draw definitive conclusions in this perspective.
A Diet Containing Rutin Ameliorates Brain Intracellular Redox Homeostasis in a Mouse Model of Alzheimer’s Disease
Quercetin has been studied extensively for its anti-Alzheimer's disease (AD) and anti-aging effects. Our previous studies have found that quercetin and its glycoside form, rutin, can modulate the proteasome function in neuroblastoma cells. We aimed to explore the effects of quercetin and rutin on intracellular redox homeostasis of the brain (reduced glutathione/oxidized glutathione, GSH/GSSG), its correlation with β-site APP cleaving enzyme 1 (BACE1) activity, and amyloid precursor protein (APP) expression in transgenic TgAPP mice (bearing human Swedish mutation APP transgene, APPswe). On the basis that BACE1 protein and APP processing are regulated by the ubiquitin–proteasome pathway and that supplementation with GSH protects neurons from proteasome inhibition, we investigated whether a diet containing quercetin or rutin (30 mg/kg/day, 4 weeks) diminishes several early signs of AD. Genotyping analyses of animals were carried out by PCR. In order to determine intracellular redox homeostasis, spectrofluorometric methods were adopted to quantify GSH and GSSG levels using o-phthalaldehyde and the GSH/GSSG ratio was ascertained. Levels of TBARS were determined as a marker of lipid peroxidation. Enzyme activities of SOD, CAT, GR, and GPx were determined in the cortex and hippocampus. BACE1 activity was measured by a secretase-specific substrate conjugated to two reporter molecules (EDANS and DABCYL). Gene expression of the main antioxidant enzymes, APP, BACE1, a Disintegrin and metalloproteinase domain-containing protein 10 (ADAM10), caspase-3, caspase-6, and inflammatory cytokines was determined by RT-PCR. First, overexpression of APPswe in TgAPP mice decreased GSH/GSSG ratio, increased malondialdehyde (MDA) levels, and, overall, decreased the main antioxidant enzyme activities in comparison to wild-type (WT) mice. Treatment of TgAPP mice with quercetin or rutin increased GSH/GSSG, diminished MDA levels, and favored the enzyme antioxidant capacity, particularly with rutin. Secondly, both APP expression and BACE1 activity were diminished with quercetin or rutin in TgAPP mice. Regarding ADAM10, it tended to increase in TgAPP mice with rutin treatment. As for caspase-3 expression, TgAPP mice displayed an increase, which was reversed by rutin. Finally, the increase in expression of the inflammatory markers IL-1β and IFN-γ in TgAPP mice was lowered by both quercetin and rutin. Collectively, these findings suggest that, of the two flavonoids, rutin may be included in a day-to-day diet as a form of adjuvant therapy in AD.
Introduction
Decline in cognitive function is a fundamental clinical neurodegeneration symptom strictly related to age [1]. The impact of nutrition on age-associated cognitive decline is an increasingly growing topic, as it is a vital factor that can easily be modified. Pathological changes in the brain observed during cognitive decline take place well before any clinical manifestation, which mostly occurs in old age. This provides a lengthy period of time to establish prevention strategies concerning age-related cognitive decline and dementia, which is a major public health concern [2].
Genotyping of Mice
The TgAPP mouse colony was developed in our laboratory from Tg2576 heterozygous males and wild-type females. Genotyping of mice was performed to detect transgenic individuals. Of all the mice tested, approximately 40% were found to be transgenic (TgAPP).
The PCR products obtained were separated by electrophoresis in 1.5% agarose gels in 0.5X TBE (Tris-Borate-EDTA) buffer at 70 V (constant voltage) and then imaged by staining with GelRed (Millipore). The amplification profile for both transgenic and WT mice is shown in Figure 1.
Glutathione
In order to evaluate intracellular redox homeostasis in neurons in TgAPP mice, GSH and GSSG levels were quantified, and the GSH/GSSG ratio was determined as a marker of cellular reducing power in both males and females (Figure 2).
[Figure 2 legend: (1) WT; (2) TgAPP control; (3) TgAPP + Q; (4) TgAPP + R. * p < 0.05 or ** p < 0.001 vs. TgAPP control (ANOVA followed by the post-hoc Newman-Keuls multiple comparison test); # p < 0.05 (Student t-test).]
In the assessment of the effect of the transgene on the glutathione system, a decline in the cellular-reducing power (GSH/GSSG) was observed in the TgAPP mice with respect to WT animals, in both males and females, and in both areas of the brain ( Figure 2C3,H3), especially in hippocampus. In both WT and TgAPP mice, the GSH/GSSG ratio was significantly lower in males than in females ( Figure 2C3,H3; p < 0.05). While in TgAPP females this decline is the result of lower GSH levels ( Figure 2C1,H1), in TgAPP males it is mostly attributed to an increase in GSSG levels ( Figure 2C2,H2).
Changes in GSH and GSSG levels of TgAPP mice, respectively, upon quercetin or rutin treatment, are more prominent in males than in females. It seems that quercetin tends to augment GSH levels ( Figure 2C1,H1; p < 0.05) and rutin to lower GSSG levels ( Figure 2C2,H2; p < 0.05).
Quercetin and rutin treatments, in both males and females, were able to reverse the fall in the ratio GSH/GSSG in hippocampus ( Figure 2H3) where the recovery of redox power was significant versus the untreated TgAPP mice. In males, this index achieved similar values to those of WT mice in hippocampus (H3). In females, although this ratio is not raised up to that of the WT mice, treatment with quercetin and rutin enhanced it significantly in comparison to that of the TgAPP mice in their hippocampi (H3: quercetin, p < 0.001; rutin, p < 0.05).
Thiobarbituric Acid Reactive Substances (TBARs)
Levels of TBARs were determined as a marker of lipid peroxidation. Using calibration curves, the results were expressed as malondialdehyde (MDA) concentration ( Figure 3).
[Figure 3 legend: (1) WT; (2) TgAPP control; (3) TgAPP + Q; (4) TgAPP + R. * p < 0.05 or ** p < 0.001 vs. TgAPP control (ANOVA followed by the post-hoc Newman-Keuls multiple comparison test); # p < 0.05 (Student t-test).]
Following APP overexpression, a significant increase in MDA levels when compared to WT mice was observed in both the cortex ( Figure 3C) and the hippocampus ( Figure 3H), and in both males and females (p < 0.001).
In TgAPP females, both quercetin and rutin treatments almost restored MDA levels to those of WT mice (Figure 3C,H). In TgAPP males, likewise, both treatments reinstated MDA levels to those of WT mice in the cortex (Figure 3C) and decreased them even further in the hippocampus (Figure 3H). In both WT and TgAPP mice, untreated and flavonoid diet-treated, MDA levels were sex-dependent (Figure 3C,H; p < 0.05), except for the quercetin-treated TgAPP mice in hippocampus.
Enzyme Activity and Expression of Antioxidant Enzymes
To address whether regulation of the enzymatic activity or the gene expression of the main antioxidant enzymes, or both, occurs upon a quercetin or rutin diet, determination of the enzymatic activity and mRNA levels was performed. Figure 4 shows the enzyme activities of SOD, CAT, GR, and GPx, determined in female and male mice, in the cerebral cortex ( Figure 4a) and the hippocampus (Figure 4b).
As a consequence of APP overexpression, only a significant decrease in CAT activity was observed in TgAPP mice compared to WT mice in both the cortex (Figure 4a(C2); p < 0.05) and the hippocampus (Figure 4b(H2); p < 0.05).
Quercetin treatment did not produce any significant variation in enzyme activities in comparison to TgAPP mice, in males or females, in the brain areas studied. In contrast, animals treated with rutin experienced an increase in CAT activity in the cortex (Figure 4a(C2)) and in GR activity in the hippocampus (Figure 4b(H3)) in both males and females (p < 0.05). Moreover, rutin increased hippocampal CAT activity in TgAPP males (Figure 4b(H2)) and GPx activity in females (Figure 4b(H4)). Figure 5 shows the gene expression of the main antioxidant enzymes, SOD, CAT, GR, and GPx, determined in female and male mice, in the cerebral cortex ( Figure 5a) and the hippocampus (Figure 5b).
No differences in gene expression between TgAPP and WT mice are observed for the main antioxidant enzymes (Figure 5a,b). TgAPP males treated with rutin showed a significant increase in the expression of CAT in the hippocampus (Figure 5b(H2)). As for the hippocampal GPx, a similar pattern to CAT was observed, although the increase was not significant (Figure 5b(H4)).
APP Processing: BACE1 and ADAM10
The results of BACE1 enzyme activity in both the cerebral cortex and the hippocampus in males and females are shown in Figure 6, expressed as percentages of activity with respect to untreated TgAPP mice. In Figure 6, BACE1 enzyme activity in TgAPP mice was found to increase by around 10% when compared to WT mice, in both the brain areas under investigation and in both sexes, and was found to be statistically significant (p < 0.05).
The increase in activity observed in the transgenic mice was lowered by both quercetin and rutin treatments, both in the cortex and in the hippocampus. Nevertheless, it could still be noted that the rutin effect was slightly greater than that of quercetin in males.
Once the activity of BACE1 was known, we decided to carry out the gene expression study of APP, the main characteristic of the transgenic animal model, and its main processing enzymes: BACE1 and ADAM10. Figure 7 shows the results obtained in female and male mice, both in the cortex and the hippocampus.
In both sexes, a significant increase in APP expression greater than 85% was observed with respect to WT mice, demonstrating the overexpression of the gene both in the cerebral cortex (Figure 7C1; p < 0.05) and in the hippocampus (Figure 7H1; p < 0.05). Treatments with quercetin and rutin were able to reduce this expression by more than 45% (p < 0.05) for both male and female mice in both brain areas under investigation (Figure 7C1,H1), with the effects being more prominent in the hippocampus (Figure 7H1).
As for the BACE1 protein expression, though BACE1 activity was altered, there were no significant differences between transgenic and non-transgenic mice regardless of sex and flavonoid treatment examined.
Thus, we evaluated the ADAM10 expression involved in the non-amyloidogenic processing of APP. Although the changes in ADAM10 expression in TgAPP mice in comparison to WT mice were not statistically significant, a slight decrease was observed. Regarding the flavonoid treatments, rutin displayed an increasing trend in ADAM10 expression, both in males and females and in both areas of the brain (Figure 7C3,H3).
Expression of Caspase-3 and Caspase-6
TgAPP mice showed an increase in caspase-3 gene expression (Figure 8C1,H1), which was significant and greater than 30% compared to the hippocampi of WT mice (Figure 8H1; p < 0.05). As for caspase-6 expression, no differences were observed between transgenic and non-transgenic mice (Figure 8C2,H2).
Quercetin and rutin treatments were able to lower caspase-3 mRNA levels in the hippocampus in a statistically significant manner (Figure 8H1; p < 0.05), with inhibition percentages of around 17% and 27% for female and male mice, respectively. In the cerebral cortex, significant differences were only observed in the treatment with rutin in males (Figure 8C1; p < 0.05).
With regard to caspase-6, quercetin and rutin treatments did not exert any statistically significant effect in the cortex or in the hippocampus (Figure 8C2,H2), though in the latter caspase-6 in TgAPP males showed a tendency to decrease (Figure 8H2).
Inflammation Markers
The results obtained for gene expression of the inflammatory mediators IL-1β, TNF-α, and IFN-γ are shown in Figure 9.
In the TgAPP, there was a significant increase in IL-1β gene expression of around 20% in the cortex and hippocampus in both sexes compared to WT mice (Figure 9C1,H1; p < 0.05). As regards TNF-α, although higher mRNA levels are shown in TgAPP, they are not statistically significant in relation to WT mice (Figure 9C2,H2). Regarding IFN-γ, there was an increase of around 30% in its expression in males, which was only statistically significant in the cortex (Figure 9C3; p < 0.05).
Treatments with quercetin and rutin, both in females and males, were able to diminish IL-1β expression in the cerebral cortex and hippocampus in comparison to control TgAPP mice (Figure 9C1,H1; p < 0.05), obtaining similar values to those of WT mice, and particularly lower in the hippocampi of male mice (Figure 9H1; p < 0.05).
The overall effect of both flavonoid treatments on IL-1β expression was not observed with TNF-α nor with IFN-γ. Thus, TgAPP males, upon quercetin treatment, underwent a significant decrease in hippocampal TNF-α (Figure 9H2; p < 0.05) and in cortical IFN-γ expression (Figure 9C3; p < 0.05).
Assessment of Degenerating Neurons and Its Projections
No characteristic signs of neurodegeneration were observed at the age at which the transgenic TgAPP mice were tested, compared to WT mice, nor did treatments with quercetin and rutin for 4 weeks show any change in comparison to TgAPP (Figure S1, Supplementary data).
Expression of Ionotropic Glutamate Receptors
No significant differences were found in receptor expression, comparing the values obtained for the control TgAPP mice with those obtained for the WT mice. There were also no notable effects on the expression of these ionotropic receptors in the presence of quercetin or rutin treatment (Table S5, Supplementary data).
Discussion
The purpose of our present study was to assess the impact of two flavonoids, quercetin and rutin, at the first stages of AD pathogenesis, regardless of their effect on neurodegeneration and/or cognitive function. The cortex and hippocampus were the areas of the brain under analysis, as they are the most affected brain structures in AD. It should be taken into account that quercetin and rutin were administered through a formulated diet containing either one of the two flavonoids, with the aim to mimic, in an AD animal model, the intake of a healthy human diet containing an active ingredient.
In particular, the transgene APPswe in the C57B6 mouse exerted a significant impact on GSH/GSSG ratio, MDA levels, antioxidant enzyme capacity, APP expression, BACE1 activity, and caspase-3 and IL-1β expression. Whilst APP mutations in humans generally result in typical AD, they are predominantly linked to solely amyloid pathology in APP transgenic mice and there is no noticeable neurodegeneration [14,15], as no characteristic signs were observed in our transgenic TgAPP mice compared with the WT mice (Supplementary data, Figure S1). Counterstaining with 4′,6-diamidino-2-phenylindole (DAPI) of hippocampal neurons allowed us to observe the nuclear morphology, as this compound is a fluorescent dye for nucleic acids. We did not observe fragmented or lobular nuclei, typically apoptotic; nor did we observe any remarkable differences comparing the hippocampal histological sections of the control transgenic line TgAPP with respect to the WT sections; nor did we observe any differences between the quercetin and rutin treatments with respect to the control TgAPP mice.
In the panel of AD biochemical features to be analyzed, we focused primarily on determining the GSH/GSSG ratio upon either one of the two flavonoid diets, since depletion of GSH levels represents one of the most important early biochemical markers in AD [16,17] and has been observed during its pathogenesis and disease progression. Measurements of brain GSH levels [18] and, more recently, blood GSH levels [19] have shown promise as diagnostic markers for early stages of AD. Moreover, efforts have also been made to supplement endogenous GSH stores with GSH itself or its precursors [20][21][22]. In our study, a decline in the cellular reducing power (GSH/GSSG) was observed in the TgAPP mice with respect to WT animals, in both males and females, and in both areas of the brain. In cortex and hippocampus of both WT and TgAPP mice, the GSH/GSSG ratio was lower in males than in females. Quercetin and rutin diets significantly increased the GSH/GSSG ratio in comparison to untreated TgAPP mice, and this increase was more pronounced in the hippocampus. The changes in GSH and GSSG levels and GSH/GSSG ratio upon quercetin or rutin treatment of males, regarding increasing redox power, were more prominent than in females. The results from our determinations may reveal an important basis underlying sex-associated differences in Tg2576 mice in the susceptibility to the oxidative damage of macromolecules on one hand, since the glutathione system is a versatile reductant in multiple biological functions, and in the impact of preventive flavonoid diets in restoring its physiological status on the other hand. As we will see throughout this discussion, we have set the increase in the GSH/GSSG ratio as the main axis that might explain the set of effects observed in the TgAPP mice.
It has recently been proposed that the GSH/GSSG ratio, rather than simply functioning as a redox buffer, would instead operate as a main regulatory mechanism, allowing proteins to attain their native conformation and functionality by tightly controlling the thiol-disulphide balance of the cellular proteome. In short, the glutathione system arises as essential to preserve a healthy proteome, showing that disruption of glutathione redox homeostasis (i.e., genetically or pharmacologically) increases protein aggregation due to disturbances in the efficacy of autophagy [23]. Therefore, strategies aimed at maintaining glutathione redox homeostasis may have a therapeutic potential in diseases associated with protein aggregation, such as AD. Closely related to the preservation of the proteome is the ubiquitin-proteasome degradation machinery, which is involved in the pathogenesis of AD. The proteasome selectively degrades multiple substrates that are crucial in maintaining neuronal homeostasis, including the catabolism of oxidized and aggregated proteins. BACE1 undergoes ubiquitination, and it has been demonstrated that blocking the ubiquitin-proteasome pathway will inhibit BACE1 degradation and consequently lead to increased BACE1 enzymatic activity, more β-cleavage product C99, and increases in both Aβ 1-40 and Aβ 1-42 in neuronal and non-neuronal cells [10]. Our previous studies have found that both flavonoids, quercetin and rutin, affect various signaling pathways and molecular networks associated with modulation of proteasome function in neuroblastoma cells [9]. In addition, it has been demonstrated that neurons supplemented with reduced glutathione (GSH) recovered the proteasome activity and reduced aggregate formation [11], since the proteasome function is redox status-regulated [24]. Therefore, the increase in the GSH/GSSG ratio experienced by the animals upon having a quercetin or rutin diet is consistent with the modulation of the proteasome by quercetin and rutin, demonstrated ex vivo previously.
As previously mentioned, redox imbalance leads to highly oxidatively-modified proteins that tend to accumulate and create aggregates resulting in proteasome impairment [25]. Thus, given the crucial role of oxidative stress in the pathogenesis of AD, biomarkers of oxidative stress, including lipid peroxidation (MDA levels) and antioxidant enzymes, were assessed in the cortex and hippocampus in the TgAPP and WT mice. SOD, CAT, GR, and GPx are the most important antioxidant enzymes: they act against oxygen free radicals, regulate free radical metabolism as part of the free radical scavenging system, and protect cells from lipid peroxidation. In our study, as a consequence of APP overexpression, a generalized decrease in antioxidant enzyme activities was observed in TgAPP mice compared to WT mice, being statistically significant for CAT. Consistent with reduced GSH levels, lipid peroxidation was significantly increased in the TgAPP mice. While the source of oxidative stress in human AD is highly complex and multifactorial, the amyloid pathology developed in mice seems to be sufficient to initiate the pathological process leading to increased oxidative stress in the brain [26]. Animals treated with rutin experienced an increase in CAT activity in the cortex and in GR activity in the hippocampus, in both males and females. Only animals treated with rutin experienced changes in gene expression of CAT and GR in the cortex and the hippocampus in both males and females, and GPx in the hippocampi of female mice. In this context, several natural compounds have been shown to affect the crosstalk between the proteasome and redox regulation. More precisely, quercetin is a known Nrf2 activator [27] which exhibits antioxidant properties through the stimulation of proteasome function, promoting increased oxidative stress resistance and conferring enhanced cell longevity [28].
Tissue-specific expression of BACE1 is critical for normal APP processing, and its dysregulated expression may play a role in AD pathogenesis. BACE1 is predominantly expressed in hippocampal neurons, the cortex, and the cerebellar granular layer [10]. It should be noted that earlier studies have shown that Swedish mutant APP transgenic mice had significantly increased brain levels of Aβ at a steady state [29], suggesting that BACE1 plays an essential role in the amyloidogenic pathway in AD pathogenesis and is a good therapeutic target for AD treatment. In our study, we observed a significant reduction of BACE1 activity upon quercetin and rutin treatments, which might contribute to the decrease of Aβ deposition in mice. We argue that, rather than solely operating as direct inhibitors of the enzyme, quercetin and rutin might exert a reduction in BACE1 activity related to an increase in the ratio GSH/GSSG, based on the hypothesis of an enhanced recovery of proteasome activity. In this sense, it is known that targeting of BACE1 inhibitors to the β-cleavage site of APPswe (Swedish mutation) occurs before it reaches the plasma membrane, whereas APPwt (wild-type) is processed in an early endosome originating at the cell surface. Therefore, BACE1 that cleaves APPwt is sometimes bound to the BACE1 inhibitor on the cell surface prior to APP processing; however, the enzyme that processes APPswe is not [30]. It is for this reason that the aberrant localization of APPswe processing might significantly lower the potency of quercetin and rutin as BACE1 inhibitors. Thus, we are more inclined to support that the decrease in BACE1 activity in this in vivo model is due not so much to inhibition of the enzyme as to the increase in the GSH/GSSG ratio. In any case, reduced BACE1 activity could be interpreted as a putative attempt to reduce β-amyloid production in the TgAPP mice. As for the most remarkable effect of quercetin and rutin in the hippocampus on BACE1 activity attenuation, it is worth noting that the cortex has a significantly higher neuron density than the hippocampus [31], and a selective impairment of the proteasome in the AD pathological phenotype makes the cortex more vulnerable and affected than the hippocampus [32].
After determining the effect of the treatments on BACE1 enzyme activity, we were interested in evaluating its expression. Curiously, no significant differences in BACE1 expression were found between TgAPP and WT mice and no noticeable changes were observed with quercetin or rutin treatment. Therefore, it seems that the increase in BACE1 enzyme activity is not associated with an increase in expression. In this context, it is remarkable that Apelt et al. [14] found an increase in cortical BACE1 activity in Tg2576 mice between ages of 9 and 13 months while the expression level of BACE1 protein and mRNA did not change with age. Furthermore, evidence has been found supporting that fibrillar amyloid Aβ 1-42, rather than soluble amyloid Aβ 1-42, is able to upregulate BACE1 protein expression, and thus small modifications in the ratio of amyloid isoforms may modulate amyloid aggregate conformations and cell damage [33]. Thus, the absence of change in BACE1 expression upon an increase of its activity that we found may account for the prevalence of soluble amyloid Aβ 1-42 over the fibrillar amyloid Aβ 1-42 isoform in our mouse model TgAPP.
Following the determination of gene expression of the enzymes involved in APP processing, we evaluated the effect of quercetin and rutin on the enzyme α-secretase involved in the non-amyloidogenic processing of APP. We focused on ADAM10 because it is the physiologically most important constitutive isoform of α-secretase. ADAM10 counteracts the generation of neurotoxic oligomeric Aβ plaques via cleaving APP within the Aβ domain to produce sAPPα and the C-terminal fragment (α-CTF) [34,35]. Although the changes in ADAM10 expression found in our study were not statistically significant, a slight decrease in ADAM10 expression was observed in TgAPP mice relative to WT mice. Predominantly, rutin treatment showed a tendency to increase ADAM10 gene expression in both brain areas under study. Postina et al. [36] showed that the up-regulation of wild-type ADAM10 in the hippocampus of an AD mouse model mediated sAPPα secretion, leading to inhibition of Aβ plaque generation. The effect of quercetin has been studied in an aluminum chloride-induced AD rat model, showing a significant enhancement of the α-secretases (ADAM10 and ADAM17) in the hippocampus compared to untreated animals. This indicates that quercetin possesses the potential to increase the non-amyloidogenic pathway through the activation of α-secretase genes [37]. Preclinical data reinforce the hypothesis that enhancing brain sAPPα levels is a potential strategy to improve AD-related symptoms and attenuate synaptic deficits. ADAM10 and BACE1 compete for APP cleavage; therefore, potentiating ADAM10 activity might inhibit neurotoxic amyloid generation. Moreover, sAPPα can prevent the activation of the stress JNK-signaling pathway, leading to activation of NF-κB-induced phosphorylation activity, which leads to proteasome degradation [38]. Therefore, the formation and the accumulation of disease-related protein aggregates are significantly reduced, and the cellular proteasome activity is enhanced, thereby providing evidence for a function of sAPPα in the regulation of proteostasis [39]. Furthermore, it has been demonstrated that sAPPα specifically upregulates glutamate AMPA receptor synthesis and its trafficking [40]. In our study, we explored whether the slight increase of ADAM10 expression upon rutin treatment exerts some influence on glutamatergic synaptic transmission. As shown in the Supplementary data section, no significant effects on the expression of these ionotropic receptors were observed upon quercetin and rutin diets, perhaps due to a weak increase in ADAM10 expression, which is not sufficient for the upregulation of the AMPA receptor (Supplementary data, Figure S2 and Table S5).
It should be taken into consideration that in vitro studies have shown a wide variety of ADAM10 substrates [41], and therefore, undesirable effects obtained by non-specific ADAM10-targeting might be found in cancer proliferation, cell adhesion, promotion of T cell/NK-cell precursor and inflammation, etc. [42]. To circumvent this constraint, our study suggests a strategy aimed at promoting the release of sAPPα in a more physiological manner. This approach might be based on a long-term intake of an active ingredient (quercetin or rutin), which is consumed through a healthy human diet. However, further studies are needed to find out whether the increase in ADAM10 is flavonoid dose-dependent and whether the potential beneficial effects outweigh putative side effects.
As for the expression of APPswe, although the insertion of the human APP transgene in the mouse genome guarantees that APPswe is overexpressed from birth, it has been reported that APP mRNA and protein hippocampal levels show significant fluctuations during animal development, being maximal when mice are asymptomatic (1-month-old) and decreasing when full symptomatology occurs [43]. Notwithstanding this issue, APP expression both in the cortex and the hippocampus was significantly higher compared to that of WT mice in our study. Further treatment with quercetin or rutin was able to significantly reduce such expression for both male and female mice, in both areas of the brain. These findings are in line with those reported by Augustin et al. [44] who studied a standardised extract of Ginkgo biloba (Egb761), rich in flavonols such as quercetin, in 4-month-old female TgAPP mice, finding decreased APP mRNA and protein levels. Taking into consideration that upregulation of APP translation in Tg2576 mice occurs in the prodromal and early symptomatic stages [45], it is likely that a restoration of APP translation by quercetin or rutin might have taken place in our TgAPP mice and, likely, in an early symptomatic stage, resulting in reduction of cortical and hippocampal levels of APP, BACE1 activity, and caspase-3 activation.
Furthermore, it has been reported elsewhere that in Tg2576 mice (in the absence of neuronal loss) there is an increase in caspase-3 activation in the hippocampus [46], as found in our study, at the onset of memory impairment, together with a reduction in dendritic spines prior to the deposition of extracellular amyloid [46]. There is evidence in support of non-apoptotic roles for caspases in the nervous system without neuronal death [47], and caspase-3 activity has been localized to dendritic spines where it may elevate calcineurin levels. In turn, the dephosphorylation of the GluR1 subunit of AMPA-like receptors triggered by calcineurin is thought to result in postsynaptic dysfunction. Our values of caspase-3 expression, as a consequence of transgenesis, are in agreement with those obtained by other researchers who reported an increase in caspase-3 expression at the level of dendritic spines in the hippocampus of TgAPP mice [48]. Since APP contains three distinct cleavage sites for caspase-3 in its amino acid sequence, two of which are located at the level of the extracellular domain and one in the intracellular C-terminal portion of the APP tail [49], hydrolysis of APP by caspase-3 may alter the proteolytic processing of APP in favor of the amyloidogenic pathway [50], leading to the release of a cytotoxic C-terminal-derived peptide of 31 amino acids in length (C31), for example [51]. This suggests that, since caspase-3 can mediate the amplification of toxic fragment release from APP, lowering caspase-3 expression by quercetin or rutin may allow for the clearance of aggregated protein. In addition, as mentioned earlier, we have explored the influence of both the increase and decrease of caspase-3 expression in glutamatergic synaptic transmission, based on the ability of calcineurin-activated caspase-3 to dephosphorylate the GluR1 subunit of AMPA receptors at the postsynaptic level. These molecular modifications alter glutamatergic synaptic transmission and neuronal plasticity at the level of dendritic spines in the hippocampus [48]. Theoretically, pharmacological inhibition of caspase-3 activity in TgAPP mice might rescue the AD-like phenotype from a mechanism that drives synaptic failure. However, despite the augmentation in caspase-3 expression in our TgAPP mouse, we found no significant differences in AMPA receptor expression compared to that in WT mice, as mentioned earlier. It might be that the changes in caspase-3 expression are not prominent enough to produce significant modifications in AMPA receptor expression (Supplementary data, Figure S2 and Table S5).
As for the values of caspase-6 expression, no significant differences were found between TgAPP and WT mice. Activation of caspase-6 has been identified as an important mediator of neuronal stress that cleaves important cytoskeletal proteins (Tau and α-tubulin), thus disrupting the ubiquitin-proteasome degradation of misfolded proteins, and a number of actin-regulating post-synaptic density proteins [52]. The unchanged expression of caspase-6 in our study agrees with the absence of characteristic signs of neurodegeneration at the age at which these transgenic mice were evaluated, compared with the WT mice (Supplementary data, Figure S1).
A marked increase in neuroinflammatory mediators has been observed in AD patients, mainly around senile plaques [53][54][55]. Astrocytes are the main supplier of GSH to microglia and neurons. During chronic inflammation and oxidative stress, astrocytes release toxic inflammatory mediators and free radicals, accelerating activation of microglia and neurodegeneration [56]. It is worth noting that decreased intracellular glutathione is related to the activation of the inflammatory pathways, p38 MAP-kinase, Jun-N-terminal kinase (JNK), NF-κB, in human microglia and astrocytes [57]. In this regard, we decided to quantify the levels of IL-1β, IFN-γ, and TNF-α in our animal model and determine the effect of quercetin and rutin on them. As is known, inflammation promotes defective processing of Aβ peptide and APP, promoting Aβ peptide aggregation and in turn modifying Aβ reactivity [58]. Thus, in our study, we observed that TgAPP mice had increased mRNA levels of the pro-inflammatory mediators IL-1β, TNF-α and IFN-γ, compared to WT mice, showing that overexpression of APPswe might induce neuro-inflammatory cascades triggering a series of molecular pathways in glia and neurons, which would activate the inflammatory response. Quercetin and rutin were able to attenuate IL-1β gene expression in both males and females and in the brain areas studied. Several pieces of evidence support the anti-inflammatory effect exerted by quercetin at the CNS level, as it may inhibit the activation of transcription factors such as the nuclear factor-kappa B (NF-κB) [59], involved in the induction of iNOS, and therefore, decrease the release of mediators such as IL-1β, TNF-α and IFN-γ [60]. Regarding the impact of GSH on the inflammatory response, it should be noted that GSH is involved in the maintenance of optimal cytokine levels in such a way that the expression of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) is increased due to GSH depletion, whereas the expression of anti-inflammatory cytokines (i.e., IL-10) remained unaltered. This GSH homeostasis alteration happens due to upregulation of the NF-κB and JNK signaling pathways, which could be a feasible apoptotic pathway toward neuronal cell death [61]. In our study, down-regulation of NF-κB by quercetin and rutin might be a plausible mechanism to recover the GSH/GSSG homeostasis and therefore the cause of the balance between pro-inflammatory and anti-inflammatory cytokines. Lastly, since the BACE1 promoter has an NF-κB binding site, inflammation-induced activation of NF-κB facilitates the upregulation of BACE1 expression, and subsequently increases Aβ production [62]. Thus, if down-regulation of NF-κB occurs upon quercetin and rutin diets, BACE1 activity would decrease as a result of the regulated release of pro-inflammatory, but not anti-inflammatory, cytokines.
Experimental Animals
A transgenic mouse (Tg2576, B6;SJL-Tg(APPswe)2576 Kha) that expresses the Swedish double mutation of human amyloid precursor protein (hAPP) was used as the animal model of experimental AD [14]. The mouse is a knock-in heterozygote line which expresses the human AβPP695 isoform with the double Swedish mutation (K670N/M671L; Lys670→Asn and Met671→Leu) under the control of the hamster prion protein promoter [63]. As a result, this mouse exhibits levels of human amyloid-β precursor protein (AβPP) six times greater than endogenous mouse AβPP levels. In addition, this mouse shows higher levels of Aβ40 and Aβ42. Aβ deposits begin at 9 months of age [63]. Within the Tg2576 hippocampus and cortex, APPswe transgene expression is primarily neuronal [64].
As a negative control, wild-type (WT) mice from the same colony [65,66] were used. The Tg2576 (B6;SJL-Tg(APPswe)2576 Kha) mouse colony was developed in our laboratory from Tg2576 heterozygous males and wild-type females. The transgenic parents were donated by Dr. Diana Frechilla from the Neuroscience Division at the Centre for Applied Medical Research at the University of Navarra (Pamplona, Spain) [67].
Animals were housed in individual ventilated cages and kept at 22-24 °C on a 12-h light/dark cycle in 50-60% humidity. Animal protocols were approved by the Institutional Animal Care and Use Committee (IACUC) at the Complutense University of Madrid and were in full accordance with the European Directive 2010/63/EU on the protection of animals used for scientific purposes and Spanish legislation on Animal Welfare (Royal Decree 53/2013, 1 February 2013).
Genotyping Analyses of Mice
Transgenicity was determined within 30 days of birth by tail biopsy. Genotyping analyses of animals were carried out by PCR. Considering that Tg2576 (TgAPP) is a heterozygous line, the insertion gene (PrP, from prion protein) was used as a positive reaction control. Genomic DNA was extracted from mouse tails digested with proteinase K (0.1 µg/µL) in NID buffer. In all cases, negative controls (without DNA template) and positive controls of the APP gene were included. The PCR products obtained were separated by electrophoresis in 1.5% agarose gels in 0.5X TBE (Tris-Borate-EDTA, Merck KGaA, Darmstadt, Germany) buffer at 70 V (constant voltage) and then imaged by staining with GelRed (Millipore).
Animal Treatments
TgAPP mice and wild-type littermates, both aged 6-7 weeks with an initial body weight of 16.2 ± 0.8 g, were randomized into the following four groups (n = 8/group): (a) untreated TgAPP; (b) quercetin-treated TgAPP; (c) rutin-treated TgAPP; and (d) untreated wild-type. Since both male and female mice were studied, two parallel sets of these groups were established. At 45 weeks of age, the mice began treatment with quercetin or rutin for 4 weeks.
Quercetin (3,3′,4′,5,7-pentahydroxyflavone) and rutin hydrate (quercetin-3-O-rutinoside hydrate) were ≥95% pure and purchased from Sigma-Aldrich. Each of the flavonoids was incorporated into a standard diet (Harlan Ibérica, Barcelona, Spain) at a concentration of 200 ppm, corresponding to an intake of 30 mg flavonoid/kg body weight/day. The untreated mice received exclusively the unsupplemented standard diet. Diets and water were provided for ad libitum intake.
Brain Tissue Preparation for Biochemical and Histological Assays
At the end of treatment, mice were fasted overnight, euthanized by cervical dislocation, and the entire brain was quickly removed. The brain was rinsed in saline at 4 °C and the arachnoid membrane was carefully removed. Then, the hippocampus and cortex were isolated. Samples were immediately stored at −80 °C until further use.
The entire brains of some animals were used for obtaining histological sections; once the mice were euthanized by cervical dislocation, the brains were frozen by immersion in isopentane at −80 °C. Immediately afterwards, coronal sections of the brain (30 µm thick) were cut from the olfactory bulb to the cerebellum, 120 µm apart, in a cryostat (Leica CM1850, Nussloch, Germany). The whole procedure was performed at −20 °C. Histological sections were collected on slides and kept at −80 °C until analysis (see Supplementary Methods).
Glutathione
For glutathione assays, cerebral cortex and hippocampus samples were homogenized in redox-quenching buffer with 5% trichloroacetic acid (RQB-5% TCA) (previously bubbled with N2 for 15 min on ice) at a concentration of 25 mg/mL (w/v). Samples were resuspended by sonication for 10 s, then centrifuged at 12,000× g for 10 min at 4 °C, and the supernatants were collected.
Then, in the supernatant obtained, GSH and GSSG levels were determined spectrofluorometrically using the o-phthalaldehyde method described by Senft et al. [68]. GSH and GSSG values were corrected for the spontaneous reaction in the absence of biological sample. In both cases, supernatants were incubated for 30 min at room temperature, and afterwards fluorescence was measured using a FLUOSTAR microplate reader (BMG LABTECH, Ortenberg, Baden-Württemberg, Germany), with the excitation filter set at 360 nm (bandwidth 5 nm) and the emission filter set at 460 nm (bandwidth 5 nm). The concentration of GSH and GSSG in each sample was interpolated from known GSH standards. Concentrations of both GSH and GSSG were expressed as nmol GSH equivalents/mg protein, which allowed calculation of the glutathione redox ratio GSH/GSSG.
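To make the interpolation step concrete, the short sketch below fits a linear standard curve and converts background-corrected fluorescence readings into nmol/mg protein; all numbers (standards, readings, protein amount) are hypothetical placeholders rather than data from this study.

```python
import numpy as np

# Hypothetical GSH standard curve: concentration (nmol/well) vs. fluorescence (a.u.)
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_fluo = np.array([12.0, 160.0, 305.0, 610.0, 1190.0])

# Fit a straight line F = m*C + b and invert it for unknown samples
m, b = np.polyfit(std_conc, std_fluo, 1)

def nmol_from_fluorescence(f, blank=0.0):
    """Interpolate nmol GSH (equivalents) from a background-corrected reading."""
    return (f - blank - b) / m

protein_mg = 0.35                                    # hypothetical protein in the aliquot (BCA)
gsh = nmol_from_fluorescence(520.0) / protein_mg     # nmol/mg protein
gssg = nmol_from_fluorescence(45.0) / protein_mg     # nmol/mg protein (GSH equivalents)
print(f"GSH = {gsh:.2f}, GSSG = {gssg:.2f} nmol/mg, GSH/GSSG = {gsh/gssg:.1f}")
```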
The remaining pellets were vortexed until completely dissolved in 240 µL of 0.1 M NaOH to measure protein concentration by the bicinchoninic acid (BCA) method, using bovine serum albumin as a standard.
Thiobarbituric Acid-Reactive Substances (TBARs)
The content of TBARs was used as an index of lipoperoxidation. In brain tissue, 50 mM phosphate buffer (pH 7.4) was added to a concentration of 25 mg/mL (w/v) and the suspension was homogenized by sonication for 10 s. To 30 µL of the homogenate, 250 µL of 1% phosphoric acid and 75 µL of 0.6% thiobarbituric acid (TBA) were added. The reagent mixture was incubated at 100 °C in a water bath for 45 min, after which it was cooled in an ice bath and then centrifuged at 3000× g for 10 min at 4 °C. A volume of 150 µL of supernatant was taken from each sample. Fluorescence was measured using a FLUOSTAR microplate reader (BMG LABTECH, Ortenberg, Baden-Württemberg, Germany) with the excitation filter set at 485 nm (bandwidth 5 nm) and the emission filter set at 530 nm (bandwidth 5 nm). A calibration curve was prepared using malondialdehyde (MDA) as a standard. The results were expressed in pmol MDA/mg protein.
Enzymatic Activity of the Main Antioxidant Enzymes
For the determination of enzyme activity in brain tissue, a lysis buffer containing 50 mM phosphate buffer (pH 7.4) and antiproteases (1 mM EDTA, 1 mM PMSF, 1 µg/mL pepstatin and 1 µg/mL leupeptin) was added to a concentration of 50 mg/mL (w/v). Then, the suspension was sonicated for 30 s in an ice bath, and the homogenate was centrifuged at 10,000× g for 15 min at 4 °C. Supernatants were collected for the determination of the enzymatic activity of the antioxidant enzymes.
Superoxide dismutase (SOD) activity was measured by following the inhibition of pyrogallol autoxidation at 420 nm [69]. One unit of enzyme was defined as the amount of enzyme required to inhibit the rate of pyrogallol autoxidation by 50%. SOD enzymatic activity was expressed as international units (IU)/mg protein. Catalase (CAT) activity was measured in Triton X-100 (1%, v/v)-treated supernatants by following hydrogen peroxide (H2O2) disappearance at 240 nm [70], and enzyme activity was reported as substrate (µmol H2O2) transformed/min·mg protein. Total glutathione peroxidase (GPx) was determined following NADPH oxidation at 340 nm in the presence of excess GR, GSH, and cumene hydroperoxide [71]. GPx activity was expressed as substrate (nmol NADPH) transformed/min·mg protein. Glutathione reductase (GR) activity was analyzed following NADPH oxidation at 340 nm in the presence of GSSG [72] and expressed as substrate (nmol NADPH) transformed/min·mg protein. GR and GPx activities were corrected for the spontaneous reaction measured in the absence of biological sample (i.e., in the absence of enzyme).
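The 50%-inhibition definition of a SOD unit translates into a simple calculation from the pyrogallol autoxidation rates measured with and without sample. The sketch below uses invented rates and the conventional linear reading of that definition (percent inhibition divided by 50% equals units); it is an illustration, not the authors' computation.

```python
# Pyrogallol autoxidation rates (delta A420/min), hypothetical readings
rate_blank = 0.0200    # autoxidation without sample
rate_sample = 0.0124   # autoxidation with tissue supernatant

inhibition_pct = 100.0 * (rate_blank - rate_sample) / rate_blank
units = inhibition_pct / 50.0            # 1 U = 50% inhibition, by definition

protein_mg = 0.18                        # hypothetical protein assayed in the same volume
print(f"Inhibition: {inhibition_pct:.1f}% -> {units:.2f} U, "
      f"{units / protein_mg:.2f} IU/mg protein")
```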
BACE1 Activity Test
The BACE1 assay protocol uses a secretase-specific peptide substrate conjugated to two reporter molecules, EDANS and DABCYL; cleavage of the substrate releases a fluorescent signal [73,74]. BACE1 activity was measured in both cortex and hippocampus lysates. The reaction was carried out at 37 °C for 1 h using 10 µM substrate in 50 mM sodium acetate buffer (pH 4.5). Fluorescence intensity measurements were performed using a FLUOSTAR microplate reader (BMG LABTECH, Ortenberg, Baden-Württemberg, Germany) with the excitation filter set at 360 nm (bandwidth 5 nm) and the emission filter set at 530 nm (bandwidth 5 nm). The level of secretase enzymatic activity is proportional to the fluorometric reaction, and the data are expressed as the x-fold increase in fluorescence over that of background controls (reactions in the absence of substrate or tissue). BACE1 activity was normalized to protein concentration, and activity in quercetin- or rutin-treated mice was expressed as a percentage of that of TgAPP control mice.
RNA Isolation
We analyzed the different brain areas, namely the cortex and hippocampus, stored at −80 °C. To a known amount of brain tissue, Triomol® lysis buffer was added at a ratio of 1:10 (w/v). Samples were homogenized for 30 s using a cordless motor (Pellet pestle, Sigma-Aldrich) and incubated for 5 min at 25 °C to allow complete dissociation of nucleoprotein complexes. Then, 0.2 mL of chloroform was added for each mL of Triomol® lysis buffer used. The tubes were shaken vigorously for 15 s and incubated at 25 °C for 3 min. Then, they were centrifuged at 11,000× g for 15 min at 4 °C. After centrifugation, three phases were obtained, with the RNA in the upper phase.
To isolate the RNA, the upper phase was transferred to another tube and precipitated by adding 0.5 mL of isopropanol. After thorough mixing of isopropanol and the aqueous solution by inversion, the mixture was incubated at room temperature for 10 min to promote precipitation, and centrifuged at 12,000× g for 10 min at 4 °C. The supernatants were removed, and the pellets were washed with 75% ethanol and centrifuged at 7500× g for 5 min at 4 °C. The pellets were dried at room temperature and dissolved in 50 µL of DEPC-treated water. To remove traces of DNA, 2.5 µL of DNase (RNase-free) was added and incubated at 37 °C for 30 min. Finally, samples were incubated at 64 °C for 5 min to inactivate the DNase.
Subsequently, the concentrations of RNA were measured in a UV-VIS spectrophotometer (BMG LABTECH, Ortenberg, Baden-Württemberg, Germany) at 260 nm and the purity was assessed considering the absorbance ratio at 260 and 280 nm (A260/A280).
RNA integrity and purity were verified by electrophoresis in a 1% agarose gel stained with GelRed and visualized under UV light; intact RNA shows two upper bands corresponding to ribosomal RNA (28S and 18S) and two lower bands corresponding to transfer RNA (tRNA) and 5S ribosomal RNA.
Complementary DNA (cDNA) Synthesis
cDNA is much more stable than RNA and therefore allows for more convenient and safer sample handling. The cDNA was synthesized from mRNA by retrotranscription using the First Strand cDNA Synthesis Kit for RT-qPCR (Fermentas Life Sciences).
To carry out the reverse transcription for cDNA synthesis, 11 µL of DEPC-treated water and 1 µL of 10X random primers were added to 2 µg of RNA. The mixture was then incubated at 65 °C for 10 min to denature the RNA. After this time, the tubes were immediately brought to 4 °C for 5 min to avoid renaturation of the RNA. The reagent mix for cDNA synthesis is shown in Table S1 (Supplementary Data).
Eight µL of the reaction mixture was added to each sample. The entire volume was brought to the bottom of the tubes and incubated at 42 °C for 60 min. Finally, the reaction was stopped by inactivating the reverse transcriptase at 70 °C for 10 min.
Real-Time PCR
The main feature of real-time PCR is that the analysis of the products takes place during the amplification process by determining the fluorescence. In this way, the amplification and detection processes occur simultaneously in the same tube or vial without the need for any further action. For real-time PCR, thermal cyclers are used, which can amplify and detect fluorescence simultaneously. We utilized the LightCycler real-time thermal cycler (Roche Diagnostics, Mannheim, Germany). Table S2 (Supplementary data) lists the reagents required for real-time PCR, using sequence-specific primers and DNA-binding dye (SYBR Green I, Roche Molecular Systems, Inc., Rotkreuz, Switzerland) as a detection system.
For the design of the primers for the different quantified markers, the Primer3Plus bioinformatics program (http://www.bioinformatics.nl/cgi-bin/primer3plus/primer3plus.cgi, accessed on 21 January 2023) was used, for which we took the cDNA sequences of the genes of interest from the Medline open-access database (http://www.ncbi.nlm.nih.gov/entrez, accessed on 21 January 2023). The primers were supplied by Merck (Sigma-Aldrich). The hybridization temperatures and the sequences of the different primers used are shown in Table S3 (Supplementary Data).
The reaction conditions for the amplification of the genes of interest are shown in Table S4 (Supplementary data).
Finally, the samples were subjected to a melting program: 95 °C for 15 s, 65 °C for 30 s, and up to 98 °C at a rate of 0.1 °C/s with continuous fluorescence recording.
For the quantification of cDNA levels, the cycle threshold (Ct) comparison method [75] was used, with GAPDH as the housekeeping gene. Amplification of the housekeeper was carried out in parallel with that of the analyzed gene. Ct values were calculated using the LightCycler 4.0 software (Roche Diagnostics, Mannheim, Germany), which distinguishes fluorescence due to sample amplification from background. Melting curves were also recorded; determination of the melting temperature of the amplified fragment allowed characterization of the amplified product. The size of the bands was checked on a 1.5% agarose gel.
Changes in the expression of the gene under study upon quercetin or rutin treatment were expressed relative to untreated TgAPP controls, after normalizing expression to GAPDH levels. The fold change (2^−ΔΔCt) represents the number of times the expression of the gene of interest is modified under a particular treatment with respect to control mice.
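As a worked example of the 2^−ΔΔCt method with GAPDH as housekeeper, the following sketch uses invented Ct values:

```python
# Hypothetical Ct values (means of technical replicates)
ct_target_treated, ct_gapdh_treated = 24.1, 18.0   # quercetin-treated TgAPP
ct_target_control, ct_gapdh_control = 22.6, 17.9   # untreated TgAPP control

delta_ct_treated = ct_target_treated - ct_gapdh_treated   # normalize to housekeeper
delta_ct_control = ct_target_control - ct_gapdh_control
delta_delta_ct = delta_ct_treated - delta_ct_control

fold_change = 2.0 ** (-delta_delta_ct)
print(f"ddCt = {delta_delta_ct:.2f}, fold change = {fold_change:.2f}")
# A fold change < 1 indicates down-regulation relative to untreated TgAPP
```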
Statistical Analyses
All tests were performed at least in duplicate and in three independent experiments. The results obtained are expressed as the mean ± standard error. One-way analysis of variance (ANOVA) was performed once the data were tested and shown to fit a normal distribution. The Newman-Keuls multiple comparison post-hoc test was run to examine mean differences between groups. Values of p < 0.05 were considered significant. SigmaPlot 11.0 software was used for statistical analyses.
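A minimal sketch of such an analysis pipeline is given below; note that SciPy does not provide a Newman-Keuls test, so Tukey's HSD is substituted here as the post-hoc comparison, and the group data are random placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {name: rng.normal(loc=mu, scale=1.0, size=8)      # n = 8/group
          for name, mu in [("WT", 10.0), ("TgAPP", 13.0),
                           ("TgAPP+quercetin", 11.0), ("TgAPP+rutin", 10.5)]}

# Normality check per group (Shapiro-Wilk), then one-way ANOVA
for name, values in groups.items():
    print(name, "Shapiro p =", round(stats.shapiro(values).pvalue, 3))

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post-hoc pairwise comparisons (Tukey HSD used in place of Newman-Keuls)
print(stats.tukey_hsd(*groups.values()))
```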
Conclusions
Dietary habits and supplementation can affect the cellular redox status. On this basis, we aimed to ameliorate the cellular redox homeostasis in an AD mouse model by a flavonoid diet containing quercetin or rutin in order to alleviate amyloid pathology, considering the interplay between cellular redox status and proteasome-dependent amyloid features in asymptomatic AD. Our datasets are relevant, since the flavonoid effects displayed in the TgAPP mouse model are consistent with those reported earlier in our in vitro and ex vivo models.
In conclusion, our findings show that initiating a dietary treatment at the asymptomatic stage or at the onset of AD-like symptoms might reinstate cellular redox status and physiological APP processing via concurrent normalization of APP expression and BACE1 activity.
Although it is difficult to extrapolate our findings to the human condition, they may have broad implications for the human response to future therapeutics. Of the two flavonoids, rutin, with overall more prominent in vivo effects, seems the most suitable to be included in a day-to-day diet as an adjuvant therapy in AD, based on its enhancement of intracellular redox homeostasis in the brain.
Author Contributions: P.B.-B. and S.M.-A. conceived the idea and the experimental design, helped in the experiments, interpreted the obtained results, and wrote the manuscript. K.L.J.-A. conducted the experiments, carried out the data analyses and interpreted the obtained results. J.B. helped in data analyses and revision of the manuscript. All data were generated in-house, and no paper mill was used. All authors agree to be accountable for all aspects of work ensuring integrity and accuracy. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Rectification of laser-induced electronic transport through molecules
We study the influence of laser radiation on the electron transport through a molecular wire weakly coupled to two leads. In the absence of a generalized parity symmetry, the molecule rectifies the laser induced current, resulting in directed electron transport without any applied voltage. We consider two generic ways of dynamical symmetry breaking: mixing of different harmonics of the laser field and molecules consisting of asymmetric groups. For the evaluation of the nonlinear current, a numerically efficient formalism is derived which is based upon the Floquet solutions of the driven molecule. This permits a treatment in the non-adiabatic regime and beyond linear response.
I. INTRODUCTION
During the last years, we have experienced a wealth of experimental activity in the field of molecular electronics. 1,2 Its technological prospects for nanocircuits 3 have created broad interest in the conductance of molecules attached to metal surfaces or tips. In recent experiments, 4,5,6,7 weak tunneling currents through only a few or even single molecules, coupled by chemisorbed thiol groups to the gold surface of leads, have been achieved. The experimental development is accompanied by an increasing theoretical interest in the transport properties of such systems. 8,9 An intriguing challenge presents the possibility to control the tunneling current through the molecule. Typical energy scales in molecules are in the optical and the infrared regime, where today's laser technology provides a wealth of coherent light sources. Hence, lasers represent an inherent possibility to control atoms or molecules and to direct currents through them.
A widely studied phenomenon in extended, strongly driven systems is the so-termed ratchet effect, 10,11,12,13,14 originally discovered and investigated for overdamped classical Brownian motion in periodic nonequilibrium systems in the absence of reflection symmetry. In seeming contrast to the second law of thermodynamics, one then observes directed transport although all acting forces possess no net bias. This effect has been established as well within the regime of dissipative, incoherent quantum Brownian motion. 15 A related effect is found in the overdamped limit of dissipative tunneling in tight-binding lattices. Here the spatial symmetry is typically preserved and the nonvanishing transport is brought about by harmonic mixing of a driving field that includes higher harmonics. 16,17,18 For overdamped Brownian motion, both phenomena can be understood in terms of breaking a generalized reflection symmetry. 19 Recent theoretical descriptions of molecular conductivity are based on a scattering approach. 20,21 Alternatively, one can assume that the underlying transport mechanism is an electron transfer reaction and that the conductivity can be derived from the corresponding reaction rate. 8 This analogy leads to a connection between electron transfer rates in a donor-acceptor system and conduction in the same system when operating as a molecular wire between two metal leads. 22 Within the high-temperature limit, the electron transport on the wire can be described by inelastic hopping events. 8,23,24,25 For a more quantitative ab initio analysis, the molecular orbitals may be taken from electronic structure calculations. 26 Isolated atoms and molecules in strong oscillating fields have been widely studied within a Floquet formalism 27,28,29,30,31,32 and many corresponding theoretical techniques have been developed in that area. This suggests the procedure followed in Ref. 33: making use of these Floquet tools, a formalism for the transport through time-dependent quantum systems has been derived that combines Floquet theory for a driven molecule with the many-particle description of transport through a system that is coupled to ideal leads. This approach is devised much in the spirit of the Floquet-Markov theory 34,35 for driven dissipative quantum systems. It assumes that the molecular orbitals that are relevant for the transport are weakly coupled to the contacts, so that the transport characteristics are dominated by the molecule itself. Yet, this treatment goes beyond the usual rotating-wave approximation as frequently employed, such as e.g. in Refs. 35,36. A time-dependent perturbative approach to the problem of driven molecular wires has recently been described by Tikhonov et al. 37,38 However, their one-electron treatment of this essentially inelastic transmission process cannot handle consistently the electronic populations on the leads. Moreover, while their general formulation is not bound to their independent channel approximation, their actual application of this approximation is limited to the small light-molecule interaction regime.
With this work we investigate the possibilities for molecular quantum wires to act as coherent quantum ratchets, i.e. as quantum rectifiers for the laser-induced electrical current. In doing so, we provide a full account of the derivation published in letter format in Ref. 33. In Sec. II we present a more detailed derivation of the Floquet approach to the transport through a periodically driven wire. This formalism is employed in Sec. III to investigate the rectification properties of driven molecules. Two generic cases are discussed, namely mixing of different harmonics of the laser field in symmetric molecules and harmonically driven asymmetric molecules. We focus thereby on how the symmetries of the model system manifest themselves in the expressions for the time-averaged current. The general symmetry considerations of a quantum system under the influence of a laser field are deferred to Appendix A.
II. FLOQUET APPROACH TO THE ELECTRON TRANSPORT
The entire system of the driven wire, the leads, and the molecule-lead coupling as sketched in Fig. 1 is described by the Hamiltonian

H(t) = H_wire(t) + H_leads + H_wire-leads.    (1)

The wire is modeled by N atomic orbitals |n⟩, n = 1, . . . , N, which are in a tight-binding description coupled by hopping matrix elements. Then, the corresponding Hamiltonian for the electrons on the wire reads in second quantized form

H_wire(t) = Σ_{n,n′} H_{nn′}(t) c†_n c_{n′},    (2)

where the fermionic operators c_n, c†_n annihilate, respectively create, an electron in the atomic orbital |n⟩ and obey the anti-commutation relation [c_n, c†_{n′}]_+ = δ_{n,n′}. The influence of the laser field is given by a periodic time-dependence of the on-site energies, yielding a single-particle Hamiltonian of the structure H_{nn′}(t) = H_{nn′}(t + T), where T = 2π/Ω is determined by the frequency Ω of the laser field.
The orbitals at the left and the right end of the molecule, which we shall term donor and acceptor, |1⟩ and |N⟩, respectively, are coupled to ideal leads (cf. Fig. 1) by the tunneling Hamiltonian

H_wire-leads = Σ_q (V_qL c†_qL c_1 + V_qR c†_qR c_N) + h.c.    (3)

The operator c_qL (c_qR) annihilates an electron on the left (right) lead in state |Lq⟩ (|Rq⟩), orthogonal to all wire states. Later, we shall treat the tunneling Hamiltonian as a perturbation, while taking into account exactly the dynamics of the leads and the wire, including the driving. The leads are modeled as non-interacting electrons with the Hamiltonian

H_leads = Σ_q (ε_qL c†_qL c_qL + ε_qR c†_qR c_qR).    (4)

A typical metal screens electric fields that have a frequency below the so-called plasma frequency. Therefore, any electromagnetic radiation from the optical or the infrared spectral range is almost perfectly reflected at the surface and will not change the bulk properties of the gold contacts. This justifies the assumption that the leads are in a state close to equilibrium and, thus, can be described by a grand-canonical ensemble of electrons, i.e. by the density matrix

ρ_leads,eq ∝ exp[−(H_leads − µ_L N_L − µ_R N_R)/k_B T],    (5)

where µ_L/R are the electro-chemical potentials and N_L/R = Σ_q c†_qL/R c_qL/R the electron numbers in the left/right lead. As a consequence, the only non-trivial expectation values of lead operators read

⟨c†_qL c_q′L⟩ = f(ε_qL − µ_L) δ_qq′,    (6)

where ε_qL is the single-particle energy of the state |qL⟩, and correspondingly for the right lead. Here, f(x) = (1 + e^{x/k_B T})^{−1} denotes the Fermi function.
A. Time-dependent electrical current
The net (incoming minus outgoing) current through the left contact is given by the negative time derivative of the electron number in the left lead, multiplied by the electron charge −e, i.e.

I_L(t) = e (d/dt)⟨N_L⟩_t = (ie/ħ) ⟨[H(t), N_L]⟩_t.    (7)
Here, the angular brackets denote expectation values at time t, i.e. ⟨O⟩_t = Tr[O ρ(t)]. The dynamics of the density matrix is governed by the Liouville-von Neumann equation iħ ρ̇(t) = [H(t), ρ(t)], together with the factorizing initial condition ρ(t₀) = ρ_wire(t₀) ⊗ ρ_leads,eq. For the Hamiltonian (1), the commutator in Eq. (7) is readily evaluated; it expresses the current through the lead-wire coherences ⟨c†_qL c_1⟩_t (Eq. (8)). To proceed, it is convenient to switch to the interaction picture with respect to the uncoupled dynamics, where the Liouville-von Neumann equation reads

iħ (d/dt) ρ̃(t, t₀) = [H̃_wire-leads(t, t₀), ρ̃(t, t₀)].    (9)

The tilde denotes the corresponding interaction-picture operators, Õ(t, t₀) = U₀†(t, t₀) O U₀(t, t₀), where the propagator of the wire and the lead in the absence of the lead-wire coupling is given by the time-ordered product

U₀(t, t₀) = T exp[−(i/ħ) ∫_{t₀}^{t} dt′ (H_wire(t′) + H_leads)].    (10)

Equation (9) is equivalent to an integral equation for ρ̃ (Eq. (11)). Inserting this relation into Eq. (8), we obtain an expression for the current that depends on the density of states in the leads times their coupling strength to the connected sites. At this stage it is convenient to introduce the spectral density of the lead-wire coupling,

Γ_{L/R}(ε) = (2π/ħ) Σ_q |V_{qL/R}|² δ(ε − ε_{qL/R}),    (12)

which fully describes the leads' influence. If the lead states are dense, Γ_{L/R}(ε) becomes a continuous function.
Since we restrict ourselves to the regime of a weak wire-lead coupling, we can furthermore assume that the expectation values of lead operators are at all times given by their equilibrium values (6). Then we find after some algebra the stationary (i.e. for t₀ → −∞), time-dependent net electrical current through the left contact, Eq. (13); a corresponding relation holds true for the current through the contact on the right-hand side. Note that the anti-commutator [c†_1, c̃_1(t, t − τ)]_+ is in fact a c-number (see below). Like the expectation value ⟨c†_1 c̃_1(t, t − τ)⟩_{t−τ}, it depends on the dynamics of the isolated wire and is influenced by the external driving.
It is frequently assumed that the attached leads can be described by a one-dimensional tight-binding lattice with hopping matrix elements ∆′. Then, the spectral densities Γ_{L/R}(ε) of the lead-wire couplings are given by the Anderson-Newns model, 39 i.e. they assume an elliptical shape with a band width 2∆′. However, because we are mainly interested in the behavior of the molecule and not in the details of the lead-wire coupling, we assume that the conduction band width of the leads is much larger than all remaining relevant energy scales. Consequently, we approximate in the so-called wide-band limit the functions Γ_{L/R}(ε) by the constant values Γ_{L/R}. The first contribution of the ε-integral in Eq. (13) is then readily evaluated to yield an expression proportional to δ(τ). Finally, this term becomes local in time and reads eΓ_L ⟨c†_1 c_1⟩_t.
B. Floquet decomposition
Let us next focus on the single-particle dynamics of the driven molecule decoupled from the leads. Since its Hamiltonian is periodic in time, H_{nn′}(t) = H_{nn′}(t + T), we can solve the corresponding time-dependent Schrödinger equation within a Floquet approach. This means that we make use of the fact that there exists a complete set of solutions of the form 27,28,29,31,32

|Ψ_α(t)⟩ = e^{−iε_α t/ħ} |Φ_α(t)⟩,    (14)

with the quasienergies ε_α. Since the so-called Floquet modes |Φ_α(t)⟩ obey the time-periodicity of the driving field, they can be decomposed into the Fourier series

|Φ_α(t)⟩ = Σ_k e^{−ikΩt} |Φ_{α,k}⟩.    (15)

This suggests that the quasienergies ε_α come in classes, of which all members represent the same solution of the Schrödinger equation. Therefore, the quasienergy spectrum can be reduced to a single "Brillouin zone" −ħΩ/2 ≤ ε < ħΩ/2. In turn, all physical quantities that are computed within a Floquet formalism are independent of the choice of a specific class member. Thus, a consistent description must obey the so-called class invariance, i.e. it must be invariant under the substitution of one or several Floquet states by equivalent ones,

ε_α → ε_α + k_α ħΩ,    (16)
|Φ_α(t)⟩ → e^{ik_α Ωt} |Φ_α(t)⟩,    (17)

where k_1, . . . , k_N are integers. In the Fourier decomposition (15), the prefactor exp(ik_α Ωt) corresponds to a shift of the sideband index, so that the class invariance can be expressed equivalently as

|Φ_{α,k}⟩ → |Φ_{α,k+k_α}⟩.    (18)

Floquet states and quasienergies can be obtained from the quasienergy equation 27,28,29,30,31,32

[H(t) − iħ (d/dt)] |Φ_α(t)⟩ = ε_α |Φ_α(t)⟩.    (19)

A wealth of methods for the solution of this eigenvalue problem can be found in the literature. 31,32 One such method is given by the direct numerical diagonalization of the operator on the left-hand side of Eq. (19). To account for the periodic time-dependence of the |Φ_α(t)⟩, one has to extend the original Hilbert space by a T-periodic time coordinate. For a harmonic driving, the eigenvalue problem (19) is band-diagonal, and selected eigenvalues and eigenvectors can be computed by a matrix-continued fraction scheme. 31,40 In cases where many Fourier coefficients (in the present context frequently called "sidebands") must be taken into account in the decomposition (15), direct diagonalization is often not very efficient and one has to apply more elaborate schemes. For example, in the case of a large driving amplitude, one can treat the static part of the Hamiltonian as a perturbation. 28,41,42 The Floquet states of the oscillating part of the Hamiltonian then form an adapted basis set for a subsequently more efficient numerical diagonalization.
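As a concrete illustration of the direct-diagonalization route, the sketch below sets up the Floquet operator H(t) − iħ d/dt of a harmonically driven two-site wire in the extended {orbital} × {sideband} space and folds the resulting quasienergies into the first Brillouin zone. Units with ħ = 1 and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

Delta, A, Omega, K = 1.0, 1.0, 3.0, 20    # hopping, drive amplitude, frequency, sideband cutoff
N = 2                                     # number of wire orbitals

H0 = -Delta * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
x = (N + 1 - 2 * np.arange(1, N + 1)) / 2           # scaled site positions x_n
Hp = 1j * A * np.diag(x) / 2                        # Fourier block multiplying e^{-i*Omega*t}
Hm = Hp.conj().T                                    # Fourier block multiplying e^{+i*Omega*t}

dim = N * (2 * K + 1)
F = np.zeros((dim, dim), dtype=complex)
for k in range(-K, K + 1):
    i = (k + K) * N
    F[i:i+N, i:i+N] = H0 - k * Omega * np.eye(N)    # diagonal block: H0 + k-photon shift
    if k < K:                                       # couple neighboring sidebands
        j = i + N
        F[i:i+N, j:j+N] = Hm
        F[j:j+N, i:i+N] = Hp

quasi = np.linalg.eigvalsh(F)
# Fold all class representatives into the first "Brillouin zone" [-Omega/2, Omega/2)
folded = np.unique(np.round((quasi + Omega/2) % Omega - Omega/2, 8))
print("quasienergies in first Brillouin zone:", folded[:4])
```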
A completely different strategy to obtain the Floquet states is to propagate the Schrödinger equation for a complete set of initial conditions over one driving period to yield the one-period propagator. Its eigenvectors represent the Floquet states at time t = 0, i.e., |Φ_α(0)⟩, and its eigenvalues the phase factors e^{−iε_α T/ħ}. Fourier transformation of their time evolution results in the desired sidebands. Yet another, very efficient propagation scheme is the so-called (t, t′)-formalism. 43 As the equivalent of the one-particle Floquet states |Φ_α(t)⟩, we define a Floquet picture for the fermionic creation and annihilation operators c†_n, c_n by the time-dependent transformation

c_α(t) = Σ_n ⟨Φ_α(t)|n⟩ c_n.    (20)

The inverse transformation,

c_n = Σ_α ⟨n|Φ_α(t)⟩ c_α(t),    (21)

follows from the mutual orthogonality and the completeness of the Floquet states at equal times. 31,32 Note that the right-hand side of Eq. (21) becomes t-independent after the summation. In the interaction picture, the operator c_α(t) obeys

c̃_α(t, t′) = U₀†(t, t′) c_α(t) U₀(t, t′) = e^{−iε_α(t−t′)/ħ} c_α(t′).    (22)

This is easily verified by differentiating the definition in the first line with respect to t and using that |Φ_α(t)⟩ is a solution of the eigenvalue equation (19). The fact that the initial condition c̃_α(t′, t′) = c_α(t′) is fulfilled completes the proof. Using Eqs. (21) and (22), we are able to express the anti-commutator of wire operators at different times by Floquet states and quasienergies (Eq. (23)). This relation, together with the spectral decomposition (15) of the Floquet states, allows one to carry out the time and energy integrals in the expression (13) for the net current entering the wire from the left lead. Thus, we obtain the current I_L(t) (Eq. (24)) with the Fourier components given in Eq. (25). Here, we have introduced the expectation values

R_αβ(t) = ⟨c†_α(t) c_β(t)⟩_t = Σ_k e^{ikΩt} R_αβ,k.    (26), (27)

The Fourier decomposition in the last line is possible because all R_αβ(t) are expectation values of a linear, dissipative, periodically driven system and therefore share in the long-time limit the time-periodicity of the driving field. In the subspace of a single electron, R_αβ reduces to the density matrix in the basis of the Floquet states, which has been used to describe dissipative driven quantum systems in Refs. 32,34,35,44,45,46.
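The one-period-propagator strategy is equally compact to sketch: integrate the Schrödinger equation over one period for a complete set of initial states and read off the quasienergies from the phases of the monodromy-matrix eigenvalues. Again, ħ = 1 and the parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

Delta, A, Omega, N = 1.0, 1.0, 3.0, 2
T = 2 * np.pi / Omega
x = (N + 1 - 2 * np.arange(1, N + 1)) / 2
H0 = -Delta * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

def rhs(t, psi_flat):
    """Schroedinger equation for all basis states propagated in parallel."""
    H = H0 + A * np.sin(Omega * t) * np.diag(x)
    psi = psi_flat.reshape(N, N)          # columns are the evolving basis states
    return (-1j * H @ psi).ravel()

# Propagate the identity over one period -> one-period propagator U(T, 0)
sol = solve_ivp(rhs, (0.0, T), np.eye(N, dtype=complex).ravel(),
                rtol=1e-10, atol=1e-12)
U = sol.y[:, -1].reshape(N, N)

# U(T,0) |Phi_a(0)> = exp(-i*eps_a*T) |Phi_a(0)>: quasienergies from the phases
eigvals, floquet_states_t0 = np.linalg.eig(U)
quasienergies = -np.angle(eigvals) / T    # defined modulo Omega
print("quasienergies:", np.sort(quasienergies))
```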
C. Master equation
The last step towards the stationary current is to find the Fourier coefficients R_αβ,k at asymptotic times. To this end, we derive an equation of motion for the reduced density operator ρ_wire(t) = Tr_leads ρ(t) by reinserting Eq. (11) into the Liouville-von Neumann equation (9). We use that to zeroth order in the molecule-lead coupling the interaction-picture density operator does not change with time, ρ̃(t − τ, t₀) ≈ ρ̃(t, t₀). A transformation back to the Schrödinger picture results, after tracing out the leads' degrees of freedom, in the master equation (28). Since we only consider asymptotic times t₀ → −∞, we have set the upper limit in the integral to infinity. From Eq. (28) follows directly an equation of motion for the R_αβ(t). Since all the coefficients of this equation, as well as its asymptotic solution, are T-periodic, we can split it into its Fourier components. Finally, we obtain for the R_αβ,k the inhomogeneous set of equations (29). For a consistent Floquet description, the current formula together with the master equation must obey class invariance. Indeed, the simultaneous transformation with (18) of both the master equation (29) and the current formula (25) amounts to a mere shift of summation indices and, thus, leaves the current as a physical quantity unchanged.
For the typical parameter values used below, a large number of sidebands contributes significantly to the Fourier decomposition of the Floquet modes |Φ_α(t)⟩. Numerical convergence for the solution of the master equation (29), however, is already obtained with just a few sidebands in the decomposition of R_αβ(t). This keeps the numerical effort relatively small and justifies a posteriori the use of the Floquet representation (21). Yet we are able to treat the problem beyond a rotating-wave approximation.
D. Average current
Equation (24) implies that the current I_L(t) obeys the time-periodicity of the driving field. Since we consider here excitations by a laser field, the corresponding frequency lies in the optical or infrared spectral range. In an experiment one will thus only be able to measure the time-average of the current. For the net current entering through the left contact it is given by

Ī_L = (1/T) ∫₀^T dt I_L(t),    (30)

and mutatis mutandis we obtain the time-averaged net current Ī_R that enters through the right contact (31). Total charge conservation of the original wire-lead Hamiltonian (1) of course requires that the charge on the wire can only change by current flow, amounting to the continuity equation Q̇_wire(t) = I_L(t) + I_R(t). Since asymptotically the charge on the wire obeys at most the periodic time-dependence of the driving field, the time-average of Q̇_wire(t) must vanish in the long-time limit. From the continuity equation one then finds that Ī_L + Ī_R = 0, and we can introduce the time-averaged current

Ī ≡ Ī_L = −Ī_R.    (32)

For consistency, the last equation must also follow from our expressions for the average current (30) and (31). In fact, this can be shown by identifying Ī_L + Ī_R as the sum over the right-hand sides of the master equation (29) for α = β and k = 0, which vanishes as expected.
E. Rotating-wave approximation
Although we can now in principle compute time-dependent currents beyond a rotating-wave approximation (RWA), it is instructive to see under what conditions one may employ this approximation and how it follows from the master equation (29). We note that from a computational viewpoint there is no need to employ a RWA, since within the present approach the numerically costly part is the computation of the Floquet states rather than the solution of the master equation. Nevertheless, our motivation is that a RWA allows for an analytical solution of the master equation to lowest order in the lead-wire coupling Γ. We will use this solution below to discuss the influence of symmetries on the Γ-dependence of the average current.
The master equation (29) can be solved approximately by assuming that the coherent oscillations of all R_αβ(t) are much faster than their decay. Then it is useful to factorize R_αβ(t) into a rapidly oscillating part that takes the coherent dynamics into account and a slowly decaying prefactor. For the latter, one can derive a new master equation with oscillating coefficients. Under the assumption that the coherent and the dissipative timescales are well separated, it is possible to replace the time-dependent coefficients by their time-average. The remaining master equation is generally of a simpler form than the original one. Because we work here already with a spectral decomposition of the master equation, we give the equivalent line of argumentation for the Fourier coefficients R_αβ,k.
It is clear from the master equation (29) that if

|ε_α − ε_β + kħΩ| ≫ ħΓ_{L/R},    (34)

then the corresponding R_αβ,k emerge to be small and, thus, may be neglected. Under the assumption that the wire-lead couplings are weak and that the Floquet spectrum has no degeneracies, the RWA condition (34) is well satisfied except for

ε_α − ε_β + kħΩ = 0,  i.e.  α = β and k = 0,    (35)

when the prefactor on the left-hand side of Eq. (34) vanishes exactly. This motivates the ansatz

R_αβ,k = P_α δ_{α,β} δ_{k,0},    (36)

which means physically that the stationary state consists of an incoherent population of the Floquet modes. The occupation probabilities P_α are found by inserting the ansatz (36) into the master equation (29) and read

P_α = Σ_k [Γ_L |⟨1|Φ_{α,k}⟩|² f(ε_α + kħΩ − µ_L) + Γ_R |⟨N|Φ_{α,k}⟩|² f(ε_α + kħΩ − µ_R)] / Σ_k [Γ_L |⟨1|Φ_{α,k}⟩|² + Γ_R |⟨N|Φ_{α,k}⟩|²].    (37)

Thus, the populations are determined by an average over the Fermi functions, where the weights are given by the effective coupling strengths of the k-th Floquet sideband |Φ_{α,k}⟩ to the corresponding lead. The average current (32) is then readily evaluated within the RWA.
III. RECTIFICATION OF THE DRIVING-INDUCED CURRENT
In the absence of an applied voltage, i.e. µ_L = µ_R, the average force on the electrons on the wire vanishes. Nevertheless, it may occur that the molecule rectifies the laser-induced oscillating electron motion and consequently a non-zero dc current through the wire is established. In this section we investigate such ratchet currents in molecular wires.
As a working model we consider a molecule consisting of a donor and an acceptor site and N − 2 sites in between (cf. Fig. 1). Each of the N sites is coupled to its nearest neighbors by a hopping matrix element ∆. The laser field renders each level oscillating in time with a position-dependent amplitude. The corresponding time-dependent wire Hamiltonian reads

H_{nn′}(t) = −∆(δ_{n,n′+1} + δ_{n+1,n′}) + [E_n + a(t) x_n] δ_{n,n′},    (41)

where x_n = (N + 1 − 2n)/2 is the scaled position of site |n⟩, and the energy a(t) equals the electron charge multiplied by the time-dependent electrical field of the laser and the distance between two neighboring sites. The energies of the donor and the acceptor orbitals are assumed to be at the level of the chemical potentials of the attached leads, E_1 = E_N = µ_L = µ_R. The bridge levels E_n, n = 2, . . . , N − 1, lie at E_B above the chemical potential, as sketched in Fig. 1. Later, we will also study the modified bridge sketched in Fig. 6 below. We remark that, for the sake of simplicity, intra-atomic dipole excitations are neglected within our model Hamiltonian.
In all numerical studies, we use the hopping matrix element ∆ as the energy unit; in a realistic wire molecule, ∆ is of the order of 0.1 eV. Thus, our chosen wire-lead hopping rate Γ = 0.1∆/ħ yields eΓ = 2.56 × 10⁻⁵ Ampère, and Ω = 3∆/ħ corresponds to a laser frequency in the infrared. Note that for a typical distance of 5 Å between two neighboring sites, a driving amplitude A = ∆ is equivalent to an electrical field strength of 2 × 10⁶ V/cm.
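To make the model concrete, the following sketch assembles the single-particle matrix H_{nn′}(t) for the bridged wire described above, with donor and acceptor at the chemical potential, the bridge raised by E_B, and a harmonic drive a(t) = A sin(Ωt); it is an illustration in the units just quoted (∆ = 1, ħ = 1), not the authors' code.

```python
import numpy as np

def wire_hamiltonian(t, N=5, Delta=1.0, EB=10.0, A=1.0, Omega=3.0):
    """Single-particle H_{nn'}(t) of the driven wire: donor |1> and acceptor |N>
    at the lead chemical potentials (E = 0), bridge levels at E_B, nearest-neighbor
    hopping Delta, and on-site driving a(t)*x_n with a(t) = A*sin(Omega*t)."""
    E = np.zeros(N)
    E[1:N-1] = EB                                    # bridge orbitals
    x = (N + 1 - 2 * np.arange(1, N + 1)) / 2        # scaled positions x_n
    H = np.diag(E + A * np.sin(Omega * t) * x)
    H -= Delta * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
    return H

print(wire_hamiltonian(0.1))
```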
A. Symmetry
It is known from the study of deterministically rocked periodic potentials 47 and of overdamped classical Brownian motion 19 that the symmetry of the equations of motion may rule out any non-zero average current at asymptotic times. Thus, before starting to compute ratchet currents, let us first analyze what kinds of symmetries may prevent the sought-after effect. Apart from the principal interest, such situations with vanishing average current are also of computational relevance, since they allow one to test numerical implementations.
The current formula (25) and the master equation (29) contain, besides Fermi factors, the overlap of the Floquet states with the donor and the acceptor orbitals |1⟩ and |N⟩. Therefore, we focus on symmetries that relate these two. If we choose the origin of position space at the center of the wire, it is the parity transformation P: x → −x that exchanges the donor with the acceptor, |1⟩ ↔ |N⟩. Since we deal here with Floquet states |Φ_α(t)⟩, respectively with their Fourier coefficients |Φ_{α,k}⟩, we must also take into account the time t. This allows for a variety of generalizations of the parity that differ by the accompanying transformation of the time coordinate. For a Hamiltonian of the structure (41), two symmetries come to mind: a(t) = −a(t + π/Ω) and a(t) = −a(−t). Both are present in the case of a purely harmonic driving, i.e. a(t) ∝ sin(Ωt). We derive their consequences for the Floquet states in Appendix A and shall only argue here why they yield a vanishing average current within the present perturbative approach.
Generalized parity
As a first case, we investigate a driving field that obeys a(t) = −a(t + π/Ω). Then, the wire Hamiltonian (41) is invariant under the so-called generalized parity transformation

S_GP: (x, t) → (−x, t + π/Ω).    (42)

Consequently, the Floquet states are either even or odd under this transformation, i.e. they fulfill the relation (A5), which reduces in the tight-binding limit to

⟨N + 1 − n|Φ_{α,k}⟩ = σ_α (−1)^k ⟨n|Φ_{α,k}⟩,    (43)

where σ_α = ±1 according to the generalized parity of the Floquet state |Φ_α(t)⟩.
The average current Ī is defined in Eq. (32) by the current formulae (30) and (31) together with the master equation (29). We now apply the symmetry relation (43) to them in order to interchange the donor state |1⟩ and the acceptor state |N⟩. In addition, we substitute in both the master equation and the current formulae R_αβ,k by R′_αβ,k = σ_α σ_β (−1)^k R_αβ,k. The result is that the new expressions for the current, including the master equation, are identical to the original ones except for the fact that Ī_L, Γ_L and Ī_R, Γ_R are now interchanged (recall that we consider the case µ_L = µ_R). Therefore, we can conclude that

Ī_L = Ī_R,    (44)

which together with the continuity relation (33) yields a vanishing average current, Ī = 0.
Time-reversal parity
A further symmetry is present if the driving is an odd function of time, a(t) = −a(−t). Then, as detailed in Appendix A, the Floquet eigenvalue equation (19) is invariant under the time-reversal parity

S_TP: Φ(x, t) → Φ*(−x, −t),    (45)

i.e. the usual parity combined with time reversal and complex conjugation of the Floquet states Φ. The consequence for the Floquet states is the symmetry relation (A7), which for a tight-binding system reads

⟨n|Φ_{α,k}⟩ = ⟨N + 1 − n|Φ_{α,k}⟩*.    (46)

Inserting this into the current formulae (30) and (31) would yield, if the R_αβ,k were real, again the balance condition (44) and, thus, a vanishing average current. However, the R_αβ,k are in general only real for Γ_L = Γ_R = 0, i.e. for very weak coupling such that the condition (34) for the applicability of the rotating-wave approximation holds. Then, the solution of the master equation is dominated by the RWA solution (36), which is real. In the general case, the solution of the master equation (29) is however complex, and consequently the symmetry (46) does not inhibit a ratchet effect. Still, from the fact that within the RWA the average current vanishes, we can conclude that Ī is of order Γ² for Γ → 0, while it is of order Γ for broken time-reversal symmetry.
B. Rectification from harmonic mixing
The symmetry analysis in Sec. III A explains that a symmetric bridge like the one sketched in Fig. 1 will not result in an average current if the driving is purely harmonic, since a non-zero value is forbidden by the generalized parity (42). A simple way to break the time-reversal part of this symmetry is to add a second harmonic to the driving field, i.e., a contribution with twice the fundamental frequency Ω, such that it is of the form

a(t) = A₁ sin(Ωt) + A₂ sin(2Ωt + φ),    (47)

as sketched in Fig. 2. While shifting the time t by half a period π/Ω now changes the sign of the fundamental-frequency contribution, the second harmonic is left unchanged. The generalized parity is therefore broken and we find generally a non-vanishing average current. The phase shift φ plays here a subtle role. For φ = 0 (or equivalently any multiple of π) the time-reversal parity is still present. Thus, according to the symmetry considerations above, the current vanishes within the rotating-wave approximation. However, as discussed above, we expect beyond RWA for small coupling a current Ī ∝ Γ². Figure 3 confirms this prediction. Yet one observes that already a small deviation from φ = 0 is sufficient to restore the usual weak-coupling behavior, namely a current which is proportional to the coupling strength Γ.
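Numerically, the broken generalized parity can be verified directly on the driving field: a(t + π/Ω) = −a(t) holds for the purely harmonic case but fails once the second harmonic is added. A small check, with illustrative amplitudes:

```python
import numpy as np

Omega = 3.0
t = np.linspace(0.0, 2 * np.pi / Omega, 1000, endpoint=False)

def drive(t, A1=1.0, A2=0.5, phi=np.pi / 2):
    return A1 * np.sin(Omega * t) + A2 * np.sin(2 * Omega * t + phi)

# Generalized parity requires a(t + pi/Omega) = -a(t)
harmonic_ok = np.allclose(np.sin(Omega * (t + np.pi / Omega)), -np.sin(Omega * t))
mixed_ok = np.allclose(drive(t + np.pi / Omega), -drive(t))
print("pure harmonic obeys a(t+T/2) = -a(t):", harmonic_ok)   # True
print("harmonic mixing obeys it:           ", mixed_ok)       # False
```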
The average current for such a harmonic mixing situation is depicted in Fig. 4. For large driving amplitudes, it is essentially independent of the wire length, and thus a wire that consists of only a few orbitals mimics the behavior of an infinite tight-binding system. Figure 5 shows the length dependence of the average current for different driving strengths. The current saturates as a function of the length at a non-zero value. The convergence depends on the driving amplitude and is typically reached once the number of sites exceeds a value of N ≈ 10. For low driving amplitudes the current response is more sensitive to the wire length.
[Fig. 5 caption: Length dependence of the average current for harmonic mixing with phase φ = π/2 for different driving amplitudes; the ratio of the driving amplitudes is fixed by A₁ = 2A₂. The other parameters are as in Fig. 4; the dotted lines serve as a guide to the eye.]
Alternatively, the generalized parity can be broken by keeping in the Hamiltonian a purely harmonic driving field of the form a(t) = A sin(Ωt) while making the level structure of the molecule asymmetric. An example is shown in Fig. 6. 33,48 In this molecular wire model, the inner wire states are arranged in N_g groups of three, i.e. N − 2 = 3N_g. The levels in each such group are shifted by ±E_S/2, forming an asymmetric, saw-tooth-like structure. Figure 7 shows for this model the stationary time-averaged current Ī as a function of the driving amplitude A. In the limit of a very weak laser field, we find Ī ∝ A²E_S, as can be seen from Fig. 8. This behavior is expected from symmetry considerations: on one hand, the asymptotic current must be independent of any initial phase of the driving field and therefore is an even function of the field amplitude A. On the other hand, Ī vanishes for zero step size E_S, since then both parity symmetries are restored. The A²-dependence indicates that the ratchet effect can only be obtained from a treatment beyond linear response. For strong laser fields, we find that Ī is almost independent of the wire length. If the driving is moderately strong, Ī depends in a short wire sensitively on the driving amplitude A and the number of asymmetric molecular groups N_g; even the sign of the current may change with N_g, i.e. we find a current reversal as a function of the wire length. For long wires that comprise five or more wire units (corresponding to 17 or more sites), the average current becomes again length-independent, as can be observed in Fig. 9. This identifies the current reversal as a finite-size effect. Figure 10 depicts the average current vs. the driving frequency Ω, exhibiting resonance peaks as a striking feature. Comparison with the quasienergy spectrum reveals that each peak corresponds to a non-linear resonance between the donor/acceptor and a bridge orbital. While the broader peaks at ħΩ ≈ E_B = 10∆ match the 1:1 resonance (i.e. the driving frequency equals the energy difference), one can identify the sharp peaks for ħΩ ≲ 7∆ as multi-photon transitions. Owing to the broken spatial symmetry of the wire, one expects an asymmetric current-voltage characteristic. This is indeed the case, as depicted in the inset of Fig. 10.
[Fig. 10 caption, parameters as in Fig. 7: The inset displays the dependence of the average current on an externally applied static voltage V, which we assume here to drop solely along the molecule. The driving frequency and amplitude are Ω = 3∆/ħ (cf. arrow in main panel) and A = ∆, respectively.]
IV. CONCLUSIONS
With this work we have detailed our recently presented approach 33 for the computation of the current through a time-dependent nanostructure. The Floquet solutions of the isolated wire provide a well-adapted basis set that keeps the numerical effort for the solution of the master equation relatively low. This allows an efficient theoretical treatment that is feasible even for long wires in combination with strong laser fields.
With this formalism we have investigated the possibility to rectify, with a molecular wire, an oscillating external force brought about by laser radiation, thereby inducing a non-vanishing average current without any net bias. A general requirement for this effect is the absence of any reflection symmetry, even in a generalized sense. A most significant difference between "true" ratchets and the molecular wires studied here is that the latter lack strict spatial periodicity owing to their finite length. However, as demonstrated above, already relatively short wires that consist of approximately 5 to 10 units can mimic the behavior of an infinite ratchet. If the wire is even shorter, we find under certain conditions a current reversal as a function of the wire length, i.e. even the sign of the current may change. This demonstrates that the physics of a coherent quantum ratchet is richer than that of its units, i.e. the combination of coherently coupled wire units, the driving, and the dissipation resulting from the coupling to the leads bears new intriguing effects. A quantitative analysis of a tight-binding model has demonstrated that the resulting currents lie in the range of 10⁻⁹ Ampère and, thus, can be measured with today's techniques.
An alternative experimental realization of the presented results is possible in semiconductor heterostructures, where, instead of a molecule, coherently coupled quantum dots 49 form the central system. A suitable radiation source that matches the frequency scales in this case must operate in the microwave spectral range.
V. ACKNOWLEDGEMENT
We appreciate helpful discussions with Sébastien Camalet, Igor Goychuk, Gert-Ludwig Ingold, and Gerhard Schmid. This work has been supported by Sonderforschungsbereich 486 of the Deutsche Forschungsgemeinschaft and by the Volkswagen-Stiftung under grant No. I/77 217.
APPENDIX A: PARITY OF A SYSTEM UNDER DRIVING BY A DIPOLE FIELD
Although we describe in this work the molecule within a tight-binding approximation, it is more convenient to study its symmetries as a function of a continuous position and to regard the discrete sites as a special case. Let us first consider a Hamiltonian that is an even function of x and, thus, is invariant under the parity transformation P: x → −x. Then, its eigenfunctions ϕ_α can be divided into two classes, even and odd ones, according to the sign in ϕ_α(x) = ±ϕ_α(−x).
Adding a periodically time-dependent dipole force x a(t) to such a Hamiltonian evidently breaks parity symmetry, since P changes the sign of the interaction with the radiation. In a Floquet description, however, we deal with states that are functions of both position and time; we work in the extended space {x, t}. Instead of the stationary Schrödinger equation, we address the eigenvalue problem

H(x, t) Φ(x, t) = ε Φ(x, t),    (A1)

with the so-called Floquet Hamiltonian given by

H(x, t) = H₀(x) + x a(t) − iħ ∂/∂t,    (A2)

where we assume a symmetric static part, H₀(x) = H₀(−x). Our aim is now to generalize the notion of parity to the extended space {x, t} such that the overall transformation leaves the Floquet equation (A1) invariant. This can be achieved if the shape of the driving a(t) is such that an additional time transformation "repairs" the acquired minus sign. We consider two types of transformation: generalized parity and time-reversal parity. Both occur for purely harmonic driving, a(t) = sin(Ωt). In the following two sections we derive their consequences for the Fourier coefficients Φ_k(x) of a Floquet state Φ(x, t), defined via the decomposition

Φ(x, t) = Σ_k e^{−ikΩt} Φ_k(x).    (A3)
Time-inversion parity
A further symmetry is found if a(t) is an odd function of time, a(t) = −a(−t). Then, time inversion transforms the Floquet Hamiltonian (A2) into its complex conjugate, so that the corresponding symmetry is given by the antilinear transformation

S_TP: Φ(x, t) → Φ*(−x, −t).    (A6)

This transformation represents a further generalization of the parity P; we will refer to it as time-inversion parity, since in the literature the term generalized parity is mostly used in the context of the transformation (A4).
Let us now assume that the Floquet Hamiltonian is invariant under the transformation (A6), H(x, t) = H*(−x, −t), and that Φ(x, t) is a Floquet state, i.e., a solution of the eigenvalue equation (A1) with quasienergy ε. Then, Φ*(−x, −t) is also a Floquet state with the same quasienergy. In the absence of any degeneracy, both Floquet states must be identical and, thus, we find as a consequence of the time-inversion parity S_TP that Φ(x, t) = Φ*(−x, −t). This is not necessarily the case in the presence of degeneracies, but then we are able to choose linear combinations of the (degenerate) Floquet states which fulfill the same symmetry relation. Again we are interested in the Fourier decomposition (A3) and obtain

Φ_k(x) = Φ*_k(−x).    (A7)

The time inversion discussed here can be generalized by an additional time shift to read t → t₀ − t. Then, we find by the same line of argumentation that Φ_k(x) and Φ*_k(−x) differ at most by a phase factor. However, for convenience one may choose from the start the origin of the time axis such that t₀ = 0.
Comparison of drying systems in terms of energy consumption, effective mass diffusion, exergy efficiency and improvement-sustainability index in the valorization of waste tomatoes by thermal processes
In this study, the effects of a carrier agent added at rates of 5% and 10% to tomatoes with physical defects that have no market value, and the effects of convective (CD), vacuum (VD), hybrid (HD) and modified temperature-controlled microwave (MTCM) methods on the energy parameters of the powder production processes, were investigated. The products reached their final moisture values in the shortest time with the MTCM method and in the longest time with the CD method. Effective moisture diffusion varied between 8.01×10⁻⁸ and 1.97×10⁻⁶ m²/s. It was determined that MTCM has the lowest energy consumption. SMER values of the drying processes varied between 0.0018329 and 0.007384 kg/kWh. SEC values ranged between 135.42 and 546.76 kWh/kg. Exin, Exout, Exevap, Ex-Vdryer, Ex-Vdrying, SI and IP values of the drying processes were 3.65-4.54 J/s, 3.13-3.43 J/s, 10.91-14.17 kJ/kg, 2.94-3.72, 0.72-0.90, 3.60-9.99 and 0.34-0.91, respectively. It was observed that the VD method is more advantageous than the other drying methods in terms of exergy values. The MTCM method was found to be more advantageous in terms of drying time and energy consumption parameters.
Introduction
45% of agricultural products produced globally are wasted [13,10]. 14.4% of these wastes occur in the field, 8.8% in consumption, 8.5% in processing, 7.1% in distribution and 6.9% in post-harvest processes [16,39,10]. Agricultural product wastes, especially fruit and vegetable wastes, contain plenty of substances with nutritional (fiber, vitamins and minerals), physicochemical, antioxidant, bioactive, antimicrobial and antibacterial properties [28,10]. These wastes have a high potential to be transformed into new value-added products after being processed through physical, chemical and thermo-chemical processes. One of these valued products is food powder. Food powder is a popular topic: it is reported that the economic value of fruit powder is 13.52 million dollars, and that this economic value will increase by 7.40% and reach 14.52 million dollars [20,50]. With the decrease in the weight and volume of agricultural products turned into powder, transportation and storage costs decrease [27] and shelf life is extended. One of the products that is turned into powder and used is the tomato. Tomato (Lycopersicon esculentum L.) is one of the most consumed vegetables, with high nutritional value. Ripe tomato fruit is rich in flavour, fibre, vitamins, minerals, salt and organic acids. The tomato contains 0.6-6.6% dry matter, 0.95-1.0% protein, 4.0-5.0% sugar, 0.2-0.3% fat, 0.8-0.9% cellulose, 0.6% ash, 0.5% organic acids, 19-35 mg/kg vitamin C, 0.2-2 mg/kg carotene, 0.3-1.6 mg/kg thiamine and 1.5-6 mg/kg riboflavin [7,15]. According to 2021 data from the FAO, the amount of fresh vegetable production in the world is approximately 1 billion 155 million tons, and tomato production has a large share of 16.38% in fresh vegetable production [14,11].
Before being turned into powder, agricultural products are subjected to a drying process. The thermal removal of moisture from fresh products is called drying. The main purpose here is to preserve the quality characteristics of the product [6], keep energy consumption at a minimum level and increase energy efficiency by creating high heat and mass transfer. It is reported that an average of 10-25% of the energy spent in the production-consumption chain is consumed in drying processes alone [34], and that in industrially developed countries this rate varies between 7-15% on average [2]. For this reason, in order to reduce energy consumption, products are dried in different systems and their performance values are investigated in terms of energy parameters [26,44]. In the literature, Tagnamas et al. [47] dried carob seeds in a solar-type convective dryer at temperatures of 50, 60, 70 and 80 ºC. They reported that the energy parameters were significantly affected by the temperature values, and found that the energy efficiency rates of the drying processes varied between 2.6-4.2%. Kusuma et al. [25] dried banana leaves using microwave-assisted convective, microwave and convective methods. They determined the average energy consumption values of the drying processes as 0.0193, 0.0198 and 8.100 kWh, respectively. El-Mesery et al. [12] dried grape fruits with a hot-air convective dryer and investigated the thermal analysis of the drying processes. Thermal efficiency rates of drying varied between 50.31 and 74.52%; the lowest thermal efficiency was found between 29.77% and 52.45%, and the minimum specific energy consumption was found to be 27.44 MJ/kg. Many further studies on the energy analysis of drying processes have been conducted, e.g., by Kamble et al. [22], Guangbin et al. [17] and Moreno et al. [32]. However, no comprehensive study has been found that investigates the effects of drying methods on the drying kinetics, energy consumption, energy-exergy efficiency, and sustainability and improvement analysis of the processes of converting waste tomatoes into tomato powder with different drying systems.
In this study, the drying kinetics, effective moisture diffusion, energy consumption, energy-exergy efficiency, and sustainability and recoverability indices of tomato powder drying processes produced with microwave-assisted hot air hybrid (HD), modified temperature-controlled microwave (MTCM), hot air convective (CD), and vacuum-assisted convective drying (VD) systems using waste tomatoes were investigated.
Fresh product
The waste tomatoes used in this study were samples collected from a soilless agriculture greenhouse in the spring of 2023. Within the scope of the study, the Alsancak F1 tomato variety was selected for powder production. This variety is suitable for both field and greenhouse cultivation, but has no resistance and/or tolerance to biotic and abiotic stress factors. As a result of the inhibition of calcium uptake due to sodium-calcium competition in the environment where the product is grown, fruits with blossom-end rot physiological disorder and no market value were used in the production of powder. Before moisture determination and drying of the products, the stem and leaf parts of the tomatoes were removed and the fruits were washed thoroughly with chlorinated tap water. The parts with blossom-end rot were cut away and cleaned with a sharp knife (Fig. 1).
Preparation of puree
Waste tomato samples were blanched for 1 minute and turned into a puree with a Waring brand HGB2WTG4 model (220-240 V, 50-60 Hz, 400 W) glass blender. Maltodextrin (Parmor brand, C12H22O11) was added to the prepared puree at 5% and 10% and mixed with a hand blender for 5 minutes. A Beko brand model 2166 (700 W) food processor was used (1.5 min) to homogenize the dried samples.
Moisture determination processes and drying processes
To determine the initial moisture content of the waste tomato puree, samples were dried in an oven (Şimşek Laborteknik brand, ST-055 model) set at 70 ºC until the weight change was constant (Pixton and Warburton, 1973). The initial moisture content (d.b.) of the samples was determined as an average of 4.60 ± 0.21 g moisture/g dry matter, and the samples were dried to an average value of 0.035 ± 0.024 g moisture/g dry matter. The weight change of the dried waste tomato puree samples was determined with an AND brand GF-300 model precision scale. MTCM (70 ºC), HD (350 W + 70 ºC), VD (70 ºC), and CD (70 ºC) were used to dry the waste tomato purees (Table 1). For the CD method, a Şimşek Laborteknik brand, ST-055 model hot air convective dryer was used; the samples were dried at a temperature of 70 ºC.
Modified temperature-controlled microwave (MTCM)

An Optris brand non-contact infrared temperature sensor was mounted on a Kenwood brand 13J28 model microwave oven, taking its technical specifications into account. The drying material was dried by placing it on the rotating glass tray inside the microwave oven. The device used is a household microwave oven with 230 V, 50 Hz, 800 W, and 2450 MHz technical specifications. The surface temperature of the waste tomato puree samples is measured with the non-contact infrared temperature sensor, and the read values are transmitted to the control panel. The MTCM stops when the surface temperature of the product reaches the drying temperature set on the control panel. After a 15-second rest period, the MTCM automatically starts working again when the product temperature falls below the specified drying temperature (± 2 ºC). If the product temperature does not decrease below the drying temperature during this period, the device rests for another specified period [48,38]. The samples were dried in the MTCM method at a temperature of 70 ºC.
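The control scheme described above is essentially an on/off (bang-bang) controller with a fixed rest period. Purely as an illustration, the sketch below mimics that duty cycle; the sensor and magnetron hooks are hypothetical placeholders, not the dryer's actual interface:

```python
import time

SET_POINT_C = 70.0   # drying temperature set on the control panel
BAND_C = 2.0         # the +/- 2 C band described for the MTCM dryer
REST_S = 15          # fixed rest period, s

def mtcm_cycle(read_surface_temp_c, magnetron_on, magnetron_off):
    """One on/off cycle of the temperature-control logic described above."""
    if read_surface_temp_c() >= SET_POINT_C:
        magnetron_off()        # set point reached: stop heating
        time.sleep(REST_S)     # mandatory 15 s rest period
        # Keep resting until the product cools below the lower band.
        while read_surface_temp_c() > SET_POINT_C - BAND_C:
            time.sleep(REST_S)
    magnetron_on()             # resume (or continue) heating
```

The ±2 ºC band keeps the magnetron from switching on and off rapidly around the set point.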
Vacuum drying (VD)
A CLS brand, CLVO-64T model vacuum dryer was used. The samples were dried in the VD method at a temperature of 70 ºC.
Hybrid drying (HD)
The hybrid dryer is an oven-assisted microwave dryer. The drying process is carried out by placing the materials on the rotating glass tray inside. The hybrid dryer is an Ariston brand, model MWHA 33343 B. In this dryer, waste tomato purees were dried at 350 W + 70 ºC.
Calculation of moisture content

Equation 1 was used to determine the moisture content of waste tomato purees on a dry basis (d.b.); in its standard form, Nd.b. = (Mi − Ml)/Ml. Here: Mi: initial weight (g), Ml: final weight (g), Nd.b.: moisture content (g moisture/g dry matter).

Drying kinetics

Equation 2 was used to determine the drying rates of waste tomato purees; in its standard form, DR = (M(t) − M(t+dt))/dt. Here: M(t): moisture content at time t (g water/g dry matter), dt: time interval (min), DR: drying rate (g water/(g dry matter·min)).
Equation 3 was used to determine the moisture ratio (the fraction of removable moisture remaining) during the drying process; in its standard form, MR = (M − Me)/(Mo − Me).

Here: MR: moisture ratio (dimensionless), M: instantaneous moisture content (g moisture/g dry matter), Me: equilibrium moisture content (g moisture/g dry matter), Mo: initial moisture content (g moisture/g dry matter).
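To make Equations 1-3 concrete, the short sketch below computes dry-basis moisture content, drying rate, and moisture ratio from a weight time series. All numbers are invented for illustration, and Me is taken as approximately zero, a common simplification under typical drying conditions:

```python
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # time, min (illustrative)
w = np.array([50.0, 38.0, 29.0, 23.0, 20.0])   # sample weight, g (illustrative)
Ml = 9.0                                       # dry matter weight, g (assumed)

M = (w - Ml) / Ml                   # Eq. 1: moisture content, d.b.
DR = (M[:-1] - M[1:]) / np.diff(t)  # Eq. 2: drying rate per interval
Me = 0.0                            # equilibrium moisture content, assumed ~0
MR = (M - Me) / (M[0] - Me)         # Eq. 3: moisture ratio

print("M  =", np.round(M, 3))
print("DR =", np.round(DR, 4))
print("MR =", np.round(MR, 3))
```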
Energy consumption values
A Polaxtor brand PLX-15366 model power meter (± 0.02 kWh) was used to measure the energy consumed in the drying processes of waste tomato puree.
Specific moisture extraction rate (SMER)
Equation 5 was used to calculate the amount of moisture removed per unit of energy consumed (SMER) in the drying processes of waste tomato puree [45].
Specific energy consumption (SEC)
Equation 6 was used to calculate the specific energy consumption value of the waste tomato puree drying processes [33].
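The equation images for SMER and SEC did not survive extraction; the definitions conventionally used for these quantities in the drying literature, and assumed here, are:

```latex
\mathrm{SMER} = \frac{m_w}{E_t}\ \left[\mathrm{kg\,kWh^{-1}}\right],
\qquad
\mathrm{SEC} = \frac{E_t}{m_w}\ \left[\mathrm{kWh\,kg^{-1}}\right]
```

where m_w is the mass of water removed and E_t is the total energy read from the power meter. The two measures are reciprocal, so a higher SMER implies a lower SEC.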
Statistical analysis
Data were evaluated with SPSS 23, and Duncan's test was performed with this program. SigmaPlot 10 was used to generate the drying kinetics curves. Significance was evaluated at p < 0.05.
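Duncan's multiple range test is available in SPSS but has no direct equivalent in the common Python statistics libraries; purely to illustrate the same mean-separation step, the sketch below uses Tukey's HSD as a stand-in (not Duncan's test) on invented SEC values:

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented SEC replicates for the four drying methods, for illustration only.
df = pd.DataFrame({
    "method": ["CD"] * 3 + ["VD"] * 3 + ["HD"] * 3 + ["MTCM"] * 3,
    "sec":    [6.1, 6.3, 6.0, 4.9, 5.1, 5.0, 1.3, 1.4, 1.2, 0.75, 0.74, 0.76],
})

# Pairwise comparison of the method means at alpha = 0.05.
print(pairwise_tukeyhsd(df["sec"], df["method"], alpha=0.05))
```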
Drying values
Moisture and drying rates determined in different drying methods of waste tomato puree are given in Fig. 2.
According to Fig. 2, the microwave drying methods dried the waste tomato puree in a shorter time than the convective drying methods, because the drying rates of the microwave methods were higher. Kılıçlı et al. [24] found that drying time decreased at high temperatures in the convective drying of tomato puree, owing to the increase in drying rates with temperature. In all drying methods, the drying time decreased when the carrier agent ratio was increased from 5% to 10%, because the carrier agent accelerated the removal of water from the product during drying. Özçelik et al. (2019) likewise reported that increasing the carrier agent ratio used in blackberry puree contributed to a decrease in drying time. The average DR values of the waste tomato puree dried with 5% and 10% carrier agent by the CD, VD, HD, and MTCM methods were determined as 0.0051-0.0061, 0.0073-0.0069, 0.050-0.054, and 0.0174-0.0169 g moisture/(g dry matter·min), respectively. Kılıçlı et al. [24] determined the DR values of the tomato powder they produced using 7.5% aquafaba green pea powder as 0.15, 0.26, and 0.30 g moisture/g dry matter for temperatures of 50, 60, and 70 ºC, respectively. The DR data obtained in this study are lower than these literature values because of differences in the initial moisture content of the product, the physicochemical properties of the product, and the carrier agent powder used. Samimi-Akhijahani and Arabhosseini [40] determined maximum DR values of 0.1, 0.08, and 0.05 g moisture/(g dry matter·min) for tomatoes with slice thicknesses of 3, 5, and 7 mm dried in a solar-type convective dryer with a sun-tracking system; the findings of this study are compatible with those findings. The DR values determined for the microwave drying processes were higher than those determined for the convective methods, because microwave energy produces heat directly in the product [31,18]. The DR values of the drying-densification of tomato residues using microwave and convective methods were likewise found to be higher in the microwave [29].
Effective moisture diffusion
Effective diffusion values of the dried waste tomato purees are given in Table 2. According to Table 2, the drying methods and carrier agent ratios affected the effective diffusion values of the drying processes. The effective diffusion values of the convective drying methods were observed to be higher than those of the microwave drying methods [21]. This is because microwave drying creates a pressure difference in the product that drives the escaping moisture forward; this phenomenon reduces the diffusion area of the moisture and lowers the effective diffusion value. Minaei et al. [30] dried pomegranate seeds, as control and with pre-treatment, using vacuum, microwave, and infrared methods, and determined that the effective moisture diffusion for these methods varied between 6.77-52.5×10⁻¹⁰ m²/s, 3.43-29.19×10⁻¹⁰ m²/s, and 4-32×10⁻¹⁰ m²/s, respectively, which is similar to the findings obtained within the scope of this study. Al-Hilphy et al. [3] dried tomato slices in a convective dryer with a halogen lamp at temperatures of 60, 70, and 80 ºC and reported that the effective moisture diffusion varied between 7.96×10⁻⁹ and 1.07×10⁻⁸ m²/s, also compatible with the findings obtained in this study.
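The paper does not state how the Deff values were estimated; a common approach, assumed in the sketch below, is the slope method based on the first-term thin-slab solution of Fick's second law, shown here with invented moisture ratios and an assumed layer half-thickness:

```python
import numpy as np

# First-term thin-slab solution of Fick's second law:
#   MR = (8 / pi**2) * exp(-pi**2 * Deff * t / (4 * L**2))
# so Deff follows from the slope of ln(MR) versus t.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0]) * 60.0  # time, s (illustrative)
MR = np.array([1.0, 0.62, 0.37, 0.21, 0.12])         # moisture ratio (illustrative)
L = 0.004                                            # layer half-thickness, m (assumed)

slope = np.polyfit(t, np.log(MR), 1)[0]  # slope of ln(MR) vs t
Deff = -slope * 4.0 * L**2 / np.pi**2
print(f"Deff = {Deff:.2e} m^2/s")
```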
Energy consumption values
SMER and SEC values of the dried waste tomato puree are given in Fig. 3. According to Fig. 3, the drying methods and carrier agent ratios affected the energy consumption of the waste tomato puree drying processes. The SEC values of the convective drying methods were found to be higher than those of the microwave drying methods [41], because in the microwave methods the energy is generated directly in the dried product, which shortens the drying time. Szadzińska et al. [46] likewise found, in a drying study using convective and microwave methods, that the total energy consumption was lower in the microwave drying processes. Increasing the carrier agent ratio in the drying processes increased the SMER values and decreased the SEC values; the carrier agent improved the drying kinetics of the waste tomato puree. Taşova and Öcalan [49] investigated the effect of different carrier agent ratios in pumpkin residues on the SMER and SEC parameters of the drying processes; in their study, increasing the carrier agent ratio decreased both SMER and SEC values, which is the opposite of the relationship found here. The reason is that the interaction between pumpkin residue and carrier agent differs from the interaction between waste tomato and carrier agent. The VD method is more advantageous than the CD method in terms of the SMER and SEC energy consumption parameters, because the total energy consumption is lower in the VD method than in the CD method. Kaveh et al. [23] found that the SEC values in green pea drying with convective and vacuum methods were lower in the vacuum method, because the drying time under vacuum is shorter than in the convective method. The MTCM method was observed to be more advantageous than the HD method in terms of energy consumption parameters: although the drying time of the MTCM method was longer than that of the HD method, its SMER values were higher and its SEC values lower. This is because the modified microwave dryer automatically stops for a rest period (15 s) when the product reaches the set drying temperature, which extends the total drying time. Orikasa et al. [36] determined the total energy consumption of tomato drying using vacuum-assisted microwave, microwave, and convective methods as 1.352, 0.222, and 6.212 kWh, respectively; the lowest energy consumption was determined in the microwave drying method, due to microwave energy producing higher thermal energy in the product. In this study, the total energy consumption for MTCM was determined as 0.751 kWh for 5% and 0.737 kWh for 10% carrier agent, while for HD these values were 1.387 kWh for 5% and 1.228 kWh for 10%.
Exergy energy, efficiency and sustainability-improvement index values
Exergy energy, energy efficiency, and sustainability index values of the waste tomato puree drying processes are given in Table 3. Sharma et al. [42] found in their study that the maximum Exin value in a tomato drying process using a solar-type dryer was 350 W. This value is higher than in this study because the heat source of the solar dryer operates for a long time under uncontrolled radiation from the sun; the heat insulation of the dryer and the fact that the tomato was dried in slices also had an effect. It was determined that the Exout values of the drying process of waste tomato puree varied between 3.1335-3.4253 J/s. Sharma et al. [42] reported that Exout values of tomato drying processes varied between 0.41738-0.67141. In this study, it was determined that Exevap values varied between 10.9099-14.1691 kJ/kg. Aviara et al. [4] dried cassava starch in a tray-type oven at temperatures increasing up to 40-60 ºC and examined the Exin and Exout energy values of the drying processes. Sharma et al. [42] reported that sustainability index and improvement index values ranged between 1.55-2.39 and 0.006966-0.065984, respectively. The findings in the literature are generally compatible with the findings for the drying processes of waste tomato puree; the differences detected are thought to be due to the moisture content of the products, their physical properties and type, and the technical and thermal differences of the dryers.
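As with the other equations, the exergy formulas themselves were lost in extraction; the forms commonly used in this literature for exergy efficiency, sustainability index (SI), and improvement potential (IP, the Van Gool form), and assumed here, are:

```latex
\eta_{ex} = \frac{Ex_{out}}{Ex_{in}},
\qquad
SI = \frac{1}{1 - \eta_{ex}},
\qquad
IP = \left(1 - \eta_{ex}\right)\left(Ex_{in} - Ex_{out}\right)
```

Under these definitions, a higher exergy efficiency raises the sustainability index and shrinks the improvement potential, which matches the paper's reading of a higher improvement index as more room for improvement.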
Conclusion
Waste tomato puree was converted into powder by drying with different methods. The drying methods and carrier agent ratios were effective in changing the drying kinetics of the waste tomato puree. The drying rates and moisture removal of the products reached their highest level with the microwave methods and the 10% carrier agent ratio; the convective drying methods remained below the microwave methods in terms of drying and moisture rates. The drying methods and carrier agent ratios also affected the effective moisture diffusion values of the waste tomatoes. The effective moisture diffusion values determined for the convective methods were found to be higher than for the microwave methods: since the drying rates determined for the convective methods are lower than for the microwave methods, the moisture leaving the product spreads over a wider area. With the increase in the carrier agent ratio, the energy consumption parameters were positively affected, and the microwave drying methods were found to be more advantageous than the convective methods in terms of the determined energy consumption values. The VD method was found to be the most advantageous method among the drying processes in terms of the exergy energy efficiency of the dryers (3.7224), the exergy energy efficiency of the drying processes (0.8999), and the sustainability index (0.9850). In the MTCM method, the improvement index (0.9103) was calculated to be higher than in the other dryers. Thus, although more advantageous results were obtained in terms of drying kinetics and energy consumption parameters in the MTCM method, its improvement index was found to be higher. The use of this dryer in the production of waste tomato powder is sufficient, but higher performance values can be obtained through improvement.
Conflict of interest

The author of the study has no conflict of interest with any other author.
Figure 2. Drying kinetics of waste tomato puree.

Figure 3. SMER and SEC values of waste tomato puree drying processes.

Table 2. Effective diffusion values of waste tomato puree.

Table 3. Exergy energy and exergy efficiency values of drying processes.
An Evaluation of Portuguese Societal Opinion towards the Practice of Bullfighting
Simple Summary Bullfighting is one of the most controversial topics in animal welfare and ethics in recent years. This activity is an issue at the forefront of many animal welfare organizations. In the present study, an online questionnaire was used to seek Portuguese citizens’ opinions towards bullfighting and to relate these opinions to certain demographic characteristics. The majority of respondents had negative opinions about bullfights. Most questioned the artistic reasons to take the bull’s life in the name of culture and did not attend bullfighting events. However, the population interviewed was not representative of the Portuguese population. Men, older people, Roman Catholics, and people from rural areas (underrepresented in the study sample) showed a more favorable attitude towards bullfighting. Contrast between regions was also reflected; the districts where the most favorable opinions were collected (Satarém, Évora, Beja, and Portalegre) were those with the greatest presence of bull breeders. Public opinion research is an important policy-making instrument that could be useful in the face of possible initiatives to ban bullfighting at regional or country levels. Abstract Bullfighting is a controversial sport that continues to be legally permitted in a number of countries around the world, including Portugal. The spectacle has attracted significant attention from animal protectionist groups for many years because of concerns for animal distress, pain, and suffering during the fights. While there has been strong support for the sport in Portugal in the past, there is a need to study social profiles regarding the acceptability of this sport before a case can be made for changes in regional and national legislation. In this study, Portuguese attendance patterns at bullfights were assessed in addition to public opinions on welfare and ethical aspects of bullfighting, based on demographic variables. Study participants (n = 8248) were largely recruited through Portuguese social media channels (respondents may not be representative of the Portuguese population). Questionnaire data were evaluated by means of frequency tables, multiple correspondence analyses, and a two-step cluster analysis. Most respondents had a negative opinion about bullfighting and perceived that bullfighting had no positive impact on the country. However, while most respondents thought that the bull suffered during bullfighting, the opinion regarding banning bullfighting was far from unanimous. Based on the demographic analysis, the profile of individuals with more favorable responses towards bullfighting were men > 65 years old, of Roman Catholic faith, of low- or high-income levels, from more rural areas of Portugal. Somewhat surprisingly, there was a tendency to favor bullfighting amongst veterinary professionals. We conclude that there were still large pockets of individuals who desire to maintain the practice of traditional bullfighting within Portuguese society, despite recognition of animal suffering during the event.
Introduction
Bullfighting, tauromachia or tauromachy, as it is frequently called, is a traditional exhibition seen in Spain, Portugal, and Southern France. It was introduced by Spaniards to Colombia, Ecuador, Venezuela, Peru, and Mexico dating back to the 16th century [1]. Portuguese-style bullfights are called touradas or corridas de touros, and each year, approximately 2500 bulls are used in fights in Portugal [2]. The Portuguese bullfight is conducted by a cavaliero (rider) on horseback, who stabs the bull with several bandeiras (small javelins). The bull is also challenged by a group of men on foot called forcados, who are usually unarmed and who work to subdue the bull [3].
In Portugal, unlike other countries, the bull is not killed in full view of the public. However, on most occasions, the bull is sent to a slaughterhouse after the fight, where it is slaughtered according to Portuguese regulations. The exception to this is in Barrancos, a town in Southern Portugal, in which a special legal dispensation has been granted to kill the bull during fights as part of a long-standing tradition.
Bullfighting across Portugal is not consistently seen across the country and most fights take place during summer [2]. Of the country's 308 council areas, only 44 or approximately 15% have bullfighting activity, with Lisboa and Albufeira (in the Algarve) providing the main venues. Both the number of touradas and the number of spectators has generally declined in recent years. However, between 2016 and 2017, there was a 4.4% increase in bull fights, something that had not happened since 2010 [2].
Supporters of bullfighting consider it to be a deeply ingrained and integral part of the national culture and identity [4]. In addition, although poorly documented, the economic value of bullfighting in Portugal is thought to be significant in certain regions. Minor economic gains are realized at cattle ranches through breeding, raising, and caring for the bulls, with greater economic gains going to those working as entrepreneurs, bullfighters and their assistants, and arena staff [5]. Supporters also suggest that bullfighting attracts tourists; however, recent surveys suggest that the number of tourists attracted to this kind of activity is small. In fact, bullfighting might even be perceived as an unattractive event to tourists [6].
Due to the perceived suffering and distress of bulls during the fights there has been significant interest by animal protectionist groups to abolish bullfighting. Portuguese groups have been active in campaigning against bullfighting, and from 2002 this was their main campaign activity [7]. The Portuguese government has recently rejected a bill to ban bullfights that was submitted by PAN, the People-Animals-Nature party [8]. The bill was rejected by all major parties with few abstentions. Within the E.U., lawmakers in the European Parliament voted to approve an amendment to the 2016 EU budget indicating that EU subsidies should not go to farms that raise bulls for use in bullfighting. They added that such funding "is a clear violation of the European Convention for the Protection of Animals Kept for Farming Purposes" [9].
Some researchers have suggested that fighting bulls secrete large quantities of endorphins during the fight that help to mitigate pain [10,11]. Endorphins are hormones that can modulate physiologic responses to pain, but also to aversive stimuli [12]. Despite this, it seems evident that during wounding and other physical attacks that occur during bullfights, bulls exhibit behaviors indicative of distress including tail swishing, labored breathing, exhaustion, and reluctance to move [13]. A previous study also described severe anatomical damage to bulls after fights, concluding that this type of show clearly violates the minimum animal welfare standards and represents a clear expression of animal abuse [14]. In Portugal, the law 92/95 states that all unjustified violence against animals is forbidden, examples including acts that consist of unnecessarily inflicting death, cruel and prolonged suffering, or severe lesions to an animal [15]. Although animal abuse has been a part of tradition and culture, in the course of recent decades a number of practices have been questioned and many have been forbidden by law. Despite this, there is still legal protection of bullfighting in several countries on the grounds of preserving bullfighting as a national tradition [16]. In line with this, the Portuguese legal system criminalizes violence towards animals, but exceptions to this are granted for bullfights (and other entertainment using bulls) [7]. Since bulls are animals with the capacity to suffer pain, the reasons to oppose bullfighting would be the same as those to oppose other animal blood sports or practices that cause suffering and death of animals [17]. Pressures from the European Parliament exist to abolish bullfighting in those European countries where it still exists due to the duality of this activity occurring in the EU, in which animal welfare has been declared a priority [16]. In the described context, bullfighting goes against an animal's rights and could be only permitted through a legal loophole expressly exempting bullfights from the laws of animal protection.
Although there seems to be a heightened sense of public contempt in many countries toward the treatment of animals and toward the use of animals in 'sport', several blood sports with animals still maintain a certain popularity in different areas [18]. In the context of understanding why people are attracted to blood sports and why they still exist, one significant reason includes a lack of understanding of the basic needs and well-being of animals [19]. This lack of understanding could lead to a lack of empathy though objectification. This could be attributed to a lack of education regarding basic animal care, behavior, and welfare [19]. In the case of bullfighting, a previous paper that evaluated the opinions of supporters included a primary motivation of having grown up in family environments related to bullfighting, the aesthetics of the show, that is, considering it as an artistic expression in which the bullfighter is trained in a certain style and elicits emotion through the act of the fight or even ecological reasons in that the existence of bullfighting preserves the breed of cattle and the typical ecosystem in which it is raised [20].
A study conducted well over a decade ago suggested that many Portuguese citizens believe that bullfights should be abolished due to their cruel and violent nature [21]. In that study, 51% of respondents indicated support for laws banning bullfighting, whereas 40% were opposed to changing the status quo [21]. However, recent informal polls have suggested that social division is still present on this topic in Portugal and there is a need to examine this issue more formally.
The aim of this study was to characterize Portuguese opinions regarding bullfighting by demographics variables, to better understand Portuguese societal support for updated animal welfare practices.
Data Collection
Using the form function in Google Docs, a Portuguese language questionnaire, consisting exclusively of closed questions, was generated through consideration of existing literature to collect information regarding attitudes to bullfighting in Portugal [21][22][23][24]. Questions explored whether the respondents attended or had attended bullfighting shows. For those respondents that still attended bullfighting, questions inquired about their motivation to attend, the age they started to attend these events, and whether they would continue to attend if the bull was replaced by another animal (i.e., a dog, that is, a domestic animal towards which people generally show higher level of empathy) or a robot (substitution by something that does not imply animal suffering). For those who had attended at some point but no longer did, questions asked about their reasons for discontinuing. The core part of the questionnaire explored general opinions regarding bullfighting: whether the respondent considered that bullfighting was beneficial for the economy, tourism or culture of Portugal, whether bullfighting and related supporting activities should receive public funding, if they thought that bullfighting generates positive connotations for the country, whether it has greater, lesser, or equal artistic value than painting, and respondents' opinions on the bull's capacity to suffer pain compared to a dolphin, dog, or human. The survey also sought an opinion as to whether the bull suffers during fights and if respondents thought that the fighting bull breed would disappear if bullfighting did not exist. Finally, respondents were asked if bullfighting should be allowed to continue. Demographic characteristics including gender, age, occupation, education level, monthly income, religion, region of residence, habitat (rural or urban), and whether the respondent had a relative linked to the bullfighting industry were also collected. Descriptive statistics regarding the demographic characteristics of the studied population and the general population of Portugal are shown in Table 1. The survey ran from December 2016 to March 2017 and was communicated through social media such as Facebook, Instagram, and LinkedIn. It was also shared via e-mail with the personal contacts of the research group members and was chain-shared by multiple users. Before the questionnaire started, the online survey included a brief description of the study and its aim. All questionnaire information collected was anonymous and participation was voluntary. No incentives were provided for participating in this study. Prior to dissemination, the questionnaire was first administered to 10 people to ensure clarity of questions. Minor edits were incorporated before widespread administration to the general public. In all, 8248 responses were obtained (Table 1) and all Portuguese districts were represented in responses (Table 2).
Statistical Analysis
All statistical tests were conducted using SPSS 15.0 (SPSS Inc., Chicago, IL, USA). To evaluate raw data, frequency tables were generated for each question. Following this, a multiple correspondence analysis (MCA) was performed [25]. The goal of the MCA is to reduce a set of possibly correlated variables (including bullfighting attendance patterns, demographic variables, and opinions) to a smaller group of linearly noncorrelated ones (dimensions). In this study, the number of dimensions was set to two to allow for a two-dimensional graphical representation. The position of the full set of categories for each investigated variable (category-points) in the MCA graph is the basis for revealing relationships among them: variable categories with a similar profile tend to be grouped together whereas those negatively correlated are positioned on opposite sides of the graph. The origin of the graph reflects the weighted average of the categories for each variable considered in the study (centroid of each variables). As a result, the closer a category point is to the origin, the closer it is to the average profile. From the MCA, the correlation matrix of the resulting variables (once optimal scaling had been performed) was also completed in the analysis. Finally, a two-step cluster analysis (TSCA) was performed to identify clusters of people with a similar opinion about bullfighting.
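The analysis itself was run in SPSS; purely to illustrate what the MCA step computes, the sketch below performs correspondence analysis on the one-hot indicator matrix with numpy, using an invented toy table in place of the survey data (dedicated libraries such as prince package the same computation):

```python
import numpy as np
import pandas as pd

# Invented toy responses standing in for the survey; each column is categorical.
df = pd.DataFrame({
    "gender":  ["F", "M", "F", "F", "M", "M"],
    "habitat": ["urban", "rural", "urban", "urban", "rural", "urban"],
    "ban":     ["yes", "no", "yes", "yes", "no", "yes"],
})

Z = pd.get_dummies(df).to_numpy(dtype=float)        # indicator (one-hot) matrix

P = Z / Z.sum()                                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

coords = (U * s) / np.sqrt(r)[:, None]               # respondent coordinates
inertia = s**2 / (s**2).sum()                        # inertia share per dimension
print(coords[:, :2])                                 # first two dimensions
print(inertia[:2])
```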
Results
In terms of respondent demographics, approximately 61% were female (vs. 53% in the Portuguese general population) and 84% were less than 48 years old (whereas almost 53% were less than 45 years old in the general population) (Table 1). Most respondents were employed full-time with >95% indicating that they did not work in the veterinary profession. Just under 70% of respondents had undergone some post-secondary education (~25% in Portugal) and 40% had a net monthly income < 1590 euros (vs. 45% in Portugal). Approximately half of respondents identified themselves as Roman Catholics (56% in Portugal) and ~75% lived in an urban environment (vs. ~64% in Portugal). To summarize, most respondents were relatively young, well-educated, and urban-dwelling women.
Approximately 50% of respondents never attended bullfighting events, while almost 20% had attended at some point but no longer did primarily because of animal welfare concerns, whereas the rest of the participants continued to attend bullfighting events. Of these, most began to attend bullfighting before the age of 18 and they attended for cultural reasons. Similarly, most indicated that they would stop attending if the bull was replaced by another animal (i.e., a dog) or a robot (Table 3). According to the results from this survey, most respondents had a negative opinion about bullfighting with the predominant perception being that bullfighting has no positive impact on Portuguese culture or tourism. With respect to its impact on economy, there was more discrepancy in responses, but again, the majority felt that this activity should not receive public funding. In general, it was believed among those surveyed that bullfighting does not generate positive press for the country, and in line with this, bullfighting was given less artistic value than painting (Table 4). It was widely accepted that the bull suffers during bullfights and, in line with this, most respondents indicated that a bull's capacity to suffer pain is like that of other animals or humans. Additionally, a greater number of respondents (although they were still a minority) believed that the fighting bull would disappear as a breed if bullfighting did not exist. Only 30% of respondents considered that bullfighting should be allowed to continue (Table 4). The MCA, in which the data have been standardized, explained 30% of the variance of the data on demographic and bullfighting opinions from 8248 respondents. The percentage of variance explained by the first dimension was ~20%, and for the second dimension was 11.3%. The main results of the MCA are presented in Figure 1. Dimension one clearly differentiates between people with positive and negative opinions regarding bullfighting. The correlation matrix of the transformed variables considered in the study (after optimal scaling) is presented in Appendix A (Table A1).
Figure 1. Variables with a similar profile tend to be grouped together whereas those negatively correlated are positioned on diagonally opposite sides of the graph. The origin of the graph reflects the weighted average for each demographic or opinion variable considered. The closer a variable is to the origin, the closer it is to the average profile of the survey respondents. * Positive opinions: location of the category points "Yes" for the questions "Bullfighting favors economy", "Bullfighting favors tourism", "Bullfighting favors culture", "Bullfighting must receive public funds", "Bullfighting generates positive connotations for the country" and "Bullfighting continuity should be allowed" and the category points "Bullfighting higher" and "Equal" for the question "Bullfighting has greater, less or equal artistic value than painting". ** Negative opinions: location of the category points "No" and "Painting higher" for the same questions. *** Only the districts with opinions more favorable to bullfighting are shown. The rest are located in or near quadrants A and C.

According to the TSCA, two clusters were formed. Cluster 1, which includes 73% of respondents, was the group with unfavorable opinions towards bullfighting. These individuals mostly did not attend or had stopped attending bullfighting. Cluster 2, representing 27% of respondents, was the group with a more favorable view towards bullfighting, who mostly still attended events and were least likely to recognize the suffering of the animal. Tables 5 and 6, representing within-cluster percentages, demonstrate how each opinion or demographic variable is split within each cluster.

Unfavorable views of bullfighting were expressed more commonly by women, amongst those with average income levels, those living in urban areas, and in individuals with higher education levels.
The categories corresponding to men, high- or low-income levels, rural living, and lower education level lay somewhere in between (Figure 1), indicating that among these different categories, opinion was more divided. Older, retired individuals were noted to value bullfighting positively more often, and to a lesser extent those less than 28 years of age (although the categories corresponding to rural habitat, low income level, and age under 28 were close in the MCA graph and are co-correlated) (Figure 1). Results from the TSCA confirm these demographic patterns. For example, 41.1% of men but only 18.6% of women were in cluster 2. Regarding age, the category corresponding to those >67 years old was the one with the highest percentage in cluster 2 (Table 6). Individuals who identified themselves as non-practicing or agnostic as well as people indicating that they subscribed to a religion other than Roman Catholicism tended to have more negative opinions about bullfighting. The category corresponding to Catholics was located somewhere between positive and negative opinions (Figure 1). Thus, the TSCA indicated that 91.9% of those who declared themselves agnostic were in cluster 1, while only 53.3% of Roman Catholics were in this cluster (Table 6). Interestingly, respondents who indicated that their profession was veterinary medicine had a slightly more favorable opinion towards bullfighting than those who did not (Figure 1). Specifically, the percentage of veterinarians in cluster 2 was 33.0% compared to 26.8% in the respondents whose profession was not veterinary medicine (Table 6).
Regarding the place of residence, the MCA graph indicated that favorable responses to bullfighting occurred in individuals living closer to the districts of Satarém, Évora, Beja, and Portalegre (i.e., bordering districts that extend from the center to the south of Portugal) and to a lesser extent in Açores (Figure 1). People from northern districts, in addition to Faro, tended to have the most unfavorable opinion towards bullfighting. The TSCA indicated that the districts of Satarém, Évora, Beja, and Portalegre were the only ones in which the percentage of respondents that fit in cluster 2 exceeded 50%, while in Açores they were approximately 41.1% (data not shown).
Discussion
The results of this survey about Portuguese societal attitudes towards bullfighting indicated that the majority of those responding held negative opinions about the sport. Although bullfighting is still popular with thousands of fans across Portugal, it has lost its relevance in a more modern society. Interestingly, most respondents who had stopped attending bullfighting did so for animal welfare reasons, which indicated a growing social awareness towards this issue. Despite this, the popularity of bullfighting has extended beyond its traditional home ground (Portugal, Spain, and South and Central America) to reach new attendees in North America, Japan, and Eastern Europe [26].
Bullfighting fans claim that there are moral arguments in favor of the activity and that supporting it is a legitimate ethical option [27]. Likewise, supporters want to separate themselves from other animal blood sports fans by emphasizing their respect for animals and conservationism. They indicate that this is shown by the fact that bullfighting allows producers to preserve the cattle breed and that maintenance of bulls for bullfighting contributes to the maintenance of a traditional pasture ecosystem [17]. The cattle breed is considered unique for bullfighting fans since this breed has been traditionally selected for particular characteristics and behavioral traits (i.e., aggressiveness, strength, and mobility) [28]. Certainly, fans deeply appreciate the qualities that the bull embodies, but according to our results, they often do not recognize the suffering of the animal during the event. Others have suggested that spectators are fully aware of the pain and suffering inflicted on the bulls, but that the pain and suffering do not matter to them because of a callous or hedonistic viewpoint [29].
According to our results, most spectators indicated that they started to attend bullfights before the age of 18. This coincides with information from other studies suggesting that many bullfighting fans grew up in family environments in which there was a fondness for bullfighting [20]. To prevent the harmful effects that viewing bullfighting could have on children, the United Nations recommends that those overseeing bullfighting spectacles prohibit the participation of children under 18 years of age in bullfighter schools and as spectators in bullfighting events. Witnessing a bullfight could result in psychological trauma as well as a reduction in moral judgement and empathy. Others have argued that another possible consequence is that children could become accustomed to violence and become apathetic later when confronted with a violent incident [30]. This seems unlikely in that children who grow up in conditions favorable to bullfighting are simultaneously embedded within a rational and democratic society [27]. From the perspective of bullfighting schools, they claim to teach tauromachic as well as desirable virtues, such as effort, discipline, perseverance, humility, loyalty, and love for traditions.
Our results concerning common social opinions are comparable to previous studies on this subject in Portugal. In a previous study in which 1064 people were interviewed by telephone, Monteiro et al. (2007) stratified responses according to origin and gender and determined that 51% of respondents were in favor of banning touradas, while 39% were opposed. The remaining 10% did not have a strong opinion one way or the other. In both studies, amongst men and people living in rural areas, the opinion regarding continuation of bullfighting tended to be more favorable. However, Monteiro et al. (2007) did not evaluate responses by age group [21]. In another study carried out in Spain, older men, those retired, and those of rural origin were identified as having the most favorable attitudes toward bullfighting [22]. Virtually all studies about animal activist group demographics have noted that women outnumber men among rank and file activists [31]. Research on the preponderance of women advocating for animal rights suggest that this is a result of women's socialization. It emphasizes a relational orientation of care and nurturing that extends to animals' and women's experiences with structural oppression that might make them more disposed to egalitarian ideology, which creates concern for animal rights [32]. Moreover, more men than women support animal research, hunt animals for recreation, and engage in animal cruelty [33]. A previous study went further and stated that bullfighting is a male-focused ritual and masculine values frame the entire event [34].
From a different viewpoint, given rural individuals' greater utilitarian attitudes toward animals, these people may view this activity as a function of costs and benefits, making it easier to justify the use of animals in entertainment, even if some animal suffering occurs [35]. In the present study, the responses from rural areas were more closely correlated to lower income levels, which could partly explain why in this income group positive attitudes appeared more frequently towards bullfighting, followed by the highest income groups. Previous studies indicated that younger age groups tend to show more concern for animals and animal welfare than older age groups. Additionally, older people showed higher levels of cultural conservatism, which encompasses the endorsement of traditional values [36]. The variable of age is also related to other variables, such as the professed religion or educational level, since young and middle-aged people more often tend to declare themselves non-practicing/agnostic and to have higher education levels [37]. Regarding religion, within the Iberian Peninsula, bullfighting still occurs at times within the scope of local or regional Catholic commemorations. Frequently before the fights, the bullfighter himself carries out a ritual closely linked to Catholic religious beliefs [38]. That is, after the ceremony of "dressing", the bullfighters are placed in front of a chapel. This domestic altar is made up of numerous stamps, medals, images, etc., that bullfighters have acquired during their visits to various sanctuaries or that have been given to them by family, friends, and followers. The bullfighter, while standing in front of these objects, prays for success in the arena. It has been stated that more religious people demonstrated less positive (less humane) attitudes toward animal treatment than did more liberally religious (or less religious) individuals [39]. Religiosity has also become associated with a conservative orientation toward politics, primarily based on a cultural conservatism encompassing traditional stances [40,41].
Regarding income level, the highest levels of approval for bullfighting were observed in those respondents with either the lowest or highest income levels, while those with intermediate incomes least supported the activity. Lower incomes were primarily found in rural areas, while those with the highest incomes have also been associated with a greater level of economic and cultural conservatism [42].
Interestingly, the percentage of veterinarians in cluster 2 (positive attitudes towards bullfighting) was higher than in the general population. It could be that amongst these individuals, responses were related to utilitarian arguments balancing the cost of entertainment for the public against suffering of relatively low numbers of animals and the generally good living conditions of these bulls versus the conditions for life and death for intensively-raised animals [43]. Additionally, many veterinarians may see bullfighting as an employment opportunity. Given the relevance of assuring that the bull is healthy and in perfect condition for the bullfight, veterinarians play an important role in the preparation and development of the show [44]. Despite these findings, there are anti-bullfighting activists in the veterinary sector (even leading associations against bullfighting), amongst veterinarians, and within veterinary faculties. Similarly, there were conflicting thoughts amongst the general population. While almost 85% of respondents indicated that they thought that the bull suffered during bullfighting, only 65% would ban bullfighting for animal cruelty reasons. Although most respondents indicated that they believed that the bull's capacity to suffer pain was equal to that of another animal or human, respondents corresponding to cluster 2 considered that the bull's capacity to suffer pain was less. It has been suggested that under conditions of extreme stress, production of endorphins and other metabolites may alleviate some part of perceived pain, but a reduction of pain would be replaced by marked distress or fear [31]. Even if one accepts that these bulls live better lives than other cattle raised for food production, this does not justify the distress and pain to which the bulls are subjected during the bullfight.
In Portugal, the largest number of bullfighting events are concentrated in the districts of Lisbon (the most populous city in Portugal and also the region with the most tourists) and Faro (another important tourist area) [2]. However, the districts in which the most favorable opinions were collected (Satarém, Évora, Beja, and Portalegre) are those with the greatest presence of bull breeders [45]. In these districts, the culture of bullfighting is probably more deeply rooted and because they are more rural, the population may tend to favor the preservation of primary economic activities. In Açores, and especially on the island of Terceira, there exists a particular type of bullfight (touradas a corda). In this case, the bull is led along a designated course by means of a rope tied around its neck while the bull is taunted and teased by players (called pastores) who have no intent to kill the animal.
A possible limitation of this study is that people that have a vested interest in the topic were more inclined to complete the survey [46]. The population interviewed may not be representative of the Portuguese population. When the study was conducted, the percentage of men in Portugal was 47% (39% in the studied population). In addition, the percentage of people over 65 years of age was 21%, whereas amongst the studied population (including even those over 57 years of age) it was only 7%. People living in rural areas in Portugal represented 35.3% of the population (vs. 24.9% in the studied population) and people with only primary education 46.3% (vs. 1.5% in the studied population) [24]. Men, and especially older and rural dwellers, are least likely to be connected with social media [47]. This also leads us to infer that the public opinion regarding bullfighting in the general population of Portugal could be somewhat more divided than observed, since men, older individuals, and those living in rural areas had more positive opinions about bullfighting in our survey.
Conclusions
In summary, the profile of individuals with more favorable responses to bullfighting were men, >65 years old, of Roman Catholic faith, of low-or high-income levels, and from more rural areas. Amongst veterinary professionals there was also a tendency to favor bullfighting compared to the rest of the Portuguese population. Favorable opinions also occurred more often amongst those living in the districts of Satarém, Évora, Beja, and Portalegre, and to a lesser extent in Açores. Women, those identifying themselves as agnostic or non-Roman Catholic, individuals with an intermediate income level, and those from more urban areas evinced more negative opinions about bullfighting. Although suffering of the bull during the bullfighting event was generally recognized, there was still division over banning bullfighting within Portuguese society, and general initiatives to ban bullfighting have not found widespread favor by the Portuguese government or its citizens.
Self-Other Asymmetry
In this paper, I present a non standard objection to moral impartialism. My idea is that moral impartialism is questionable when it is committed to a principle we have reasons to reject: the principle of self-other symmetry. According to the utilitarian version of the principle, the benefits and harms to the agent are exactly as relevant to the global evaluation of the goodness of his action as the benefits and harms to any other agent. But this view sits badly with the “Harm principle” which stresses the difference between harm to others and harm to the self. According to the deontological version, we have moral duties to ourselves which are exactly symmetrical to our duties to others. But there are reasons to believe that the idea of a duty to the self is not coherent.
Many philosophers deny that impartiality could be all there is to ethics. According to them, a morality limited to impartiality would be unrealistic, globally irrelevant to our lives and even repugnant in some cases.
I present another kind of objection to moral impartiality, less melodramatic if I may say. My idea is that moral impartiality is questionable when it is committed to a principle we have reasons to reject: the principle of self-other symmetry.
But what is self-other symmetry?
SELF-OTHER SYMMETRY
Self-other symmetry is a basic commitment in many moral theories, but it takes different forms depending on the global structure of the theory. One could say, for example, that utilitarian theories are based on self-other symmetry because, according to these theories, the benefits and harms to the agent are exactly as relevant to the global evaluation of the goodness of his action as the benefits and harms to any other agent 1 . And one could say that deontological theories of Kantian flavour are based on self-other symmetry because, according to them, we have moral duties to ourselves, which are exactly symmetrical to our duties to others. The famous second main formulation of the categorical imperative, in the Groundwork of the Metaphysics of Morals, also called the "Formula of Humanity", stresses this symmetry as explicitly as possible: Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means but always at the same time as an end 2 .
The clause "whether in your own person or in the person of any other" is a non-equivocal affirmation of self-other moral symmetry. Suicide or masturbation are "moral crimes" according to Kant, partly because of their supposed moral symmetry with killing and sexual abuse. Finally, one could even say that virtue ethics is based on selfother symmetry because it values equally care to others and self-care. Actually, this is Michael Slote's master argument in favour of virtue ethics 3 , and it raises a perplexity concerning the scope of my criticism of self-other symmetry.
SOME PERPLEXITIES
If virtue ethics is committed to self-other symmetry, as Michael Slote claims, and if virtue ethics does not belong to the class of impartialist moral theories, as some moral philosophers would probably say, then by objecting to self-other symmetry, the target could be larger than moral impartialism. It could include virtue ethics as well, or some versions of it at least. It could make my argument less limited than I have suggested. But there are other perplexities.
Self-other symmetry seems to be a very important feature of many moral theories, but, at the same time, one can find elements of self-other asymmetry in these theories. Think of the "Harm principle" put forward by John Stuart Mill. According to Mill: "The only part of the conduct of anyone which he is amenable to society is that which concerns others. In the part which merely concerns himself, his independence is of right absolute. Over himself, over his own body and mind, the individual is sovereign" 4 . In more concrete words, according to the Harm principle, one can be morally or legally permitted to do to oneself what one is not permitted morally or legally to do to others, the most striking example being, again, suicide as opposed to killing. The "Harm principle", as a kind of self-other asymmetry, goes against the utilitarian general commitment to self-other symmetry. One could have expected that Mill, being a prominent representative of utilitarianism, would also be a prominent supporter of self-other symmetry. But for him, the harms to the agent are not as relevant to the global evaluation of the goodness of his action as the harms to any other agent. So, he is not a supporter of self-other symmetry after all.
According to Michael Slote, Kant is also guilty of inconsistency in his treatment of self-other symmetry 5 . On the one hand, Kant claims that we should apply to ourselves exactly the same moral rules we apply to any other person. He argues not only for the wrongness or impermissibility of killing others but for the wrongness or impermissibility of suicide as well. And this is clearly a commitment to self-other symmetry. But for Michael Slote, this view sits badly with what Kant says about the absence of duties to pursue one's own happiness 6 . For Kant, the concept of duty applies only in case we are reluctant to do something. It does not apply when we do something inevitably and spontaneously. Now, if it is true that we tend naturally to care about our own interests and that we tend, as naturally, to neglect the interests of other people, then, in Kant's perspective, there should be no duties to further one's own interests but only duties to further the interests of other people. The fact that some of us may have a tendency to ruin themselves through stupidity, carelessness, laziness does not weaken the argument. Personal imprudence leads to its own natural punishment, while ill treatment of others does not come with its own natural punishment, except in fairy tales. If ill treatment of others it is to be controlled, it can only be through a sort of socialised system of duties and punishment.
According to Michael Slote, there is a contradiction in Kant's views about morality between his claim that we should apply to ourselves exactly the same rules we apply to any other person, and his other claim that there are no duties to further one's own interests but only duties to further the interests of other people. In other words, he supports both self-other symmetry and self-other asymmetry, and this is not coherent.
One could add that the disharmony is even bigger when we introduce degrees of closeness to ourselves in the picture. We tend naturally to care not only about our own interests, but about the interests of those who are dear and near to us. And if we have a natural tendency to further the interests of those who are near or dear to us, then strictly speaking, there could be no duty to further their interests.
If Kant's logic in this domain were followed, we could have moral duties only towards those we dislike or those who are personally the most distant from us. So there is an element of moral asymmetry in Kant's moral system after all.
Actually, I will not enter into these difficulties. I am not especially interested in testing the coherence of Mill's utilitarianism, of Kant's moral system or of Slote's version of virtue ethics. I just want to present reasons to reject some forms of self-other symmetry and other reasons to endorse some forms of self-other asymmetry.
FORMS OF SELF-OTHER ASYMMETRY
Self-other asymmetry expresses itself in different ways. The most well-known are 1. selfishness, which gives priority to the agent's own interests over the interests of other people; 2. selective altruism, which gives priority to the interests of those who are near and dear to the agent over the interests of the agent himself and the interests of those who are not near and dear to him; 3. radical altruism, which gives priority to the interests of every other person over the interests of the agent himself. But I will only defend self-other asymmetry when it takes the form of the harm principle according to which harm to self is morally indifferent. I will try to show that there is no good reason to turn away from common-sense moral thinking on this specific point. Whenever philosophers have tried to abandon this form of commonsense self-other asymmetry, whenever they have tried to line up the harm we do to ourselves with the harm we do to others, they ended up with what I take to be unsound concepts like "duties to oneself " or "self-regarding moral virtues", if not with absurd questions like "Is it possible for me to rescue myself?" or "Is it possible for me to compensate myself?" or "Is it possible for me to be grateful to myself ?" 7 . At least, this is what I will try to show.
My plan will be the following. First I will present the Harm principle as a form of self-other asymmetry grounded in common-sense moral thinking. Then I will examine and reject an argument to the effect that self-other asymmetry is not grounded in common-sense moral thinking but in the theoretical importance of consent. Finally, I will try to show how wrong one can go when one departs from common-sense views about selfother asymmetry, by presenting Kant's arguments regarding "duties to the self " as they are called.
WHAT IS "COMMON-SENSE MORAL THINKING"?
A few words should be said at this point about what I mean by "common-sense moral thinking". I am not referring to empirical sociolo-gical or psychological data about everyday moral judgements of different people from different countries and different backgrounds. Actually, I could have referred to data of this kind, because we have now an interesting body of knowledge about everyday moral judgments, thanks to the works of psychologists Jean Piaget, Lawrence Kohlberg, Carol Gilligan, Jonathan Haidt, Larry Nucci or Elliott Turiel among many others 8 .
But when I refer to "common-sense moral thinking" I am pointing to something less empirical. I am concerned with formal or substantial norms of moral thinking which are not always openly expressed in everyday judgements. What I have in mind are norms like "You should treat like cases alike", "You should not sacrifice innocents", etc. These norms are formulated by philosophers and presented as central or basic "moral intuitions". They are sometimes rhetorically attributed to everyone, whether real people have them or not.
What is philosophically special about them is that they are supposed to be moral assertions which need no further justification and to be valid objections to some big theories in ethics. Think of our supposed "intuition" that we should not sacrifice one innocent person even in order to save the lives of thousands of innocents, which is so often directed against some versions of utilitarianism.
IMPARTIALITY AND PARTIALITY IN ETHICS
The debate around impartiality and partiality in ethics has many different aspects and meanings. It may oppose philosophers who believe that, as far as morality is concerned, we should, a priori, have the same benevolent attitude toward every human being, from the nearest and dearest to the distant other, and philosophers who believe that it is morally permitted or even admirable to give some priority, systemically or in certain specific circumstances, to yourself, your friends, relatives, compatriots, etc., out of love, friendship, loyalty, quest for self perfection. In this context, we are offered two possibilities.
1. Impartiality I should have the same benevolent attitude toward every human being, myself included.
2. Partiality I am permitted to give some priority, systemically or in certain specific circumstances, to myself, my friends, relatives, fellow countrymen. But if we take into account the Harm principle as well, moral par-tiality itself should be divided in two classes: positive partiality based on benefits to ourselves and those who are near to us, and negative partiality which means that we are permitted to harm ourselves but not other persons In other words, we have three possibilities and not only two, the first involving self-other symmetry and the two others involving self-other asymmetry.
1. Impartiality I should have the same benevolent attitude toward everyone, myself included.
2. Positive partiality I am permitted to treat myself, my friends, my relatives, my fellow countrymen better than other people, systemically or in certain specific circumstances.
3. Negative partiality I am permitted to harm intentionally myself but not other persons.
If we dig deeper, we find other possibilities: 4. misanthropy: it is a kind of impartiality which consists in having the same malevolent attitude toward everyone, myself included. The problem is of course that such form of impartiality could hardly be called "moral". 5. self-hatred: it is a kind of negative partiality which consists in treating myself, my friends, my relatives, my fellow countrymen worse than other people. Blacks, Jews, and other representatives of minorities are sometimes claimed to exhibit this attitude. The question is again if such a form of negative partiality could be called "moral".
6. self-sacrifice: here we have a "supermoral" or "supererogatory" negative partiality which consists in a complete ban on selfbenefits .
I will insist on negative partiality, especially on this specific kind which permits harm to self but not to others.
MORAL ASYMMETRY IN COMMON-SENSE MORAL THINKING
Actually, I have borrowed the words "self-other symmetry" and "selfother asymmetry" from Michael Slote 9 . Sometimes, he uses the words "moral symmetry" instead of "self-other symmetry" and "moral asym- metry" instead "self-other asymmetry". For him, these expressions have the same meaning. I will follow him on this account as well. What he calls either "self-other symmetry" or "moral symmetry" is a system that brings together care to others and something like selfcare or reasonable selfishness. What he calls "common-sense moral asymmetry" has two aspects. It consists in self-abnegation or self-sacrifice, that is, in avoiding some personal benefits where we would not prevent another person from receiving the same benefit. But it consists also in permitting to harm ourselves where we would not be permitted to harm another person in a similar way 10 . When he comes to examine moral theories according to these standards he insists on the first aspect of common-sense moral asymmetry, that is, on self-abnegation or selfsacrifice. He thinks that common-sense and Kantian morality value only self-abnegation or self-sacrifice while utilitarianism and virtue ethics value equally care to others and self-care or reasonable selfishness. He claims that "the common avoidance of self-other asymmetry allows utilitarianism and virtue ethics to go forward securely in a manner that is not possible for either Kantianism or commonsense morality" 11 .
I have difficulties with both the classification and the conclusions.
First, it seems to me that through the second main formulation of the categorical imperative, Kantian moral system is so deeply committed to self-other moral symmetry that it is not very fruitful to insist on some elements of self-other moral asymmetry we can find in it 12 .
Second, although Mill's is utilitarian, this doesn't prevent him from proposing a Harm principle, which is clearly asymmetric,. And third, I think that Michael Slote should have treated in a completely different manner the two aspects of what he calls "commonsense moral asymmetry".
"Common-sense moral asymmetry" consists in avoiding some personal benefits where we would not prevent another person from receiving the same benefit, as well as in permitting to harm ourselves where we would not be permitted to harm another person in a similar way.
One can say that there can be something "moral" in avoiding personal benefits where we would not prevent another person from receiving the same benefit. But can one say, in a similar manner, that there can be something moral in permitting to harm ourselves where we would not be permitted to harm another person in a similar way? I don't think so. This second kind of asymmetry tells us something interesting about the limits we set to what can be called "moral" or "immoral". It seems to fall outside the scope of morality altogether.
Let me present some examples. Suppose that instead of cutting his own ear, Van Gogh had jumped on some innocent passer-by to cut the ear of this unfortunate fellow. It seems natural to me to say that common-sense moral thinking would treat these two cases differently. The harm done by Van Gogh to himself could only be called crazy, irrational but not immoral, while the harm done to the innocent passer-by could be called immoral as well. Now, let's compare the following statements.
1. You should read or practise some sport instead of spending all your days on the couch, watching stupid programs on television and stuffing yourself with chocolate cookies. I don't force you, I don't threaten you. I just tell you that it would be better for you.
2. You should read or practise some sport instead of spending all your days on the couch, watching stupid programs on television and stuffing yourself with chocolate cookies. I don't force you, I don't threaten you. I just tell you that so staying in front of the television would be immoral.
3. According to many witnesses, you have spent more than 30 days on a couch, watching stupid programs on television and stuffing yourself with chocolate cookies. By so acting, you have violated the law criminalizing unhealthy behaviour. You are fined 100 000 dollars and from now on, you will be under medical supervision at your own expenses.
Let's concentrate on the second statement. It seems to me that there is something queer, if I may say, in asserting "It is immoral to spend all your days on the couch, watching stupid programs on television and stuffing yourself with chocolate cookies" because it consists in blaming a person for harming himself or herself, and harming oneself is morally indifferent.
In order to prove that what's wrong with this judgement is that it concerns only harm to self, it could be sufficient to show that if it were formulated a little differently, with a hint to the fact that such behaviour would also harm, even indirectly, some other person, it would not seem queer at all. For example, the judgement "It is immoral to stuff yourself with chocolate cookies when you could save so many lives by sharing them with hungry little children", wouldn't seem queer, I believe, but it is, of course, because it directly implies other persons. So much for the idea that the self-other asymmetry is plausibly grounded in our common-sense moral thinking, as a sort of intuition which does not require justification.
Some philosophers have tried to ground self-other asymmetry in something "deeper", more "philosophical". They have tried to show that self-other asymmetry could be derivative from the moral importance of consent. Could it be the case?
GROUNDING COMMON-SENSE MORAL ASYMMETRY
In a footnote attached to his "Some Advantages of Virtue Ethics", Michael Slote writes that it has been suggested to him that the reason why we are allowed to harm ourselves where we would not be permitted to harm others in a similar way lies in the consent implicit in whatever we do to ourselves. If I harm myself I presumably do it willingly, whereas the agent I am harming does not consent to the harm 13 .
According to this explanation, the self-other asymmetry is not a substantial feature of common-sense morality but derivative from the moral importance of consent. Shelly Kagan has also observed that a defence of moral asymmetry could be grounded in the moral importance of consent. He writes that if what is in question is "only my treatment of myself, it is obvious that I will always be acting with the consent of the person I am affecting" 14 . But in what sense does consent affect my action? Well in case of harm, it does in a very special way. According to Kagan, normally, it is not permissible to perform an act if it would involve harming someone. There is a constraint against acting so. But when I deliberately harm myself, I have my own permission to do so. And for that reason, I am removed from the scope of the constraint 15 . The argument would be clearer if it were put in legal terms. We should make a difference between causing physical or psychological pain to someone as it may happen in medical procedures or in violent sports like ice hockey or boxing, that is with some sort of consent of the victim to the risks of being injured, and causing to someone some physical or psychological pain to which he has never consented in any sense. In both cases some pain has been caused but it is only in the second case, that is, where there was no consent, that one can speak of rights violations, torts, or harm in the moral sense. The common law doctrine "Volenti non fit injuria" means "one cannot be wronged by that to which one consents". It could be interpreted in the following way, I think. When there is tacit or explicit consent, there might be pain but no harm, no violation of rights, no wrong. The same could be said for harm to self. You can intentionally cause pain to yourself but you can't harm yourself intentionally, because where there is consent, there is no harm, in the sense of there being no wrong. Shelly Kagan thinks that the argument can obviously be resisted by those who deny the moral relevance of consent, and also, but less obviously, by those who do accept the relevance of consent but deny that in cases like those supposed to illustrate harm to self, true consent obtains.
When someone is about to commit suicide or self-mutilation can we say that he is really acting willingly? Can we say that he is really consenting? I think that we don't have to enter into these psychological and conceptual complexities because there is a better argument against grounding self-other asymmetry in consent. It has been proposed by Michael Slote, actually. According to him, common-sense morality makes a difference between negligently causing harm to another person and negligently causing harm to oneself. We keep making a difference between harm to self and harm to others even when the harm was not done intentionally or when there was no consent to the harm done. And this should bring us to think, in the most simple way, that after all, self-other asymmetry is not grounded in intention or consent.
I think that Michael Slote is right and that self-other asymmetry is not derivative from the moral importance of consent, nor reducible to it 16 . It should be taken as a basic moral intuition which can be used against traditional impartialist moral theories.
Until now I have presented some arguments to the effect that self-other asymmetry is a reasonable feature of common-sense moral thinking and need not be grounded in the moral importance of consent. Now, I will try to show how wrong one can go when one departs from this common-sense view about self-other asymmetry. I will present to this effect Kant's views on "duties to the self " or "duties to oneself " as they are more often called.
MORAL SYMMETRY
Kant "duties to self " are divided into perfect duties which concern us as natural beings and perfect duties which concern us as moral beings. Examples of perfect duties which concern us as natural beings are prohibitions on suicide, on masturbation and on excessive use of food and drink. Examples of perfect duties which concern us as moral beings are prohibitions on lying, avarice and servility. We have also imperfect duties not to leave our natural talents or capacities idle and, as it were, rusting away. All these duties to ourselves are presented as duties not to downgrade ourselves by denying our own humanity. They are duties not to harm ourselves. This is why I am especially interested in them. It has been recently noticed that, despite Kant's claim that duties to oneself are of primary importance in his system, his idea has not been widely discussed 17 . Actually, after Marcus Singer published a short and brilliant paper on duties to oneself in 1959, there has been an intense exchange of arguments on the subject 18 . But it didn't last, and I don't know why. The only thing I can say is that it is not due to the fact that Singer's replies were so good that nothing could be added to them. Nothing of that sort has ever happened in philosophy, especially in moral philosophy, and it would be almost supernatural that it had happened then! I am personally very interested in this issue because I am trying to build a sort of "minimalist ethics" in which there are no duties to self and no self-regarding virtues. I will return to this very shortly in my conclusion.
But before that, I will present what I take to be the best argument against duties to the self as Kant conceived of them. It aims to show that the notion of duty to the self is contradictory. Other arguments could have being raised as well. For example: 1. Duties to the self are not really duties toward the self as a particular person but duties toward abstract entities like nature or the human species. When, for example, Kant argues against masturbation, it is partly because according to him human species would disappear if it were a unique and universal sexual practice. So Kant's ban on masturbation is less a duty to oneself than a duty to the human species.
2. As moral duties, duties to the self are only derivative from duties toward other people. I may have a moral duty to keep fit, if for example it may help me care about other people: my lover, family, children, fellow citizens, etc. Another example: I may have a moral duty to stay sober if I am the pilot of the airplane, but it is a moral duty to the passengers more than to myself. 3. Duties to the self are not real duties but natural preferences. There can be no moral prohibition to ruin your own talents because no normal human being wants to ruin his own talents.
I insist on the formal argument because if duties to the self are inconceivable, we will naturally turn to these other arguments which are meant to show that talk about duties to the self is always talk about something else, something more conceivable, if not more acceptable.
The fact is that it has often been noticed that the notion of duties to oneself is conceptually problematic in a way that the notion of duties to others is not.
There are duties which can clearly be regarded as relative to other persons: duties arising from contracts, agreements, debts or promises for example, but duties to oneself do not seem to be that clearly conceivable. Why?
Against Kant who claimed that our duties to ourselves are of primary importance and should have pride of place and who added that the prior condition of our duties to others is our duties to ourselves 19 , Marcus Singer has argued, for example, that it is "actually impossible for there to be any duties to oneself in any literal sense because, if taken literally the idea involves a contradiction" 20 .
His argument goes like this. If in general "A has a duty to B, then B has a right against or with respect to A. But it follows from this that to have a duty to oneself would be to have a right against oneself and this is surely nonsense. What could it mean to have a right or a claim against oneself? (Could one sue oneself in a court of law for return of the money one owes to oneself)?" 21 Singer is right to stress the absurdity of these implications of the idea that one can have a duty to himself. But here, there is no obvious contradiction, just some kind of category mistake. The contradictory character of duties to self is more obvious when we consider the structure of promises or debts.
As a duty to another person a promise has the following features. Suppose you have promised to lend me your vacuum cleaner. You can, of course, break this promise, by hiding yourself, destroying the vacuum cleaner, or even by telling me that you have simply changed your mind: you realised that you don't like me after all and that you don't want to lend me anything anymore. But none of this will release you from your specific promise or cancel it. It will just be a promise which you have not kept. Actually you can't be released from your promise by yourself. However, you can be released from it by me, that is, by the one to whom you made it. I can cancel your promise because, for example, I realised that I am too lazy to vacuum my place, or because I realised that I don't like you after all and I don't want to feel I owe something or for whatever other stupid reason.
In short, the person who promised something cannot release himself from the promise, but the person to whom the promise has been made can release from his promise the person who promised. Now suppose that it is the same person who made the promise and to whom the promise has been made.
He is not bound by his promise and he is bound by it. He can release his promise and he can't. Isn't it contradictory? When we move from promises to debts, the contradiction is as obvious when it is the same person who is creditor and debtor. Suppose you lent me 20 000 dollars. I am under the obligation to return them to you. Of course you are free to tell me in a fit of generosity or vanity: "I decided to cancel your debt. You don't owe me 20 000 dollars anymore". But I am not free to answer. "Well, it doesn't matter. I didn't plan to return them to you. In the meanwhile, I had cancelled my debt." So the creditor is free to release the debtor from his obligation, but the debtor isn't free to release himself from the obligation. Now, suppose that it is the same person who is creditor and debtor. He is free to release himself from the debt ant not free to release himself from the debt. Isn't it contradictory?
One could deny that the model of the contract or the promise is relevant when we deal with duties to oneself. One may think that duties to the self belong to the category of duties you can't be released from, like duties not to torture or duties not to enslave 22 . But could all duties to the self be seen that way? I doubt it. An argument is obviously missing here.
In any case, although Kant claimed that "our duties to ourselves are of primary importance and should have pride of place" and added that "the prior condition of our duties to others is our duties to ourselves", he was quite aware that duties to oneself involve, at first sight at least, a contradiction.
In the Metaphysics of Morals, Kant insists on the foundatio-nal role of duties to oneself. He presents them as precondition of all obligation to others as well 23 . But at the beginning of the section where he develops this point, he mentions a puzzle about the possibility of duties to the self 24 : does the self not have to be active and passive at the same time, which is impossible? If the "one imposing obligation and the one put under obligation were identical, could not the former always release the latter -we ourselves-from an obligation?". Wouldn't it involve a contradiction? For Kant the contradiction is only apparent, because actually it is not the ordinary or phenomenal self, that is, the one governed by the laws of nature, who is placing himself under the obligation, but what he calls the noumenal self, the one that is endowed with inner freedom. It is only qua noumenal self that one can freely place oneself under an obligation. But as fantastic as the powers of the noumenal self might be, it is hard to see how they may allow him to have a duty to himself, if it is a contradictory notion. It might be that it is in order to block this objection that Kant appeals to a totally different argument. "For suppose there were no duties to oneself; then there would be no duties whatsoever, not even external duties" 25 .
At first sight, Kant tries here to make use of his notion of autonomy, that is the idea that we can freely give a law to ourselves, in order to show that duties to the self are conceivable 26 . But, as it has been noticed, "it would be surprising if the notion of autonomy and the notion of duty to oneself turned out to be identical" 27 .
In Kant's eyes, the idea that we can freely give a law to ourselves may ground moral duties to others as well as duties to the self. But from this it does not seem to follow that, for him, there is no difference between duties to others and duties to the self, or that the idea of autonomous lawgiving is identical with the idea of duty to the self. In any case, it has been suggested that we should take Kant's reasoning here as a reductio ad absurdum and read it: "If they were no duties to the self, there would not be duties at all. But there are duties, therefore there are duties to the self " 28 .
Marcus Singer calls this argument a blatant "non sequitur". I agree. It seems to me that we would still have external duties, that is, duties to others, even if we had no duties to ourselves. It might be that if we had no duties to ourselves, we could not fulfil our duties to others, but this is a totally different story. And even this story doesn't seem promising. After all, what we need in order to fulfil our duties to others is not to be able to respect our duties to ourselves but to have the motivation or the will to fulfil our duties to others. A number of other arguments have been raised against the idea of duty to the self. Bernard Williams, for example, claims that there is nothing more in the notion of duty to the self than the idea of selfinterest. For him, using the word "duty" is just a fraudulent and pernicious way of speaking of self-interest 29 . It could be a perfect example of the fallacies of modern moral philosophy. But of course, by mocking duties to the self, Williams is not at all pleading for some kind of self-other asymmetry. As a supporter of Ancient ethics, he will not deny that self-care has an ethical importance. I myself take the argument against duties to the self as a first step in a defence of common sense moral asymmetry, as it is expressed by the harm principle. The second step could be a criticism or self-regarding moral virtues like temperance or endurance. But I leave it for another occasion.
CONCLUSION
Instead of presenting as a conclusion what I have already said, I will try to push my questions a little further. Could we conceive of an impartialist theory which would make room for self-other asymmetry? 30 Actually, impartialist moral outlooks could make room for selfother asymmetry in a very simple way. The impartialist may say that if we reason from the impartial point of view, we will be brought to the conclusion that harm to self is morally indifferent. But why would we be brought to this conclusion? The claim would need further justification and it is not sure that it could be given. Another way to reconcile impartiality and self-other asymmetry seems to be more promising.
It consists in specifying first what can be called the "scope of morality". In this area, I would separate "maximalists" and "minimalists" 31 . I call "maximalists" those who give the largest scope to morality. They include in morality what we owe to ourselves, what we owe to others and what we owe to abstract entities like the needs of society or symbolic entities like the flag of the nation. By contrast, for "minimalists", the scope of morality is very narrow. It covers only what we owe to each other as individuals.
Once the scope of morality is fixed, our moral concepts apply in this limited area only.
For a maximalist, moral impartiality will apply to everyone, myself included and to many other things as well. A maximalist won't be able to be committed to common-sense self-other asymmetry and an impartialist at the same time. If he is an impartialist, he will have to deny common-sense self-other asymmetry.
For a minimalist, moral impartiality will apply only to our relations to other people. A minimalist will be able to be, at the same time, an impartialist in his relations to others and to be committed to common-sense self-other asymmetry.
I think this is a good point for moral minimalism.
|
2019-05-01T13:08:51.376Z
|
2018-04-13T00:00:00.000
|
{
"year": 2018,
"sha1": "01c43328a610eb933a9be44f80f29730b50a945b",
"oa_license": "CCBY",
"oa_url": "http://www.erudit.org/fr/revues/ateliers/2008-v3-n1-ateliers03571/1044607ar.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0bc61d972ed3d3a0103c296b279063a7aff87ffe",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
202777203
|
pes2o/s2orc
|
v3-fos-license
|
Human-Computer Interface for Handicapped People
An economical head operated mouse for individuals with disabilities. It focuses on the top operated mouse that employs one tilt sensing element placed within the telephone receiver to work out head position and to operate as straightforward headoperated computer mouse. The system uses measuring system based mostly tilt sensing element to detect the user's head tilt so as to direct the mouse movement on the computer screen. Clicking of mouse is activated by the user's eye brow movement through a sensing element. The keyboard operate is meant to permit the user to scroll letters with head tilt and with eye brow movement because the choice mechanism. Voice recognition section is additionally gift within the head section to identify the little letters that area unit pronounced by the unfit user.
INTRODUCTION
Owing to the shortage of acceptable input devices, individuals with disabilities usually encounter many obstacles once using computers. Currently, keyboard and mouse are unit the foremost common input devices. Because of the increasing quality of the Microsoft Windows interface, i.e., Windows ninety eight and NT, electronic device has become even intercalary vital. Therefore, it's necessary to create a straightforward mouse system for individuals with disabilities to control their computers. People with funiculus injuries (SCIs) and United Nations agency area unit unfit have progressively applied electronic helpful devices to boost their ability to perform sure essential functions. Equipment, that has been changed to learn People with disabilities embody communication and daily activity devices, and battery-powered wheelchairs. From our literature analysis there are unit several pc input devices area unit out there. Finger mounted device mistreatment pressure sensors, but no Hardware has been realised up to now and it wants physical quite interaction with system. a good vary of interfaces area unit out there between the user and device and therefore the interfaces are often enlarged keyboards or a fancy system that Allows the user management} or control a movement with the help of a mouth stick, However, for several individuals the mouth stick methodology isn't correct and cozy to use. an eye fixed imaged input system, Electrooculograpy (EOG) signals ,electromyogram (EMG) signals, encephalogram (EEG) signals area unit capable of providing solely a number of controlled Movements have slow interval for signal process and need substantial motor coordination. In infrared or ultrasound-controlled mouse system (origin instruments' head mouse and prentke romish's head master), etc. There are unit 2 primary determinants that area unit of concern to the user. the primary one being whether or not the transmitter is intended to aim at a good varyor not with relevancy receiver, the opposite one being whether or not the pointer of electronic device will move along with his head or not. These concerns increase the load for individuals with disabilities. Thus, various systems that utilize commercially out there natural philosophy to perform tasks with straightforward operation and simple interface management area unit painfully needed. the flexibility to control a electronic device has become additional vital to individuals with disabilities particularly because the advancement of technology permits a lot of and more functions to be controlled by pc. There are unit several reasons for individuals with disabilities to control a pc. As an example, they have to amass new data and communicate with the surface world through the web. Additionally, they have to figure reception, get pleasure from leisure activities, and manage several different things, like home searching and net banking. This analysis focuses on a tilt sensing element controlled electronic device. The lean sensors or inclinometers notice the angle between a sensing axis and a reference vector like gravity or the earth's field of force. Within the space of drugs science, tilt sensors are used principally in activity medication analysis. As an example, application of tilt sensors in gait analysis is presently being 2 investigated. Andrews et al. 
used tilt sensors connected to a floor reaction sort gliding joint foot orthosis as a training program supply via AN electrocutaneous show to boost bodily property management throughout useful electrical stimulation (FES) standing. Bowker and Heath counseled employing a tilt sensing element to synchronize leg bone nerve stimulation to the gait cycle of hemiplegics by watching angular rate. Basically, tilt sensors have potential applications of rising the talents for persons with different disabilities. The system uses MEMS accelerometers to notice the user's head tilt so as to direct mouse movement on the pc screen. Clicking of the mouse is activated by the user's eye-brow movement through a sensing element. The keyboard operate is intended to permit the user to scroll letters with head tilt and with eye brow movement because the choice mechanism. Voice recognition section is additionally gift within the head section to spot the tiny letters that area unit pronounced by the unfit user. the lean sensors will sense the operator's head motion up, down, left, and right, etc. consequently, the pointer direction are often determined.
II. LITERATURE SURVEY Human Computer Interaction is concerned with the way humans interact with technology. It deals with how humans work with computers and how computer systems can be designed to best facilitate the users in achieving their goals. There is a gap to be bridged, with computer technology on the one side, and the human operator on the other side. In these times we see rapid technological advancements, in terms of computer performance, ever-increasing telecommunication possibilities and new and improved interface devices such as lightweight LCD displays and magnetic and optical tracking devices. With the progression of technology over the years we have also seen improvements in the interface through which the user interacts with a computer system: the User Interface (UI). The first of what can be called the modern computers, such as the ENIAC required users to program the computer by connecting cables on a patch board. Later, a less cumbersome UI was possible in the shape of punch tapes or cards that were used as in-and output. The development of computer screens with the possibility of displaying text opened the way for more diversity in the UI, allowing the user to interact with the computer using a command driven or menu driven interface. As long as there have been user interfaces, there have been special software systems and tools to help design and implement the user interface software. These tools have demonstrated significant productivity gains for programmers, and have become important commercial products. Others have proven less successful at supporting the kinds of user interfaces people want to build. Design in Human Computer Interface is more complex than in many other fields. It is inherently interdisciplinary, drawing and influencing diverse areas such as computer graphics, software engineering and human factors. The developers task of making a complex system simple and sensible to the user is in itself a very difficult, complex task.
A. MPU 6050 (MEMS Tilt sensor)
We are using MPU6050 to trace the movement of the head. MPU6050 typically consists of 2 or more components. Listing them by priority, they are : accelerometer, gyroscope and magnetometer. The MPU6050 could be a 6 DOF (Degrees of Freedom) or a six axis detector, which means that it provides six values as output. 3 values from the accelerometer and 3 from the gyroscope. The MPU 6050 could be a detector based on MEMS (Micro Electro Mechanical Systems) technology. Both the accelerometer and the gyroscope is embedded within one chip. This chip uses I2C (Inter Integrated Circuit) protocol for communication. Accelerometer works on the principle of piezo electrical impact. Here, imagine a three-dimensional box, having atiny low ball within it. The walls of this box area unit created with piezo electrical crystals. Whenever you tilt , the ball is forced to maneuver within the direction of the inclination, due to gravity. The wall with that the ball collides, creates small piezo electric currents. There area unit whole, 3 pairs of opposite walls in an exceedingly cuboid. Each try corresponds to AN axis in 3D space: X, Y and Z axes. Depending on the present made from the piezo electrical walls, we are able to confirm the direction of inclination and its magnitude. Gyroscopes work on the principle of Coriolis acceleration. Imagine that there is a fork like structure, that is in constant back and forth motion. It is held in place using piezo electric crystals. Whenever, you try to tilt this arrangement, the crystals experience a force in the direction of inclination. This is caused as a result of the inertia of the moving fork. The crystals thus produce a current in consensus with the piezo electric effect and this current is amplified. The values are then refined by the microcontroller.
B. IR Sensor (Eye-Brow Sensor)
The eye brow sensor contains an IR LED at 900 nm. It shines invisible IR light on the user's eye and this light does not cause any harm to the user's eye. An IR 900 nm sensor is use to detect the reflected IR light when the user blinks his eye. This signal is given to the signal conditioning section then to the microcontroller for further processing.
C. Voice-Recognition Module
This Voice Recognition Module is a compact and easy-control speaking recognition board.This product is a speaker-dependent voice recognition module. It supports up to 80 voice commands in all. Max 7 voice commands could work at the same time. Any sound could be trained as command. Users need to train the module first before let it recognizing any voice command. This board has 2 controlling ways: Serial Port (full function), General Input Pins (part of function). General Output Pins on the board could generate several kinds of waves while corresponding voice command was recognized.
D. Micro-Controller (Arduino Leonardo)
The Arduino Leonardo is a microcontroller based on the ATmega32u4 . It has twenty digital input/output pins (of which seven can be used as PWM outputs and twelve as analog inputs), a 16 MHz crystal oscillator, a micro USB connection, a power jack, an ICSP header, and a reset button. It contains everything required to support the microcontroller; simply connect it to a computer with a USB cable or power it with a AC-to-DC adapter or battery to get started. The Leonardo differs from preceding Arduino boards in that the user-programmable ATmega32U4 AVR microcontroller has built-in USB functionality, eliminating the need for a secondary processor. This makes the Leonardo more versatile: in addition to supporting a virtual serial/COM port interface, it can appear to a connected computer as a mouse and keyboard.
IV. CONCLUSION
The main advantage of this project is to eliminate the disability for the handicapped people so that they can enjoy this world as a normal human being are enjoying. Those people can control or operate all the computer application by the gesture of their eye movement and the interactive application are done by their tooth click and also gaming, swapping, page scrolling, etc. are also done using their head movement by placing a MEMS (Micro-Electro Mechanical System). The complete replacement of wired communication It finds the solution to the disabled person to operate the computer fully with the enabled mode. The HCI (Human Computer Interface) is an evolving area of research interest nowadays. This project aims to be a convenient process for helping out the disabled to operate computers. These systems can also be used in other application like robotics efforts, in process to make the device cost effective and more complex thereby reducing the size. Thus we have developed a real hand free mouse. This project will be very effective and accurate using of both MEMS and eye blink sensors as a wireless mouse for future.
|
2019-09-17T03:00:19.566Z
|
2019-04-30T00:00:00.000
|
{
"year": 2019,
"sha1": "a91e58b778948f8e74fdb461d5fd86d0a20ec22e",
"oa_license": null,
"oa_url": "https://doi.org/10.22214/ijraset.2019.4334",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a8c2214a15eb1e8a8f735d695bbc2422de8d19e1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
119272905
|
pes2o/s2orc
|
v3-fos-license
|
Further insight into gravitational recoil
We test the accuracy of our recently proposed empirical formula to model the recoil velocity imparted to the merger remnant of spinning, unequal-mass black-hole binaries. We study three families of black-hole binary configurations, all with mass ratio q=3/8 (to maximize the unequal-mass contribution to the kick) and spins aligned (or counter aligned) with the orbital angular momentum, two with spin configurations chosen to minimize the spin-induced tangential and radial accelerations of the trajectories respectively, and a third family where the trajectories are significantly altered by spin-orbit coupling. We find good agreement between the measured and predicted recoil velocities for the first two families, and reasonable agreement for the third. We also re-examine our original generic binary configuration that led to the discovery of extremely large spin-driven recoil velocities and inspired our empirical formula, and find reasonable agreement between the predicted and measured recoil speeds.
Merging black-hole binaries will radiate net linear momentum if the two black holes are not symmetric. This asymmetry can be due to unequal masses, unequal spins, or a combination of the two. A non-spinning black-hole binary will thus only radiate net linear momentum if the component masses are not equal. However, the maximum recoil in this case (which occurs when the mass ratio is q ≈ 0.36) is a relatively small ∼ 175 km s −1 [23]. The complementary case, where the black holes have equal masses but unequal spins was first reported in [27] and [29]. In the former case the authors calculated the recoil velocity for equal-mass, quasi-circular binaries with equal-amplitude, anti-parallel spins aligned with the orbital angular momentum direction, while in the latter case the authors used the same general configuration but varied the amplitude of one of the spins. In both the above cases the authors extrapolated a maximum possible recoil (which is tangent to the orbital plane) of ∼ 460 km s −1 when the two holes have maximal spin. At the same time, our group released a paper on the first simulation of a generic black-hole binaries with unequal masses and spins, where the spins were not aligned with the orbital angular momentum [28]. That configuration had a mass ratio of 1:2, with the larger black hole having spin a/m = 0.885 pointing 45 • below the orbital plane and the smaller hole having negligible spin. The black holes displayed spin precession and spin flips and a measured recoil velocity of 475 km s −1 , mostly along the orbital angular momentum direction. It was thus found that the recoil normal to the orbital plane (due to spin components lying in the orbital plane) can be larger than the in-plane recoil originating from either the unequal-masses or the spin components normal to the orbital plane. The maximum possible recoil arises from equal-mass, maximally spinning holes with spins in the orbital plane and counter-aligned. This maximum recoil, which will be normal to the orbital plane, is nearly 4000 km s −1 .
In [28] we introduced the following heuristic model for the gravitational recoil of a merging binary.
A = 1.2 × 10 4 km s −1 [23], B = −0.93 [23], here we find H = (6.9 ± 0.5) × 10 3 km s −1 , α i = S i /m 2 i , S i and m i are the spin and mass of hole i, q = m 1 /m 2 is the mass ratio of the smaller to larger mass hole, the index ⊥ and refer to perpendicular and parallel to the orbital angular momentum respectively at the effective moment of the maximum generation of the recoil (around merger time),ê 1 ,ê 2 are orthogonal unit vectors in the orbital plane, and ξ measures the angle between the "unequal mass" and "spin" contributions to the recoil velocity in the orbital plane. The angle Θ was defined as the angle between the in-plane component of ∆ ≡ (m 1 + m 2 )( S 2 /m 2 − S 1 /m 1 ) and the infall direction at merger. The form of Eq. (2a) was proposed in [23,46], while the form of Eqs. (2b) and (2c) was proposed in [28] based on the post-Newtonian expressions in [47]. In Ref [48] we determined that K = (6.0±0.1)×10 4 km s −1 . Although ξ may in general depend strongly on the configuration, the results of [30] and post-Newtonian calculations show that ξ is 90 • for headon collisions, and the results presented here indicate that ξ ∼ 145 • for a wide range of quasi-circular configurations. A simplified version of Eq. (1) that models the magnitude of V recoil was independently proposed in [32], and a simplified form of Eq. (1) for the equal-mass aligned spin case was proposed in [29].
Our heuristic formula (1) describing the recoil velocity of a black-hole binary remnant as a function of the parameters of the individual holes has been theoretically verified in several ways. In [48] the cos Θ dependence was established and was confirmed in [37] for binaries with larger initial separations. In Ref. [36] the decomposition into spin components perpendicular and parallel to the orbital plane was verified, and in [41] it was found that the quadratic-in-spin corrections to the in-plane recoil velocity are less than 20 km s −1 .
Consistent and independent recoil velocity calculations have also been obtained for equal-mass binaries with spinning black holes that have spins aligned/counteraligned with the orbital angular momentum [27,29]. Recoils from the merger of non-precessing unequal mass black-hole binaries have been modeled in [32].
The net in-plane remnant recoil velocity arises both from the asymmetry due to unequal masses, which given its z → −z symmetric behavior, only contributes to recoil along the orbital plane, and the asymmetry produced by the black-hole spin component perpendicular to the orbital plane. Even if we can parametrize the contribution of each of these two components of the recoil in terms of only one angle, ξ, the modeling of it appears in principle very complicated. ξ may depend on the mass ratio (q) of the holes, as well as their individual spins S z 1 and S z 2 , but also on their orbital parameters such as initial coordinates and momenta, or initial separation and eccentricity. We clearly have to reduce the dimensionality of this parameter space as part of the modeling process. In order to do so, we shall choose a model for ξ that only depends on q and ∆ z for quasi-circular orbits. We then perform simulations to determine how accurately this reduced-parameter-space model for ξ reproduces the observed recoil velocities and find that ξ ≈ 145 • , independent of either q or ∆ z .
The paper is organized as follows, in Sec. II we review the numerical techniques used for the evolution of the black-hole binaries and the analysis of the physical quantities extracted at their horizons. In Sec. III we review the post-Newtonian dynamics of binary systems in order to motivate our study of equivalent trajectories for unequal mass, nonspinning and spinning holes. We focus on four families of such configurations. In Sec. IV we give the initial data parameters for these families. The results of the evolution of those configurations are given in Sec. V, where we also introduce a novel analysis of the trajectories of the punctures and of the waveform phase to model the angle ξ in our heuristic formula Eq. (1). In Sec. VI we analyze the generic configuration that led us to discover the large recoil velocities produced by the spin projection on the orbital plane of the binary. Here we use more refined tools to analyze the individual hole spins and momenta near merger time, when most of the recoil is generated. We end the paper with a Discussion section pointing out the need for further runs with higher accuracy to improve our first results, and an Appendix including the post-Newtonian analysis of the maximum recoil configuration.
II. TECHNIQUES
We use the puncture approach [49] along with the TwoPunctures [50] thorn to compute initial data. In this approach the 3-metric on the initial slice has the form γ ab = (ψ BL + u) 4 δ ab , where ψ BL is the Brill-Lindquist conformal factor, δ ab is the Euclidean metric, and u is (at least) C 2 on the punctures. The Brill-Lindquist conformal factor is given by where n is the total number of 'punctures', m p i is the mass parameter of puncture i (m p i is not the horizon mass associated with puncture i), and r i is the coordinate location of puncture i. We evolve these blackhole-binary data-sets using the LazEv [51] implementation of the moving puncture approach [2]. In our version of the moving puncture approach [2,3] we replace the BSSN [52,53,54] conformal exponent φ, which has logarithmic singularities at the punctures, with the initially C 4 field χ = exp(−4φ). This new variable, along with the other BSSN variables, will remain finite provided that one uses a suitable choice for the gauge. An alternative approach uses standard finite differencing of φ [3].
We use the Carpet [55,56] mesh refinement driver to provide a 'moving boxes' style mesh refinement. In this approach refined grids of fixed size are arranged about the coordinate centers of both holes. The Carpet code then moves these fine grids about the computational domain by following the trajectories of the two black holes.
We obtain accurate, convergent waveforms and horizon parameters by evolving this system in conjunction with a modified 1+log lapse and a modified Gamma-driver shift condition [2,57], and an initial lapse α(t = 0) = 2/(1 + ψ 4 BL ). The lapse and shift are evolved with These gauge conditions require careful treatment of χ, the inverse of the three-metric conformal factor, near the puncture in order for the system to remain stable [2,4,12]. In Ref. [58] it was shown that this choice of gauge leads to a strongly hyperbolic evolution system provided that the shift does not become too large. We use AHFinderdirect [59] to locate apparent horizons. We measure the magnitude of the horizon spin using the Isolated Horizon algorithm detailed in [60]. This algorithm is based on finding an approximate rotational Killing vector (i.e. an approximate rotational symmetry) on the horizon, and given this approximate Killing vector ϕ a , the spin magnitude is where K ab is the extrinsic curvature of the 3D-slice, d 2 V is the natural volume element intrinsic to the horizon, and R a is the outward pointing unit vector normal to the horizon on the 3D-slice. We measure the direction of the spin by finding the coordinate line joining the poles of this Killing vector field using the technique introduced in [8]. Our algorithm for finding the poles of the Killing vector field has an accuracy of ∼ 2 • (see [8] for details).
We also use an alternative quasi-local measurement of the spin and linear momentum of the individual black holes in the binary that is based on the coordinate rotation and translation vectors [39]. In this approach the spin components of the horizon are given by where φ i [ℓ] = δ ℓj δ mk r m ǫ ijk , and r m = x m − x m 0 is the coordinate displacement from the centroid of the hole, while the linear momentum is given by where ξ i [ℓ] = δ i ℓ . We measure radiated energy, linear momentum, and angular momentum, in terms of ψ 4 , using the formulae provided in Refs. [61,62]. However, rather than using the full ψ 4 we decompose it into ℓ and m modes and solve for the radiated linear momentum, dropping terms with ℓ ≥ 5. The formulae in Refs. [61,62] are valid at r = ∞. We obtain highly accurate values for these quantities by solving for them on spheres of finite radius (typically r/M = 25, 30,35,40), fitting the results to a polynomial dependence in l = 1/r, and extrapolating to l = 0. We perform fits based on a linear and quadratic dependence on l, and take the final values to be the average of these two extrapolations with the differences being the extrapolation error.
We obtain a new determination of H in Eq. (2b) using results from simulations performed by the NASA/GSFC [32], PSU [27], and AEI/LSU [41] groups. The simulations performed by these groups include runs with q = 1, and thus provide an accurate measurement of v ⊥ . We calculate H for each simulation (via X is the quantity to be averaged, n is some specified power, and δX i is the uncertainty in a particular measurement of X. Note that we weight H and H 2 by the same w i . We find H = (6895 ± 513) km s −1 . Figure 1 shows the values of H obtained from each simulation as well as the average value of H. We can see that based on the AEI/LSU data, which take into account the initial recoil at the beginning of the full numerical simulations, one could fit linear corrections to H. However, the deviations from H = const are only significant near D = q 2 /(1 + q) 5 (α 2 − qα 1 ) = 0, when the spininduced recoil is small (and hence the relative error in the spin-induced recoil is large). The absolute differences between the predicted and measured recoil velocities for the AEI/LSU results are within 20 km s −1 when we take H = 6895 km s −1 .
III. POST-NEWTONIAN ANALYSIS
In order to compare results from the recoil due to unequal masses and those due to spin effects as well, we will study systems with similar orbital trajectories. Since the radiated momentum due to unequal masses is a function of the orbital acceleration, these systems will all exhibit very similar unequal-mass contributions to the net recoil, which allows us to isolate the spin-induced contributions to the recoil. To generate families of binaries with similar trajectories we use the formulae for the leading order post-Newtonian accelerations and choose configurations that minimize the effects of the spins on the trajectories, but have non-negligible spin contributions to the net recoil.
The relative one-body accelerations can be written as [47] and ∆ ≡ m( S 2 /m 2 − S 1 /m 1 ), and an overdot denotes d/dt. The first four terms in Eq. (8) correspond to the Newtonian, first-post-Newtonian (1PN), second-post-Newtonian, and radiation reaction contributions to the equations of motion. The last two terms in Eq. (8) are the spin-orbit (SO) and spin-spin (SS) contributions to the acceleration.
The radiated linear momentum due to the motion of the two holes has the form [47] plus higher post-Newtonian terms [63], while the radiated linear momentum due to spin-orbit effects has the forṁ Note also that the spin-spin coupling does not contribute to the radiated linear momentum to this order. In order to best study and model how the final remnant recoil velocity depends on the mass ratio and spins, we will chose configurations with black holes spinning along the orbital angular momentum. In this way the orbital plane will not precess and we can write [47] and v =ṙn + rωλ, where L N ≡ µ( x × v) is the Newtonian orbital angular momentum,λ =L N ×n withL N = L N /| L N |, and ω = dφ/dt is defined as the orbital angular velocity. Taking into account that the velocity remains in the orbital plane, i.e. Eq. (13), we find that the spin-orbit acceleration (9e) is given by and the radiated linear momentum is given bẏ anḋ Note that if we take the scalar product of these two instantaneous radiated momenta we obtaiṅ (17) where f = 4rṙ 4 + (8m + 24r 3 ω 2 )ṙ 2 − r 2 ω 2 (4m + 25r 3 ω 2 ) and g(r,ṙ, ω) = 4ṙ 6 − (16m − 124r 3 ω 2 − r)ṙ 4 /r + (16m 2 +232r 3 ω 2 m+1225r 6 ω 4 +2r 4 ω 2 )ṙ 2 /r 2 +ω 2 (64m 2 + 800r 3 ω 2 m + 2500r 6 ω 4 + r 4 ω 2 ). The fact that the factor of ∆ z drops out of Eq. (17) suggests that ξ (which is the angle between the cumulative radiated linear momenta) will depend only weakly on the spins through their affects on the orbital motion. Binaries with similar orbital trajectories should therefore have similar values for ξ. Note that ξ may still be a strong function of trajectory and q.
We will now turn to the question of identifying a subset of physical parameters of the binary that produce similar trajectories for unequal-mass, non-spinning and unequalmass, spinning binaries in order to compare their recoil velocities and extract the spin contribution to the total recoil.
A. similar radial trajectories
An analysis of how ξ depends on configuration is greatly simplified if the trajectories of the spinning binaries are similar to the trajectory for a non-spinning binary with the same mass ratio. In order to accomplish this, we use the post-Newtonian expression for the spin-orbit induced acceleration Eq. (14), and choose configuration that minimize its effect.
The radial component of the spin-orbit induced acceleration will vanish if 5S z + 3 δm m ∆ z = 0. This leads to the condition whereα = qα 1 /α 2 can take any positive or negative value. However, if we consider the algebraic average over the range 0 ≤ q ≤ 1 at fixed F we find and thatα = α when q = 3/8 (independent of F ). We will thus study configurations with this mass ratio (which also produces a nearly maximum recoil velocity of ≈ 175 km s −1 for non-spinning unequal mass black hole binaries [23]).
Hence the first family of black-hole-binary configurations that we will study is given by the choice thus The total spin of the binary will in general be nonvanishing with I: Initial data parameters for quasi-circular orbits with orbital frequency ω/M = 0.05. All sets have mass ratio q = m H 1 /m H 2 = 3/8. The 'F' series has α = α2/α1 = −9/20 (hence F = qα1/α2(2q + 3) + 3q + 2 = 0), and the 'S' series has S = S1 + S2 = 0. The punctures are located along the x-axis with momenta P1 = (0, P, 0) and P2 = (0, −P, 0), and spins Si along the z-axis. We can also choose a configuration where the tangential component of the acceleration due to the spin-orbit coupling vanishes, i.e.
This translates into the condition when q = 3/8. Note that now, the radial acceleration, as parametrized by F , is non vanishing Thus, for q = 1, we cannot make both the radial and tangential components of the spin-orbit acceleration vanish at the same time by a simple choice of physical parameters of the binary.
IV. INITIAL CONFIGURATIONS
We choose quasi-circular initial configurations with mass ratio q = m 1 /m 2 = 3/8 from four families of param-eters that we will denote by Q, F, S, and A. The Q-series has initially non-spinning holes, the F-series has F = 0 (See Eq. (18)); hence zero PN spin-orbit-induced radial acceleration, the S-series has total spin S = 0; hence zero PN spin-orbit-induced tangential acceleration, and the A-series has neither F = 0 nor S = 0; hence both spin-obit-induced accelerations are non-vanishing. The puncture masses were fixed by requiring that the total ADM mass of the system be 1 and that the mass ratio of the horizon masses be 3/8. The initial data parameters for these configurations are given in Tables I and II. We obtained initial data parameters by choosing spin and linear momenta consistent with 3PN quasi-circular orbits for binaries with mass ratio q = 3/8 and then solve for the Bowen-York ansatz for the initial 3-metric and extrinsic curvature. This method was pioneered by the Lazarus project [64] (See Fig. 35 there), and then used in the rest of the breakthrough papers [6,7,8,39,48,62] by the authors (in Ref. [28] we used the PN expressions for the radial component of the momentum as well).
V. RESULTS
We evolved all configurations given in Tables I and II using 10 levels of refinement with a finest resolution of h = M/80 and outer boundaries at 320M except configuration A +0.9 , where we used an additional coarse level to push the outer boundaries to 640M . In all cases, except where noted otherwise, we set the free Gamma-driver parameter in Eq. (3c) to η = 6/M [2,57].
In a generic simulation both the unequal mass and spin components of the recoil are functions of the trajectory. To single out each individual effect we perform runs cho- sen to follow similar trajectories. In order to compare recoil velocity directions between these runs we need to rotate each system so that the final plunge (where most of the recoil is generated) occurs along the same direction. We do this in two ways. First, as demonstrated in Fig. 2, we plot the puncture trajectory difference r = r 1 − r 2 (where r i (t) is the coordinate location of puncture i at time t) for each case and rotate the trajectories by an angle Φ track so that they all line up with the Q 38 trajectory during the late inspiral and merger phases. Note that by taking the differences between trajectories we remove effects due to the wobble motion of the center of mass. Second, we measure the phase of the dominant (ℓ = 2, m = 2) mode of ψ 4 at the point of peak amplitude and take half the phase difference between each case and Q 38 (a rotation of φ about the z-axis will introduce a phase difference of −2φ in the m = 2 components of ψ 4 ). We denote this latter rotation angle by Φ ψ4 . We get reasonable agreement between these two measures of the rotation angle (See Table III). This type of rotation may also be needed when comparing results from different resolutions of the same configuration (i.e. when the phase error, but not the amplitude error, is large). In Table IV we give the components of the recoil velocity for a set of Q 38 simulations with η = 2/M . This value of η leads to a poorer effective resolution than for our standard choice of η = 6/M . Consequently there is a relatively large phase error in the low resolution results. After performing the rotation, the recoil velocities agree to within errors. Note that there is no rotation which will make the A +0.9 or A −0.9 trajectories line up with the Q 38 trajectory. In these cases the hangup-effect [6] due to spin-obit coupling significantly alters the trajectories (See Fig. 3).
Once we have found the correct rotation angle we ob- where V recoil is the measured recoil velocity, R[Φ] rotates V recoil by an angle Φ in the xy plane, and V Q38 is the recoil of the Q 38 configuration. Note that when α 2 − qα 1 < 0 we need to replace ξ by π − ξ in formula (26) since the coefficient v ⊥ in Eq. (1) is negative. We calculate two different values of ξ, ξ track and ξ ψ4 , based on the rotation angles Φ track and Φ ψ4 respectively. We obtain an additional measurement of ξ by solving for cos ξ using Eq. (1) and the measured values of the recoil magnitude. We denote this latter measurement of ξ, which is unaffected by rotations, by ξ Formula , where v m is given by Eq. (2a), v ⊥ is given by Eq. (2b), and v is the measured magnitude of the recoil velocity.
We summarize the results of our simulations in Tables V and VI. All configuration, with the exception of the 'A' series, have radiated energies in the range E rad /M = 0.021 ± 0.002 and radiated angular momenta in the range J rad /M 2 = 0.15 ± 0.01, which is consistent Fig. 2).
We obtain weighted averages for ξ for the 'F' and 'S' series of ξ track = (152 ± 9) • , ξ ψ4 = (143 ± 14) • , and The recoil velocities (prior to any rotation), radiated energy and angular momentum, and ξ for the 'S' and 'A' series. Note that although we report the calculated values for ξ based on Φ ψ 4 for the 'A' series, here ξ is not well defined because the unequal mass component of the recoil is not given by the Q38 recoil. ξ track is calculated using Φ track and Eq. (26), ξ ψ 4 is calculated using Φ ψ 4 and Eq. (26), ξ Formula is calculated from the given recoil magnitude using Eq. (27). | V pred track |, | V pred ψ 4 |, and | V pred avg | are the recoil velocities as predicted by Eq. (1) with ξ = ξ track , ξ = ξ ψ 4 , and ξ = ξ respectively. ξ Formula = (144 ± 7) • , where we use Eq. (7) to obtain the weighted average and uncertainty. These weighted averages are consistent with the measured values of ξ. The weighted average over all three measurements of ξ is ξ = (145 ± 10) • . Interestingly, ξ provides an accurate prediction for the recoil velocity of the A −0.9 configuration. This result is unexpected because the recoil due to unequal masses is a function of the mass ratio and the trajectories (i.e. the accelerations of the masses over time). For the 'F' and 'S' configuration the trajectories are very similar to Q 38 , and hence the unequal mass components of the recoil are expected to be very similar to Q 38 . However, the spin-orbit coupling induced hangup effect in both A +0.9 and A −0.9 greatly affects the trajectories (See Fig. 3), as well as the radiated energy and angular momenta. If we consider the radiated linear momentum averaged over an orbit, then we see that the slower the inspiral (i.e. the closer to a closed orbit), the smaller the average recoil. Hence we expect that A +0.9 will have a smaller unequal-mass recoil than Q 38 , while A −0.9 will have a larger one. To quantify how much the orbits close we take the average of r = r 1 − r 2 over the trajectory from the beginning of each simulation until | r| ∼ 0.1. The resulting averages | r | for the 'Q', 'F', 'S', and 'A' families are given in Table VII. The mean and standard deviation of | r | for the 'Q', 'F', and 'S' configurations is | r | = 0.865 ± 0.070. The A +0.9 and A −0.9 configuration lie 7.1σ and 7.6σ below and above this mean respectively, while the results for the other configurations lie within 1.4σ of the mean.
As seen in Fig. 4 the angle ξ appears, at least qualitatively, to be independent of ∆. This is in agreement with our post-Newtonian analysis in Eq. (17). It is also consistent with our intuition that similar trajectories imply similar angles between the unequal-mass and spin contributions to the recoil, and it seems that the small differences in the trajectories produce some scatter on the values, but this is apparently mostly due to the numerical error generated during the simulations. It would be interesting to use this value of ξ to model the recoil velocity distribution in galaxies.
VI. GENERIC EVOLUTION REANALYZED
In light of our new understanding about the modeling of the recoil velocity, we re-examine our original generic binary configuration, which we denote by SP6. The SP6 : ξ versus ∆/m = S2/m2 − S1/m1 as calculated in this work for a mass ratio q = 3/8 and from the data published by the NASA/GSFC group for a mass ratio q = 2/3 provided in Ref. [32]. We plot ξ track , ξ ψ 4 , and ξ Formula for the 'F' and 'S' configurations and ξ Formula for 'A' configurations. The thick horizontal line and the two thin horizontal line show the average value ξ and its uncertainty (as calculated in this work from our simulations). The data displays significant scatter, but appears to be consistent with ξ = const.
configuration has a mass ratio of q = 1/2 with the larger hole having specific-spin a/m = 0.885 and spin pointing 45 • below the orbital plane, and the smaller hole having negligible spin. We also evolved a similar configuration, which we will denote by SP6R, that is identical to SP6, but with the spin rotated by 90 • about the z-axis. We evolved both configurations using the same grid structure as in the previous section, but used η = 2/M rather than 6/M . This choice of smaller η has the effect of reducing the effective resolution, but makes calculations of the quasi-local linear momentum and spin direction more accurate (See Ref. [39]) by reducing coordinate distortions. The initial data parameters for the two configurations are given in Table VIII. The drop in effective resolution when reducing η from 6/M to 2/M is significant. In our simulations we found that a Q 38 , η = 2/M run with central resolution of M/100 had a slightly larger waveform phase error than an equivalent M/80 resolution run with η = 6/M , while an M/80 run with η = 2/M displayed a significant phase error. We have found in general that, with our choice of gauge, the coordinate dependent measurements, such as spin and linear momentum direction, become more accurate as η is reduced (and h → 0). However, if η is too small (η < ∼ 1/M ), the runs may become unstable. Similarly, if η is too large (η > ∼ 10/M ), then grid stretching effects can cause the remnant horizon to continuously grow, eventually leading to an unacceptable loss in accuracy at late-times. We have found that a value of η = 6/M provides both very high accuracy in the com- VIII: Initial data parameters for the SP6 and SP6R configurations. mp is the puncture mass parameter of the two holes. SP6 has spins S1 = (0, S, −S) and S2 = (0, 0, 0), momenta P = ±(Pr, P ⊥ , 0), puncture positions x1 = (x+, d, d) and x2 = (x−, d, d), masses m1 and m2, and MADM/M = 1.00000 ± 0.00001. SP6R has the same parameters as SP6 with the exception that S1 = (−S, 0, −S). puted waveform at modest resolutions, while keeping the remnant horizon size nearly fixed at late-times. We measure a net recoil of V recoil = 375±18 km s −1 and V recoil = 848 ± 20 km s −1 for SP6 and SP6R respectively.
The analysis of the recoil in SP6 and SP6R is complicated by the fact that the orbital plane precesses significantly during the merger. Thus, we cannot associate the xy components of the recoil with the in-plane recoil (as was done tentatively in Ref. [28]). In order to measure the precession of the orbital plane we need an accurate measurement of the orbital angular momentum. Here we use the approximate formula where r i is the coordinate location of puncture i and P i is the quasi-local momentum [39], given by Eq. (6), of black hole i. In Fig. 5 we show the orbital angular momentum of the SP6 configuration versus time. Note the rapid change in direction near merger (a common horizon was first detected at t = 207.4M ), and as seen in Fig. 6, most of the recoil is generated about 3M to 30M after merger (here we assume that waveform features seen at t = τ for an observer at r = 40M were generated by dynamics near the horizons at t ∼ τ − 40M ). This rapid change in direction has a strong effect on the computed recoil due to the cos Θ and cos ξ dependence of v recoil . That is, rapid physical changes in the orbital plane and spin direction, lead to relatively large errors in the direction (but not magnitude) of both the spin and orbital angular momenta when the resolution is below some threshold. This in turn, leads to relatively large errors in the measured recoil. Thus it is not surprising that this new calculation of the recoil velocity for SP6 is 100 km s −1 smaller than the value we reported in [28] (note that we used a higher effective resolution in [28], thus we expect those values to be more accurate). These large errors will not be observed in more symmetric binaries where either the spin or angular momentum axes are fixed.
We can obtain an approximate measurement of α and α ⊥ using Eq. (28) and the measured direction of the spin. This estimation is only approximate due to the coordinate dependent nature of both calculations. We find that for SP6, α and α ⊥ vary little over the course of the run with values at merger of α = −0.62 ± 0.03 and α ⊥ = 0.62 ± 0.03 (which are within errors of the initial values). However, the SP6R configuration does show a definite change in α over time, with merger values of α = −0.69 ± 0.03 and α ⊥ = 0.54 ± 0.03. We can use Eq. (1) to give estimates for the predicted recoil velocity if we make the following assumptions: (1) ξ = ξ , (2) Θ for SP6R is rotated by π/2 radians with respect to SP6, and (3) Θ 0 is the same for SP6 and SP6R. Given these assumptions and the above range of the values for α and α ⊥ , we can perform a nonlinear least-squares fit of the recoil magnitude for SP6 and SP6R to obtain Θ 0 . The resulting predictions for the recoil magnitude are V SP6 = (500 ± 60) km s −1 and V SP6R = (1120 ± 130) km s −1 . Both predictions are within 2σ of the actual measured values and have an absolute error of 32%. If we fix α and α ⊥ to their average values and vary our guess for ξ over the range (0, 360 • ), we find that the predicted values for V SP6 and V SP6R lie in the ranges (462, 495)km s −1 and (1048, 1120)km s −1 respectively.
The SP6 configuration demonstrated that the in-plane component of the spin can be the dominant contribution to the recoil. Given this observation, it becomes very important to accurately model this recoil. In Appendix A we derive a post-Newtonian model for the recoil produced by this in-plane component and show that it predicts the cos Θ dependence in our empirical formula.
VII. DISCUSSION
Interestingly, most of the recoil velocity imparted to the remnant is generated at around merger time (more precisely, as seen in Fig. 6, within the first few tens of Here the initial data burst is excluded from the calculation. Note that peak in dV /dt is located between t = 250M and t = 270M and occurs about 2M latter than the peak in |ψ4|. A common horizon was first detected at t = 207.4M , strongly suggesting that most of the recoil velocity is built up around merger time (since the observer is at r = 40M , features in the waveform at time t = τ originated near the horizon(s) at time t ∼ τ − 40M ).
Although an accurate modeling of ξ is challenging, starting from an ansatz that ξ = ξ(q, ∆), we have found that, for quasi-circular orbits, ξ is qualitatively independent of either ∆ or q for q = 3/8, q = 2/3 (based on the results of Ref. [32]), and q = 1/2 (based on SP6). Note that the ξ that we measure is consistent with a similar parameter introduced in Ref. [32], where they found ξ = 147 • (in our notation), based on a least-squares fit of the magnitude of the recoil versus a simplified version of Eq. (1). We know from the results for headon collision (where ξ = π/2), that ξ is a function of eccentricity. However, for quasi-circular orbits, it appears to vary only marginally with either q or ∆. Further long-term simulations with high-accuracy (including extrapolations to h → 0 and η → 0) and further separated binaries will be needed in order to obtain a highly accurate model for ξ. In particular, the η → 0 limit will be important because the recoil depends sensitively on the linear momenta and spin directions of the individual black holes near merger (where gauge effects are most severe), and hence we need to take the η → 0 limit in order to accurately measure α, L, and Θ. Nevertheless, our simple formula holds with enough accuracy for astrophysical applications. In particular we have seen that the determination of an average value for the angle ξ of 145 o seems to work not only for the F and S sequences, but also when we move off of these sequences towards more generic binaries. However, the formula should definitely be used with caution in an untested regime, especially when the trajectories are significantly altered by spin-orbit effects.
This clearly displays the fact that the recoil will be maximized when ∆ takes the maximum magnitude (equal mass and opposite maximally rotating black holes) and varies sinusoidally with its projection along the line joining the holes. Note that if we define the angle betweenn and ∆ as θ we can write the above equation as˙ P SO = A(r)|∆| cos θ + B(r)|∆| sin θ = C(r)|∆| cos(θ − θ 0 (r)).
This cos θ dependence in the recoil was the motivation for proposing the now-verified cos Θ dependence in our empirical formula Eq. (2c) for the recoil.
Note that this analysis applies to the radiated linear momentum flux. Hence we have assumed that the larger the radiated linear momentum flux, the larger the total radiated linear momentum.
It is also interesting to see if the unexpectedly large magnitude of the maximum out-of-plane recoil, compared to the in-plane recoil, can be understood using the post-Newtonian expression for the radiated linear momentum, i.e. Eqs. (15) and (A8) (See Ref. [38] for a similar analysis). To do this, we used the post-Newtonian formulae for the radiated linear momentum along with the numerical trajectories for runs with the spins in the plane and perpendicular to the plane. We found that the post-Newtonian formulae predicted that the maximum outof plane recoil will be approximately twice (almost 9/4) as large, rather than (the observed) ≈ 8 times as large, as the maximum in-plane recoil. Thus we see that the magnitude of the out-of plane recoil arises from nonlinear dynamics at merger not fully captured by the post-Newtonian formalism. One may then conclude that, while the post-Newtonian approximation gives the correct dependence of the recoil on the physical parameters, such as the scaling of the recoil velocities with the components of the spins parallel and perpendicular to the angular momentum, it is much less accurate when describing the amplitude of the recoils. Thus we find that post-Newtonian formalisms provides the correct form for our semi-empirical formula (1), but does not provide accurate measurements of the magnitudes of the constants in that formula.
|
2008-03-13T11:08:43.000Z
|
2007-08-30T00:00:00.000
|
{
"year": 2007,
"sha1": "c330fab10e660f85d025c628cc64e0387c13b48e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0708.4048",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c330fab10e660f85d025c628cc64e0387c13b48e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
237445245
|
pes2o/s2orc
|
v3-fos-license
|
Skin Strain Analysis of the Scapular Region and Wearables Design
Monitoring scapular movements is of relevance in the contexts of rehabilitation and clinical research. Among many technologies, wearable systems instrumented by strain sensors are emerging in these applications. An open challenge for the design of these systems is the optimal positioning of the sensing elements, since their response is related to the strain of the underlying substrates. This study aimed to provide a method to analyze the human skin strain of the scapular region. Experiments were conducted on five healthy volunteers to assess the skin strain during upper limb movements in the frontal, sagittal, and scapular planes at different degrees of elevation. A 6 × 5 grid of passive markers was placed posteriorly to cover the entire anatomic region of interest. Results showed that the maximum strain values, in percentage, were 28.26%, and 52.95%, 60.12% and 60.87%, 40.89%, and 48.20%, for elevation up to 90° and maximum elevation in the frontal, sagittal, and scapular planes, respectively. In all cases, the maximum extension is referred to the pair of markers placed horizontally near the axillary fold. Accordingly, this study suggests interesting insights for designing and positioning textile-based strain sensors in wearable systems for scapular movements monitoring.
Introduction
Patients suffering from shoulder musculoskeletal disorders (MSDs) may experience pain and reduced functional capacity [1,2]. The scapula, the bone linking the humerus with the clavicle, ensures proper alignment and the normal mobility of the glenohumeral and acromioclavicular joints [3]. A correct and coordinated scapular movement represents the key component in regular shoulder functionality. Alterations in scapular position and orientation, a condition known as scapular dyskinesis, characterize most shoulder MSDs, such as subacromial impingement syndrome, rotator cuff tears, frozen shoulder, or multidirectional instability [1,4]. The scapulothoracic joint is a functional sliding joint between the medial border of the scapula and the posterior thoracic ribcage, allowing the relative motion of the scapula on the thoracic surface below. The joint variables in sliding joints are the extensions between two sequential body segments [5,6].
Obtaining objective data of the scapular movements considering both different degrees of elevations and planes (e.g., frontal, sagittal, and scapular) could provide meaningful achievements in the context of rehabilitation and clinical research [7][8][9]. Recently, increasing attention has been directed toward understanding the complex scapula kinematics and providing monitoring systems that can quantify scapular movements [8,10]. To date, several axis along the scapular spine [31]. Translations include superior-inferior (elevation-depression) and mediolateral (retraction-protraction) motions of the scapulae over the posterior chest wall. Translational movements are permitted by the connection of the scapula to the axial skeleton through the clavicle [31]. Figure 1 illustrates the main scapular movements. The scapula serves as the location of various muscles' attachment [3]. Such muscles, having different sizes, functions, and depths, experience several stretching directions during upper limb movements in the different planes of the 3D space and at different degrees of elevation. Moreover, the scapula posteriorly is covered by overlying soft tissue, which in turn influences the superficial deformation of the scapular region. For all of these reasons, the skin deformations in the scapular region have dissimilarity in stretching position and magnitude during upper limbs motions. The main scapulothoracic muscles are the trapezius muscle, the serratus anterior muscle, the rhomboids, and the levator scapulae [3]. During active flexion and abduction of the shoulder, the trapezius act as scapular retractor, and the serratus anterior enables the upward rotation and protraction of the scapulothoracic joint [3,32,33]. The rhomboids and levator scapulae mainly contribute to the scapula's retraction, elevation, and internal rotation [3,10]. In Figure 2a,b, a schematic representation of the main lines of action of the aforementioned muscles is presented.
The axillary fold is located below the glenohumeral joint connecting the humerus to the glenoid fossa of the scapula. In addition to being the site of a certain amount of fatty tissue and connective tissue, the axillary region posteriorly borders with the latissum dorsi muscle (Figure 2c) and teres major muscle (Figure 2d). The latissimus dorsi muscle is part of the muscles of the scapular movements enabling inferior angle pulling in multiple directions. Indeed, its multidirectional muscle fibers allow shoulder adduction, extension, and internal rotation. Besides allowing the movements (internal rotation and extension) of the humerus at the glenohumeral joint, the teres major muscle contributes to the scapular upward rotation and elevation [3]. The scapula serves as the location of various muscles' attachment [3]. Such muscles, having different sizes, functions, and depths, experience several stretching directions during upper limb movements in the different planes of the 3D space and at different degrees of elevation. Moreover, the scapula posteriorly is covered by overlying soft tissue, which in turn influences the superficial deformation of the scapular region. For all of these reasons, the skin deformations in the scapular region have dissimilarity in stretching position and magnitude during upper limbs motions. The main scapulothoracic muscles are the trapezius muscle, the serratus anterior muscle, the rhomboids, and the levator scapulae [3]. During active flexion and abduction of the shoulder, the trapezius act as scapular retractor, and the serratus anterior enables the upward rotation and protraction of the scapulothoracic joint [3,32,33]. The rhomboids and levator scapulae mainly contribute to the scapula's retraction, elevation, and internal rotation [3,10]. In Figure 2a,b, a schematic representation of the main lines of action of the aforementioned muscles is presented.
The axillary fold is located below the glenohumeral joint connecting the humerus to the glenoid fossa of the scapula. In addition to being the site of a certain amount of fatty tissue and connective tissue, the axillary region posteriorly borders with the latissum dorsi muscle (Figure 2c) and teres major muscle (Figure 2d). The latissimus dorsi muscle is part of the muscles of the scapular movements enabling inferior angle pulling in multiple directions. Indeed, its multidirectional muscle fibers allow shoulder adduction, extension, and internal rotation. Besides allowing the movements (internal rotation and extension) of the humerus at the glenohumeral joint, the teres major muscle contributes to the scapular upward rotation and elevation [3].
Participants
In this study, five male volunteers (mean ± standard deviation: age-25.4 ± 3.8 years old; body mass-74.8 ± 9.6 kg; height-1.77 ± 0.11 m; body mass index-23.7 ± 1.9 kg•m −2 ) with no history of shoulder pathologies were recruited. All participants performed the experimental tasks with their dominant (right) limb. Before experimental sessions, all subjects read and signed an informed consent, approved by the Ethical Committee of University Campus Bio-Medico of Rome (protocol code: 09/19 OSS ComEt UCBM). Table 1 shows the age and main anthropometric characteristics of the subjects involved in the study.
Experimental Set-Up
A Qualysis™ Motion Capture system (Qualysis AB, Gothenburg, Sweden) equipped with 10 Miqus M3 cameras (sampling frequency, 100 Hz) and 2 Miqus Video (sampling frequency, 25 Hz) was used to track a 6 × 5 grid of spherical retro-reflective markers (diameter, 8 mm). All markers were positioned on the right scapular region by the same investigator to avoid bias. Firstly, three markers were positioned on three skeletal landmarks of the scapula, i.e., angulus acromialis, trigonum spinae, and angulus inferior, identified by surface palpation. Then, the remaining 27 markers were positioned to form the 6 × 5 grid covering the entire scapular region of each subject (Figure 3a). Figure 3b,c show an actual reconstruction of the grid of markers during a task performed by a volunteer representing the starting position and elevated position, respectively.
Participants
In this study, five male volunteers (mean ± standard deviation: age-25.4 ± 3.8 years old; body mass-74.8 ± 9.6 kg; height-1.77 ± 0.11 m; body mass index-23.7 ± 1.9 kg·m −2 ) with no history of shoulder pathologies were recruited. All participants performed the experimental tasks with their dominant (right) limb. Before experimental sessions, all subjects read and signed an informed consent, approved by the Ethical Committee of University Campus Bio-Medico of Rome (protocol code: 09/19 OSS ComEt UCBM). Table 1 shows the age and main anthropometric characteristics of the subjects involved in the study.
Experimental Set-Up
A Qualysis™ Motion Capture system (Qualysis AB, Gothenburg, Sweden) equipped with 10 Miqus M3 cameras (sampling frequency, 100 Hz) and 2 Miqus Video (sampling frequency, 25 Hz) was used to track a 6 × 5 grid of spherical retro-reflective markers (diameter, 8 mm). All markers were positioned on the right scapular region by the same investigator to avoid bias. Firstly, three markers were positioned on three skeletal landmarks of the scapula, i.e., angulus acromialis, trigonum spinae, and angulus inferior, identified by surface palpation. Then, the remaining 27 markers were positioned to form the 6 × 5 grid covering the entire scapular region of each subject (Figure 3a). Figure 3b,c show an actual reconstruction of the grid of markers during a task performed by a volunteer representing the starting position and elevated position, respectively.
Experimental Protocol
Volunteers were verbally instructed by the same investigator, who also provided a practical demonstration of each task to be performed.
During experimental sessions, the starting position was with the arms along the body and palms towards the thighs. Figure 4 illustrates the movements investigated during experiments.
Experimental Protocol
Volunteers were verbally instructed by the same investigator, who also provided a practical demonstration of each task to be performed.
During experimental sessions, the starting position was with the arms along the body and palms towards the thighs. Figure 4 illustrates the movements investigated during experiments.
Experimental Protocol
Volunteers were verbally instructed by the same investigator, who also provided a practical demonstration of each task to be performed.
During experimental sessions, the starting position was with the arms along the body and palms towards the thighs. Figure 4 illustrates the movements investigated during experiments. Task 6: 10 consecutive arm elevations in the scapular plane from starting position to maximum elevation.
All tasks were executed with the elbow fully extended and the thumb pointing upward. During each task, the same investigator guided the participants to perform the movements.
Motion Capture Data
The collected data were first pre-processed off-line using the Qualisys Track Manager (QTM) software (version 2021.1, Build 6300) for markers' labeling and trajectories gap filling by applying proprietary algorithms included in QTM software. All gap-filled trajectories were visually inspected. For further analysis, a process of manual identification of events corresponding to the starting and elevation positions reached by volunteers at each repetition was performed in QTM. Then, data of all subjects and executed tasks were exported to MATLAB (version 2020b). Markers' trajectories data were filtered using a low pass 4th order Butterworth filter with a cutoff frequency of 6 Hz. As there is no consensus on the directionality of deformation experienced in the scapular region during upper limb elevations, distances between pairs of markers were not calculated separately in the vertical and horizontal directions. Instead, distances between pairs of markers were computed by considering all possible combinations (i.e., 435) considering 30 elements (i.e., the number of markers) taken 2 at a time.
For each pair of markers, the distance D(i, j) between the i − th marker m(i) and the j − th marker m(j) was obtained as:
Skin Deformation Analysis and Statistics
For each pair of markers (i, j) with i = j, the skin relative strain variation ε k (i, j) at each k − th repetition was calculated using the following equation: where D(i, j) k and D(i, j) 0,k are the distances between the i − th and j − th markers corresponding to the k − th repetition at the elevated position and starting positions, respectively, and ∆D(i, j) k is the difference between the two mentioned distances. For greater clarity, Figure 5 shows the events corresponding to the starting position (light blue circle) and to the elevated position (green circle) for each repetition (in red). The mean percentage strain, ε%, was calculated as follows: A positive value of the mean strain ε% corresponds to the skin extension, while a negative value corresponds to the skin compression.
A positive value of the mean strain ̅ % corresponds to the skin extension, while a negative value corresponds to the skin compression. After calculating ̅ %, variations in skin strain were averaged among the five participants for each pair of markers. The descriptive analysis was performed by evaluating mean, median, standard deviation, minimum, and maximum strain. The Shapiro-Wilk test was used to assess the normality assumption of the data. If the Shapiro-Wilk test results were significant (p < 0.05), the nonparametric Wilcoxon rank-sum test was applied as a statistical method for strain comparison at 90° and maximum elevation in all planes. For all hypothesis tests, the p-value for significance was 0.05 for the rejection of the null hypothesis. Statistical analysis was performed in SPSS v28.0 (IBM, SPSS, Inc., Chicago, IL, USA).
Results
A total of 435 skin relative strain variations in the scapular region from 5 participants were analyzed during arm elevation in the frontal, sagittal, and scapular planes at 90° and maximum degree of elevation. During the elevation phase in all planes and at different degrees, some pairs of markers moved away, and others moved closer, suggesting extension and compression of the underlying scapular region, respectively. Figure 6 reports the distance trends of some pairs of markers during all tasks performed by a volunteer.
The Shapiro-Wilk test showed that strain distributions corresponding to different degrees of elevation were not normally distributed (Table 2). Moreover, the differences between strain at 90° and maximum elevation were significant, as shown by the results of the Wilcoxon rank-sum test (p < 0.05), see Table 2. Figure 7a reports the combination of box and violin plots to provide in a single representation the main features of strain distributions during the tasks performed in the frontal plane. The box plot allowed highlighting the mean value (represented by the asterisk), the median value (represented by the black horizontal line), and the interquartile After calculating ε%, variations in skin strain were averaged among the five participants for each pair of markers. The descriptive analysis was performed by evaluating mean, median, standard deviation, minimum, and maximum strain. The Shapiro-Wilk test was used to assess the normality assumption of the data. If the Shapiro-Wilk test results were significant (p < 0.05), the nonparametric Wilcoxon rank-sum test was applied as a statistical method for strain comparison at 90 • and maximum elevation in all planes. For all hypothesis tests, the p-value for significance was 0.05 for the rejection of the null hypothesis. Statistical analysis was performed in SPSS v28.0 (IBM, SPSS, Inc., Chicago, IL, USA).
Results
A total of 435 skin relative strain variations in the scapular region from 5 participants were analyzed during arm elevation in the frontal, sagittal, and scapular planes at 90 • and maximum degree of elevation. During the elevation phase in all planes and at different degrees, some pairs of markers moved away, and others moved closer, suggesting extension and compression of the underlying scapular region, respectively. Figure 6 reports the distance trends of some pairs of markers during all tasks performed by a volunteer.
The Shapiro-Wilk test showed that strain distributions corresponding to different degrees of elevation were not normally distributed (Table 2). Moreover, the differences between strain at 90 • and maximum elevation were significant, as shown by the results of the Wilcoxon rank-sum test (p < 0.05), see Table 2. From the analysis of Figure 7a is clear the greater dispersion of the ̅ % during the task at maximum elevation (in blue) than task up to about 90° (in yellow). For maximum elevation in the frontal plane, results of ̅ % showed a mean ± standard deviation equals to −0.36 ± 13.27, a median of −4.05, and an IQR of −8.10-3.58. For 90° of elevation in the frontal plane, the mean ± standard deviation was −0.46 ± 6.43, the median was −0.80, and the IQR was −3.39-1.86. The bigger extension of the IQR calculated during maximum extension confirms the higher dispersion in this task.
A similar analysis has been performed considering the ̅ % absolute values reported in Figure 7b. Such analysis allows comparing the ̅ % experienced during the two degrees of elevation by focusing on the skin strain's amplitude without discriminating between compression and extension. From the analysis of Figure 7b, it is clear that during the task From the analysis of Figure 7a is clear the greater dispersion of the ε% during the task at maximum elevation (in blue) than task up to about 90 • (in yellow). For maximum elevation in the frontal plane, results of ε% showed a mean ± standard deviation equals to −0.36 ± 13.27, a median of −4.05, and an IQR of −8.10-3.58. For 90 • of elevation in the frontal plane, the mean ± standard deviation was −0.46 ± 6.43, the median was −0.80, and the IQR was −3.39-1.86. The bigger extension of the IQR calculated during maximum extension confirms the higher dispersion in this task.
A similar analysis has been performed considering the ε% absolute values reported in Figure 7b. Such analysis allows comparing the ε% experienced during the two degrees of elevation by focusing on the skin strain's amplitude without discriminating between compression and extension. From the analysis of Figure 7b, it is clear that during the task at maximum elevation (in orange), the absolute value of ε% is bigger than the one up to about 90 • (in green). For maximum elevation in the frontal plane, the mean ± standard deviation was 9.90 ± 8.84, the median was 7.30, and the IQR was 4.02-12.56. For 90 • , the mean ± standard deviation was 4.46 ± 4.64, the median was 2.60, and the IQR was 1.21−6.23. These results highlight that skin strains in the scapular region are greatest during maximal abduction and are also confirmed by the maximum ε% value (i.e., 52.95% for maximum elevation vs. 28.26% for elevation up to 90 • ). The region that underwent maximum extension corresponds to the pair of markers 19-20 for both degrees of elevation in the frontal plane (Figure 7c,d).
at maximum elevation (in orange), the absolute value of ̅ % is bigger than the one up to about 90° (in green). For maximum elevation in the frontal plane, the mean ± standard deviation was 9.90 ± 8.84, the median was 7.30, and the IQR was 4.02-12.56. For 90°, the mean ± standard deviation was 4.46 ± 4.64, the median was 2.60, and the IQR was 1.21−6.23. These results highlight that skin strains in the scapular region are greatest during maximal abduction and are also confirmed by the maximum ̅ % value (i.e., 52.95% for maximum elevation vs. 28.26% for elevation up to 90°). The region that underwent maximum extension corresponds to the pair of markers 19-20 for both degrees of elevation in the frontal plane (Figure 7c,d). The region that underwent maximum compression during upper arm abduction corresponds to the first line of the grid for both degrees of elevation (Figure 7c,d). Table 3 reports the extreme values of ̅ % for both extension and compression during tasks performed in the frontal plane. Data confirm that skin strains are bigger during maximal abduction.
From the analysis of Figure 8a is clear the greater dispersion of the ̅ % during the task at maximum elevation (in blue) than task up to about 90° (in yellow) performed in the sagittal plane. For maximum elevation in the sagittal plane, results of ̅ % showed a mean ± standard deviation equals to −6.87 ± 14.62, a median of −3.96, and an IQR of −9.68-7.50. For 90°, the mean ± standard deviation was −5.37 ± 11.59, the median was −2.55, and the IQR was −3.77-12.56. Also in this case, from the analysis of Figure 8b is clear that during the task at maximum elevation (in orange) the absolute value of % is bigger than the one up to about 90° (in green). For maximum elevation in the sagittal plane, the mean ± standard deviation was 11.30 ± 9.28, the median was 9.10, and the IQR was 5.32-14.23. For 90°, the mean ± standard deviation was 9.40 ± 8.64, the median was 6.47, and the IQR was 3.39-12.86. The region that underwent maximum compression during upper arm abduction corresponds to the first line of the grid for both degrees of elevation (Figure 7c,d). Table 3 reports the extreme values of ε% for both extension and compression during tasks performed in the frontal plane. Data confirm that skin strains are bigger during maximal abduction.
From the analysis of Figure 8a is clear the greater dispersion of the ε% during the task at maximum elevation (in blue) than task up to about 90 • (in yellow) performed in the sagittal plane. For maximum elevation in the sagittal plane, results of ε% showed a mean ± standard deviation equals to −6.87 ± 14.62, a median of −3.96, and an IQR of −9.68-7.50. For 90 • , the mean ± standard deviation was −5.37 ± 11.59, the median was −2.55, and the IQR was −3.77-12.56.
Also in this case, from the analysis of Figure 8b is clear that during the task at maximum elevation (in orange) the absolute value of ε% is bigger than the one up to about 90 • (in green). For maximum elevation in the sagittal plane, the mean ± standard deviation was 11.30 ± 9.28, the median was 9.10, and the IQR was 5.32-14.23. For 90 • , the mean ± standard deviation was 9.40 ± 8.64, the median was 6.47, and the IQR was 3.39-12.86.
The maximum positive values were found to be 60.87% and 60.12% for maximum and 90 • of elevation, respectively ( Table 4). The region that underwent maximum extension corresponds to the pair of markers 19-20 for both degrees of elevation in the sagittal plane (Figure 8c,d). Unlike movements performed in the frontal plane, during upper arm elevations in the sagittal plane, the pairs of markers corresponding to the maximum compressive strain values were not distributed along the same direction (Figure 8c,d). Table 4 reports the extreme values of ε% for both extension and during tasks performed in the sagittal plane. The maximum positive values were found to be 60.87% and 60.12% for maximum and 90° of elevation, respectively ( Table 4). The region that underwent maximum extension corresponds to the pair of markers 19-20 for both degrees of elevation in the sagittal plane (Figure 8c,d). Unlike movements performed in the frontal plane, during upper arm elevations in the sagittal plane, the pairs of markers corresponding to the maximum compressive strain values were not distributed along the same direction (Figure 8c,d). Table 4 reports the extreme values of ̅ % for both extension and during tasks performed in the sagittal plane. Figure 9a shows strain distributions of the scapular region corresponding to the elevations performed in the scapular plane. For maximum elevation in the scapular plane, results of ε% showed a mean ± standard deviation equals to 0.32 ± 11.08, a median of −2.96, and an IQR of −6.56-4.39. For 90 • of elevation in the scapular plane, the mean ± standard deviation was 1.86 ± 7.94, the median was −0.60, and the IQR was −3.22-5.57. Figure 9a shows strain distributions of the scapular region corresponding to the elevations performed in the scapular plane. For maximum elevation in the scapular plane, results of ̅ % showed a mean ± standard deviation equals to 0.32 ± 11.08, a median of −2.96, and an IQR of −6.56-4.39. For 90° of elevation in the scapular plane, the mean ± standard deviation was 1.86 ± 7.94, the median was −0.60, and the IQR was −3.22-5.57. As in the two previous planes, from the analysis of Figure 9b is clear that during the task at maximum elevation (in orange) in the scapular plane, the absolute value of ̅ % is bigger than the one up to about 90° (in green). For maximum elevation in the scapular As in the two previous planes, from the analysis of Figure 9b is clear that during the task at maximum elevation (in orange) in the scapular plane, the absolute value of ε% is bigger than the one up to about 90 • (in green). For maximum elevation in the scapular plane, the mean ± standard deviation was 8.28 ± 7.36, the median was 6.05, and the IQR was 3.34-10.70. For 90 • , the mean ± standard deviation was 5.79 ± 5.74, the median was 3.87, and the IQR was 1.91-7.76.
The maximum positive values were 48.20% and 40.89% for maximum and 90 • of elevation, respectively ( Table 5). The region that underwent maximum extension corresponds to the pair of markers 19-20 for both degrees of elevation in the scapular plane (Figure 9c,d). As in the case of movements performed in the sagittal plane, also during the elevation of the upper limb in the scapular plane, the distribution of the pairs of markers corresponding to the maximum compressive strain values is not concentrated along the same row of the grid of markers. Even in this case, the region that underwent greater extension was the one surrounding the axillary fold, although along slightly different directions than in the other planes (Figure 9c,d). Table 5 reports the extreme values of ε% for both extension and compression during tasks performed in the scapular plane.
Discussion
Monitoring scapular movements may be useful in rehabilitation and clinical research. This study proposes a methodological approach to quantify scapular skin strain using a 6 × 5 grid of retro-reflective markers. We implemented this method for upper limb flexion in the sagittal plane, elevation in the scapular plane (scaption), and abduction in the frontal plane. This analysis may be fundamental for the development of some solutions able to monitor the scapular movements. Indeed, an open challenge in the development of wearable systems based on strain sensors is the proper placement of the sensing elements. To date, several textile-based strain sensors have been designed and employed to measure human joints movements [7,8,22,[34][35][36][37]. Among textile-based strain sensors, resistive ones are popular for instrumenting wearables [6,19]. These sensors are mainly composed of an elastic textile substrate and conductive materials, which undergo microstructural changes in response to an applied deformation resulting in electrical resistance variation in the sensing elements [6,7]. The textile component enables the integration into garments as adherent as possible or into polymeric substrates that could potentially be directly applied to the skin. The textile component allows the sensitive element to stretch and relax during movements, thanks to its elastic characteristics. One of the main requirements for developing wearable systems integrating textile-based strain sensors is that they should adhere perfectly to the surface of the body region of interest. Moreover, improper orientations of the sensors could negatively influence the sensitivity for joints movements detection. As regard scapular movements, the unreliable reading of textile-based strain sensors is further influenced by the simultaneity of translations and rotations that the scapula undergoes during upper limb movements. For this reason, identifying the areas in the scapular region that experience the greatest deformation could provide useful information about the design, integration, and placement of textile-based wearable strain sensors. In a previous study [30], skin strain field analysis in the region surrounding the shoulder joint was performed using three-dimensional image correlation technique. Shoulder abduction and flexion were investigated in a single volunteer, showing that the area that experienced more significant strains corresponds to that surrounding the axillary fold posteriorly, in accordance with our findings. Unlike our study, in [13], a grid of markers was placed on the scapular region to obtain a surface mapping from which to infer the scapular kinematics.
In the present study, the motion tracking data were used to provide the distribution of length changes in the posterior scapular region, calculated in terms of distance between all possible combinations of markers pairs. Strain distribution (Figures 7-9) shows interesting characteristics for all movements performed in all planes and degrees of elevation. Namely, the region with the highest extension was the area surrounding the axillary fold. Although this region corresponds to an area with a greater amount of underlying soft tissue, it also has a high number of muscles, which contract during arms elevation, inducing a corresponding surface deformation. Results showed a significant difference between elevation up to 90 • and maximum elevation for all the performed tasks. Concerning the positive strains (i.e., extension), the highest percentage positive strain was found to be: 28.26% and 52.95% for elevation in the frontal plane up to 90 • and maximum elevation, respectively; 60.12% and 60.87% for elevation in the sagittal plane up to 90 • and maximum elevation, respectively; and 40.89% and 48.20% for elevation in the scapular plane up to 90 • and maximum elevation, respectively. In all these cases, the maximum extension is referred to the pair of markers 19-20 placed horizontally near the axillary fold (see Figure 3). Conversely, the same generality of results cannot be applied to regions that underwent maximum compression. Although the regions subjected to the greatest compression mostly correspond to the first rows of the marker grid (Figure 7c,d, Figures 8c and 9c,d), in some cases, the pairs of markers that experienced the greatest compression are arranged in different regions (Figure 8c,d and Figure 9c). The reason for these results is probably related to the anthropometric heterogeneity of the subjects involved in the experimental trials. This aspect is not of particular relevance in the design of wearable systems based on resistive textile sensors since they work better in extension than in shortening [34]. Therefore, the regions subjected to higher stretch values should be considered for the placement of textile-based strain sensors.
The absence of deep analysis on the skin strain in the scapular region is highlighted by the different positioning and number of resistive textile-based strain sensors used in wearable systems designed for monitoring scapular movements [7,8,22]. Although these studies showed promising results about monitoring scapular movements in healthy subjects and patients with musculoskeletal or neurological disorders, they all empirically placed the sensors on the scapular region.
Conclusions
In conclusion, this study proposed a new method for skin strain analysis of the scapular region. The method was used to estimate the skin scapular surface strain on five volunteers during upper limb movements of clinical relevance. This is the first study investigating skin deformation of the scapular region induced by arms elevation in different planes and at different degrees of elevation. The results suggested interesting insights for the integration and positioning of resistive textile-based strain sensors within wearable systems for monitoring scapular movements. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2021-09-09T13:21:18.313Z
|
2021-08-26T00:00:00.000
|
{
"year": 2021,
"sha1": "c4cd0cb3d5493a4e4d654754958f0b0440900f4d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/17/5761/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff0b4425d88bc34b144117b1ed4052ea4a465230",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
7120874
|
pes2o/s2orc
|
v3-fos-license
|
Dietary isoflavones alter regulatory behaviors, metabolic hormones and neuroendocrine function in Long-Evans male rats.
BACKGROUND: Phytoestrogens derived from soy foods (or isoflavones) have received prevalent usage due to their 'health benefits' of decreasing: a) age-related diseases, b) hormone-dependent cancers and c) postmenopausal symptoms. However, little is known about the influence of dietary phytoestrogens on regulatory behaviors, such as food and water intake, metabolic hormones and neuroendocrine parameters. This study examined important hormonal and metabolic health issues by testing the hypotheses that dietary soy-derived isoflavones influence: 1) body weight and adipose deposition, 2) food and water intake, 3) metabolic hormones (i.e., leptin, insulin, T3 and glucose levels), 4) brain neuropeptide Y (NPY) levels, 5) heat production [in brown adipose tissue (BAT) quantifying uncoupling protein (UCP-1) mRNA levels] and 6) core body temperature. METHODS: This was accomplished by conducting longitudinal studies where male Long-Evans rats were exposed (from conception to time of testing or tissue collection) to a diet rich in isoflavones (at 600 micrograms/gram of diet or 600 ppm) vs. a diet low in isoflavones (at approximately 10-15 micrograms/gram of diet or 10-15 ppm). Body, white adipose tissue and food intake were measured in grams and water intake in milliliters. The hormones (leptin, insulin, T3, glucose and NPY) were quantified by radioimmunoassays (RIA). BAT UCP-1 mRNA levels were quantified by PCR and polyacrylamide gel electrophoresis while core body temperatures were recorded by radio telemetry. The data were tested by analysis of variance (ANOVA) (or where appropriate by repeated measures). RESULTS: Body and adipose tissue weights were decreased in Phyto-600 vs. Phyto-free fed rats. Food and water intake was greater in Phyto-600 animals, that displayed higher hypothalamic (NPY) concentrations, but lower plasma leptin and insulin levels, vs. Phyto-free fed males. Higher thyroid levels (and a tendency for higher glucose levels) and increased uncoupling protein (UCP-1) mRNA levels in brown adipose tissue (BAT) were seen in Phyto-600 fed males. However, decreased core body temperature was recorded in these same animals compared to Phyto-free fed animals. CONCLUSIONS: This study demonstrates that consumption of a soy-based (isoflavone-rich) diet, significantly alters several parameters involved in maintaining body homeostatic balance, energy expenditure, feeding behavior, hormonal, metabolic and neuroendocrine function in male rats.
Of these three main classifications, human consumption of isoflavones has the largest impact due to its availability and variety in food products containing soy. Furthermore, the phytoestrogens principally derived from soy foods have received prevalent usage due to their 'health benefits' of decreasing: a) age-related diseases (cardiovascular & osteoporosis), b) hormone-dependent cancers (e.g. breast & prostate) and c) postmenopausal symptoms [2][3][4][5][6]. However, little is known about the influence of dietary (soy-derived) phytoestrogens on neuroendocrine, hormone and metabolic parameters. In spite of this fact, the Food and Drug Administration (FDA) in the United States in October of 1999 authorized the use of-on food labelsthe health claim that: soy protein can reduce the risk of coronary heart disease by lowering blood cholesterol levels (when included in a diet low in saturated fat and cholesterol) [5].
The purpose of this study was to examine, in a comprehensive manner, important hormonal and metabolic health issues by testing the hypotheses that dietary soyderived phytoestrogens influence: 1) body weight and adipose deposition, 2) food and water intake, 3) metabolic hormones (i.e., leptin, insulin, T3 and glucose levels), 4) brain neuropeptide Y (NPY) levels, 5) heat production [in brown adipose tissue (BAT) quantifying uncoupling protein (UCP-1) mRNA levels] and 6) core body temperature. This was accomplished by conducting longitudinal studies where male Long-Evans rats were exposed (from conception to time of testing or tissue collection) to a diet rich in phytoestrogens vs. a diet low in phytoestrogens.
Animals
Long-Evans male and female rats [10 per sex at 50 days old] were purchased from Charles River Laboratories (Wilmington, MA, USA) for breeding. These animals were caged individually and housed in the Brigham Young University Bio-Ag vivarium and maintained on an 11-hour dark 13-hour light schedule (lights on 0600-1900). The animals and methods of this study were approved by the institute of animal care and use committee (IACUC) at Brigham Young University (BYU).
Treatment-Diets
Upon arrival all animals were allowed ad libitum access to either a commercially available diet with high phytoestrogen levels (Harlan Teklad Rodent Diet 8604, Madison, WI, USA) containing 600 micrograms of phytoestrogens/ gram of diet [or specifically this diet is high in isoflavones, 600 parts per million or ppm]; referred to hereafter as the Phyto-600 diet, or a custom phytoestrogen-free diet; referred to hereafter as the Phyto-free diet, obtained from Ziegler Bros. (Gardner, PA, USA) and water [7]. In the Phyto-free diet, the phytoestrogen concentrations were below the detectable limits of HPLC analysis [7]. The content and nutrient composition of these diets is described in detail elsewhere [7]. The diets were balanced and matched for equivalent percentage content of protein, carbohydrate, fat, amino acids, vitamins and minerals, etc. [7]. Circulating phytoestrogen serum levels from rats maintained on these diets (lifelong) have been reported previously by our laboratory using GC/MS analysis [7]. The animals were time mated within their respective diets so that the offspring of these pairings would be exposed solely to either the Phyto-600 or Phyto-free diet. Parameters were measured and/or the male rats were sacrificed and blood and tissues collected mainly at 33, 55 or 75 days of age; other ages were tested where indicated. Serum was prepared and stored at -20°C until assayed for metabolic hormones. For this study serum isoflavone levels are shown in Figure 1 from animals at 75 days of age. Male rats were only examined in this study since the influence of the estrous cycle on several of the measured parameters is unknown.
Weight measurements
Body weights and food intake were measured on a Mettler 1200 balance [in grams (g) ± 1 g; St. Louis, MO, USA], white and brown adipose tissue and prostate weights were measured on a Sartorious balance [in milligrams (mg) ± 1 mg; Brinkman Inst. Co., Westbury, NY, USA]. Water intake was measured in drinking tubes [in milliliters (ml) ± 1 ml]. White adipose tissue (WAT) was dissected inferior to the kidneys and superior to the testes in the abdominoplevic cavity (representing a majority of intra-abdominal WAT) and then weighed in grams ± 0.01 g. Brown adipose tissue was dissected from between the scapular blades (inter-scapular region) and weighed in milligrams (mg) ± 1 mg. [from arterial blood samples of 33 and 55 day-old male animals and venous blood samples collected from 75 dayold rats. This was due to exhausting the arterial supplies from the available blood samples for other assays and thus venous blood was assayed at 75 days of age]. Serum thyroid (T3) levels were assayed by a kit purchased from Diagnostic Systems Labs. Inc. (Webster, TX, USA) and glucose levels were detected by a kit (#510) purchased from Sigma Chem. Co. (St. Louis, MO, USA).
Hypothalamic NPY Levels
Subsequent to blood collection (above), after the animals were sacrificed, brains were removed rapidly, frozen on dry ice and then stored at -80°C until microdissection. Coronal slices 300 µm thick were sectioned on a microtome cryostat. The paraventricular nucleus, arcuate nucleus and median eminence regions of the hypothalamus were microdissected by punch technique and homogenized in 100 µl of 0.1 M HCl. Tissue protein was determined by the Lowry method [8] and NPY was measured using a solid-phase radioimmunoassay in Protein Gcoated 96-well plates, as described previously [9]. The NPY antiserum was used at a final concentration of 1:16,000. The sensitivity of the assay is 0.2 pg, with an intra-assay coefficient of variation of 8 %. All samples were run in duplicate in the same assay to avoid interassay variation.
Body temperature
Body temperature was monitored by radio telemetry by implanting a very small electronic chip [under the skin above the left thoracic cavity near the heart] that measured and transmitted core body temperature (± 0.1°C) to a notebook-sensor monitor (BioMedic Data Systems Inc., Seaford, DE, USA) within 2 seconds and repeated measurements were made throughout the day and/or the duration of the experiments.
Statistical Analysis
All data are presented as the mean ± SEM with p < 0.05 deemed significant. The data were tested by analysis of variance (ANOVA) (or where appropriate by repeated measures), followed by pairwise comparisons (via Neuman-Keuls analysis) to detect significant differences between the diet treatment groups (p < 0.05).
Body Weight, White Adipose Tissue Weight and Food/ Water Intake
When food and water intake was measured in young adult animals, surprisingly the Phyto-600 fed males displayed slight but significantly higher food ( Figure 2A) and water ( Figure 2B) consumption compared to Phyto-free fed males [for food intake: Phyto-600 = 24.3 vs. Phyto-free = 21.7 grams/day (p < 0.05) and for water intake: Phyto-600 = 37.7 vs. Phyto-free = 31.2 ml/day (p < 0.05).
The effects of dietary phytoestrogens on body weights in pre-, early adult and young adult age male rats are shown in Figure 3. At every age examined (i.e., 33, 55 and 75 days old), males exposed to the Phyto-free diet displayed significantly higher body weights (around 10-15%) compared to animals fed the Phyto-600 diet.
White adipose tissue (WAT) weights were not measured in 33 day-old animals, since relatively little fat deposition was observed in the abdominopelvic cavity (especially around the reproductive structures) at this age. However, at 55 and 75-days of age, males fed the Phyto-free diet displayed significantly higher white adipose tissue weights (approximately 50% greater) compared to Phyto-600 values ( Figure 4).
Circulating Leptin, Insulin, Glucose and Brain NPY Levels
In 33, 55 and 75 day-old male rats, circulating leptin and insulin levels were within the normal ranges (as described by the vendor's assay kit values), however, at each age males fed the Phyto-free diet displayed significantly higher leptin ( Figure 5A) and insulin ( Figure 5B) levels compared to Phyto-600 values. Notably, the leptin levels significantly increased with age that corresponded with significantly higher white adipose tissue deposition seen in these animals.
Since leptin plays an important role in regulating brain NPY levels that in turn influences food/water intake, NPY levels were determined in three hypothalamic regions [i.e., the periventricular nucleus (PVN), median eminence (ME) and the arcuate nucleus (ARC)] in 75 day-old males exposed to the diet treatments. In the PVN and ARC (but not the ME) NPY levels were significantly higher (by approximately 40 %) in Phyto-600 fed males vs. the Phyto-free male values ( Figure 6).
Circulating Thyroid (T3), UCP-1 mRNA Levels and Core Body Temperature
In non-fasting young adult rats at 65 and 110 days of age, circulating thyroid (T3) levels were determined from venous blood samples. Phyto-600 fed males displayed significantly higher T3 levels compared to Phyto-free fed Finally, when core body temperatures were recorded during a 24-hour interval, Phyto-free fed males displayed, in general, slight but significantly higher values compared to Phyto-600 animals during the dark phase of the light/dark cycle when rodents are most active (Figure 8). However, during one time point during the dark cycle (3 am) and one time point during the light phase of the cycle (3 pm) Phyto-600 males displayed slight but significantly higher core body temperatures vs. Phyto-free values.
Discussion
Estrogen is known to play a dual role in regulating body weight, food intake and adipose tissue deposition. On the one hand, estrogens decrease food intake, increase locomotor activity and hence decrease body weight [10,11].
Effects of Dietary Phytoestrogens on Body Weight in Male Long-Evans Rats fed either a phytoestrogen-rich (600) or a phytoes-trogen-free (Free) diet
Body Weight (grams)
However, adipose tissue deposition increases with puberty and early pregnancy in women, suggesting that estrogens influence body fat accumulation [12]. Additionally, in aging, estrogens promote adipose deposition and insulin resistance [13]. Conversely, results from aromatase, FSH and ER-knockout studies indicate that estrogens regulate adiposity where the complete lack of estrogens or blocking estrogen hormone action increases adipose tissue deposition [14][15][16][17][18], whereas, estrogen replacement in these models decreases adiposity. Notably, in the present study, male rats fed the Phyto-600 diet displayed significantly decreased adipose tissue and body weights compared to Phyto-free fed animals. While there is not extensive data on phytoestrogens and metabolism, other investigators have reported that genistein, increases lipolysis and decreases lipogenesis in rodent adipocytes [19] by a tyrosine kinase independent mechanism and these estrogen mimics inhibit glucose uptake by altering membrane-associated glucose transporters [20,21]. Thus, our data suggests that dietary soy phytoestrogens significantly decrease: 1) body and adipose tissue weights and 2) circulating leptin and insulin levels (that correspond with adipose deposition) compared to Phyto-free fed animals, implying that the hormonal action of phytoestrogens is beneficial to body fat regulation. Recent studies imply that insulin helps to regulate leptin expression in humans [22] and estrogens appear to enhance the action of insulin [23,24]. This may account for the decreased incidence of obesity in Asian countries where isoflavone consumption is high compared to Western countries. Decreased adipose It was previously observed in our laboratory that dietary phytoestrogens significantly alter food and water intake [7,25,26]. The differential effects of the Phyto-free vs. Phyto-600 diets observed in the present studies on hypothalamic NPY levels, circulating insulin and leptin concentrations and food intake are consistent with the well established interrelationships among these parameters. Thus, relative to animals maintained on the Phytofree diet, food intake was significantly increased in animals fed the Phyto-600 diet. Phyto-600-fed rats also exhibited higher concentrations of NPY in the arcuate and paraventricular nuclei of the hypothalamus. It is well established that NPY neurons whose perikarya reside in arcuate nucleus and project to PVN comprise an extremely important orexigenic neural pathway [27]. It therefore appears likely that at least one factor contributing to the higher food intake in Phyto-600-fed rats is the increased levels of NPY in this system.
The present studies also suggest a mechanism that may underlie the diet-induced effects on NPY (i.e., plasma insulin and leptin concentrations were significantly reduced in the Phyto-600 fed rats, relative to the Phytofree animals). A number of previous studies have demonstrated a reciprocal relationship between circulating Dietary Phytoestrogens Influence on Brain NPY Levels in 75 day-old Male Long-Evans Rats Figure 6 Dietary Phytoestrogens Influence on Brain NPY Levels in 75 day-old Male Long-Evans Rats. In the paraventricular (PVN) and arcuate (ARC) nucleus, males fed the Phyto-600 (600) diet displayed significantly greater NPY levels (* p < 0.05) compared to males fed the Phyto-Free (Free) diet. In the median eminence (ME) no significant differences were observed between male rats fed 600 vs. the Free diet. Thus, experimentally-induced reductions in either insulin [28] or leptin [29] are associated with increased pre-proNPY messenger RNA expression in arcuate nucleus and increased NPY levels in PVN, and moreover, it has been proposed that reductions in insulin and leptin that occur physiologically, e.g., with food deprivation, provide an important signal to the NPY system to initiate feeding [27,29]. Hence, taken together, the present findings sug-gest that by reducing secretion of insulin and/or leptin, chronic consumption of the Phyto-600 diet results in upregulation of the orexigenic NPY circuit in the hypothalamus, which in turn stimulates food intake (and water consumption, since rodents and humans display prandial characteristics).
While it is clear that thyroid hormone levels are influenced by estrogens where increases are seen in T3 and T4, presumably by increasing in the production of thyroid binding globulin in the liver [30], the published data examining thyroid function and hormone levels are problematic at best in the soy research field due to the history of soy food formulations, parameters examined and iodine deficiencies [6,31,32]. In agreement with more recent studies, our results demonstrate that circulating T3 levels increase with soy consumption [33], and "personal communication-Dr. David Baer-USDA". Furthermore, there appears to be a link between increased thyroid levels with soy consumption and cardiovascular protection in lowering serum cholesterol levels [6] and thyroid hormones along with estrogens protecting against osteoporosis [34]. However, in animals consuming the Phyto-600 diet (that displayed higher T3 levels) we observed a lower core body temperature compared to Phyto-free fed rats. In subsequent (unpublished) studies, we have consistently recorded slight (approximately 0.5°C) but significantly lower core body temperatures in Phyto-600 vs. Phyto-free fed rats during pregnancy. This suggests that the overall effect on body temperature via these estrogen mimics in the soy-rich diet may act primarily by increasing cutaneous vasodilation, thus decreasing core body temperature. Animal studies have shown that estrogens can act centrally (in the preoptic/anterior hypothalamus) or peripherally to regulate body temperature [35,36]. Support for this view is seen in humans where changes in skin blood flow via cutaneous vasodilation during the menstrual cycle and in hormone replacement therapy studies correspond with estrogen levels [36,37]. Also, one report showed that soy-derived phytoestrogens have a similar effect to our findings where ovariectomized rats fed a soy diet displayed an approximate 0.8°C decrease in skin temperature, whereas, estradiol treatment decreased temperature values by 1.4°C [38]. Finally, in association with temperature regulation, several studies have reported that soy consumption may be an effective therapy for relief of hot flushes in women [39]. Finally, the various metabolic parameters examined in a global fashion seem to suggest that declines in insulin and leptin levels are the dominant systemic regulators in regard to body weight, since overall the Phyto-600 animals weigh less compared to Phyto-free fed animals. However, the present findings also suggest that body temperature is reduced in Phyto-600 fed animals vs. Phyto-free fed animals and previous behavioral studies suggest that Phyto-600 animals exhibit more locomotor active vs. Phyto-free fed animals [7,50] (see summary Figure 9).
Uncoupling proteins (UCP-1 through UCP-5) are expressed in various tissues from many different species (mammals, birds, fish, insects and plants) that play important (but controversial) role(s) in the regulation of energy expenditure, or thermogenesis [40,41]. Uncoupling protein-1 is expressed mainly in BAT. When the influence of dietary phytoestrogens on UCP-1 mRNA levels in BAT was examined, Phyto-600 fed male rats, expressed significantly higher levels of the uncoupling protein (approximately 2-fold) compared to Phyto-free values (but BAT weights were significantly less in the Phyto-600 vs. Phyto-free fed males). To date, we are unaware of any studies that have investigated this aspect of soy consumption on thermogenesis. The decrease in BAT mass in Phyto-600 animals but increased expression of UCP-1 may represent a compensation mechanism for energy expenditure, and there are several neural inputs and hormonal factors that influence UCP-1 in BAT that make it difficult to differentiate the regulatory aspects of UCP-1 expression. For example, sympathetic denervation of inter-scapular BAT markedly reduced UCP-1 mRNA levels and estrogen, T3 and adrenergic agents [norepinephrine (NE)] stimulate UCP-1 expression in BAT [42,43]. In fact, it has been reported that T3 synergizes with NE to increase UCP-1 in BAT and stabilizes its mRNA transcripts [44]. These factors overlap with the changes seen in Phyto-600 fed vs. Phyto-free fed rats, in the present study, where T3 levels were increased and, presumably, along with the estrogenic influence of circulating isoflavones resulted in stimulating UCP-1 expression in BAT. Previously, we have not observed any significant alterations in circulating estradiol (or LH) levels in Phyto-600 vs. Phyto-free fed intact males [7]. Conversely, it has been reported that Dietary Phytoestrogens Influence on Core Body Temperature in 75 day-old rats [30]. Also, plasma leptin levels are thought to stimulate UCP-1 in BAT [45,46], results opposite, in general, to that obtained in the present study. Based upon the obtained data sets, it is difficult to identify a common stimulatory or inhibitory pattern for the expression of UCP-1 in BAT of soy fed animals and especially define a functional role for the physiological properties associated with these UCPs in thermoregulation. Therefore, it is reasonable to speculate that multiple factors act collectively to regulate UCPs in BAT that in turn contribute to adaptive changes in body temperature.
Conclusions
This study demonstrates that consumption of a widely used commercially available soy-based rodent diet, (i.e., the Phyto-600 diet rich in isoflavones), alters several hormonal, metabolic and neuroendocrine parameters involved in maintaining body homeostatic balance, energy expenditure and feeding behavior in male rats. Further research is warranted in examining the important aspects of the neuroendocrine and metabolic influences of dietary phytoestrogens via the consumption of soy in humans and laboratory animals. This is especially true when diet is usually not considered as an influencing factor in the experimental design [47][48][49][50].
|
2014-10-01T00:00:00.000Z
|
2004-12-23T00:00:00.000
|
{
"year": 2004,
"sha1": "638ccdea7e5c10077be1e806ea8559af889b6f80",
"oa_license": "CCBY",
"oa_url": "https://nutritionandmetabolism.biomedcentral.com/track/pdf/10.1186/1743-7075-1-16",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "638ccdea7e5c10077be1e806ea8559af889b6f80",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
18784432
|
pes2o/s2orc
|
v3-fos-license
|
Flexibility of Oral Cholera Vaccine Dosing—A Randomized Controlled Trial Measuring Immune Responses Following Alternative Vaccination Schedules in a Cholera Hyper-Endemic Zone
Background A bivalent killed whole cell oral cholera vaccine has been found to be safe and efficacious for five years in the cholera endemic setting of Kolkata, India, when given in a two dose schedule, two weeks apart. A randomized controlled trial revealed that the immune response was not significantly increased following the second dose compared to that after the first dose. We aimed to evaluate the impact of an extended four week dosing schedule on vibriocidal response. Methodology/Principal Findings In this double blind randomized controlled non-inferiority trial, 356 Indian, non-pregnant residents aged 1 year or older were randomized to receive two doses of oral cholera vaccine at 14 and 28 day intervals. We compared vibriocidal immune responses between these schedules. Among adults, no significant differences were noted when comparing the rates of seroconversion for V. cholerae O1 Inaba following two dose regimens administered at a 14 day interval (55%) vs the 28 day interval (58%). Similarly, no differences in seroconversion were demonstrated in children comparing the 14 (80%) and 28 day intervals (77%). Following 14 and 28 day dosing intervals, vibriocidal response rates against V. cholerae O1 Ogawa were 45% and 49% in adults and 73% and 72% in children respectively. Responses were lower for V. cholerae O139, but similar between dosing schedules for adults (20%, 20%) and children (28%, 20%). Conclusions/Significance Comparable immune responses and safety profiles between the two dosing schedules support the option for increased flexibility of current OCV dosing. Further operational research using a longer dosing regimen will provide answers to improve implementation and delivery of cholera vaccination in endemic and epidemic outbreak scenarios.
Introduction
As a disease of poverty and inequity, cholera is often prevalent in areas of compromised sanitation, overcrowded conditions, and poor quality of water supply. An increasing number of longer lasting outbreaks have dramatically impacted the least developed countries (LDCs), including those in Africa, South Asia, and the Hispaniola island region [1]. Living conditions in LDC populations often favor disease transmission and improvements can take a long time to achieve. In these settings, V. cholerae O1 can cause large, rapidly spreading severe outbreaks that cripple public health systems with already limited medical and financial resources. Many recent epidemics have occurred in highly susceptible and vulnerable populations (Haiti, Zimbabwe, Central and West Africa), where behavioral, social, and environmental factors, as well as lower background exposure to cholera have contributed to increased duration and severity of the outbreaks [2]. Effective interventions combining surveillance, treatment, and improving water, sanitation, and hygiene (WASH) measures are paramount. Vaccination can complement these preventive and control strategies in areas of endemic disease or areas at risk for outbreak [3]. Recently, a killed, bivalent oral cholera vaccine (OCV) has been prequalified and recommended for use by the WHO. Still, this OCV has not been widely implemented in endemic areas and its use is limited to areas with established or imminent outbreaks.
Safety and immunogenicity of this OCV has been demonstrated in Vietnam, India, and Bangladesh [4][5][6]. Seroconversion with serum vibriocidal antibodies following vaccination was found to be lower in hyper-endemic areas (India) compared to less endemic areas (Vietnam). When participants with only low baseline serum vibriocidal titers were analyzed in the Kolkata trial, seroconversion and geometric fold rise were similar in both populations [7]. A large phase three randomized clinical trial (RCT) of the two-dose, killed bivalent OCV demonstrated a cumulative 65% efficacy in endemic populations over five years [8]. Earlier studies with the cholera toxin whole cell O1 vaccine revealed protection for three years in adults and for 6-12 months in young children [9][10][11]. In Kolkata, a RCT evaluating immune responses of the bivalent killed whole cell OCV without cholera toxin B (Shanchol, Shantha Biotechnics Limited) in adults and children found robust responses to a first dose but no further rise following the second dose [12]. This observed blunted immune response following the second dose may be due to the increased LPS content, as compared with older versions of killed OCV. Proposed mechanisms of the blunted immune response include blocking of subsequent antibody production by the increased LPS or a booster like effect occurring after the first dose due to recurrent natural exposure in an endemic setting.
Some questions still remain unanswered with regards to the most optimal dosing regimen to assist the effective deployment of OCV in field conditions. An alternate 28 day interval could facilitate inclusion of OCV into a routine immunization schedule in cholera endemic regions. No significant difference in immune response following a four week schedule may further support the hypothesis of whether adequate protection can be offered by a single dose in endemic areas-this is currently being assessed in a large, placebo controlled RCT in Bangladesh. We aimed to assess if immune responses in a prolonged 28 day dosing interval is non-inferior to the standard 14 day schedule.
Study Design
This was a double-blind, RCT conducted at the Clinical Trials Unit of the National Institute of Cholera and Enteric Diseases (NICED), Kolkata, India. Recruitment, dosing and follow up were completed between January-December 2011. The study was performed in the cholera endemic urban slums of Kolkata with similar access to water, sanitation, and health care throughout the study area. Healthy males and non-pregnant females aged !1 year were recruited. Exclusion criteria consisted of serious chronic disease, pregnancy, immune-compromised conditions, gastrointestinal disease, antibiotic usage in the past 14 days, or previous receipt of cholera vaccine. Potential participants with acute illness or fever had dosing deferred pending recovery.
The objectives of this trial were to compare safety and serum vibriocidal antibody responses in participants receiving two OCV doses either 14 days or 28 days apart. The primary endpoint was the proportion of participants exhibiting four-fold or greater rises in serum vibriocidal antibody titers, 14 days following the second dose relative to baseline. Secondary endpoints included measurement of geometric mean titers of serum vibriocidal antibody at the above time points. Safety of the vaccine was also evaluated throughout the follow up period.
Ethics Statement
Written informed consent was obtained by study physicians for all adults and parents/guardians of participating children, as well as written assent for 11-17 year old participants. The trial protocol was approved by the Scientific and Ethics Committee of NICED and the International Vaccine Institute (IVI). Independent safety monitoring was conducted, with external monitoring & GCP audits performed by Shantha Biotechnics Limited. This trial was registered in India (CTRI/2010/091/002807) and clinicaltrials.gov (NCT 01233362). All data from trial volunteers used for analysis was anonymized.
Study Procedures and Definitions
The study vaccine (Shanchol) consisted of 600 ELISA units of LPS of V. cholerae O1 El Tor Inaba; 300 ELISA units of LPS each of V. cholerae O1 classical Ogawa, 300 ELISA units of LPS of V. cholera O1 Inaba and 600 ELISA units of LPS of V. cholerae O139. Placebo vials contained E. coli K12 cells, whose appearance was identical to the study vaccine. Dosing of the study agent was administered as in Table 1. Both placebo and vaccine were packaged as liquid formulations in identical vials containing 1.5 mL doses and were stored at 2-8°C. The study agent was given in two doses separated by a two week or a four week interval and administered by oral syringe, after which each participant was offered a cup of water. Participants were observed in the trials unit for 30 minutes following dosing, as well as for 3 days after each dosing. During each follow up day, study physicians conducted a structured interview regarding the participant's overall health and any occurrence of adverse events. Diarrhea was defined as three or more loose or liquid stools in a 24 hour period. Blood samples were obtained prior to the first dose and 14 days after each study agent dose. Sera were separated and stored at -70°C until paired testing was performed. The microtiter technique was used to detect serum vibriocidal antibodies to V. cholerae O1 El Tor Inaba, O1 Ogawa, and O139 [13].
Randomization and Masking
Participants were stratified by age group (1-5y, 6-10y, 11-17y, and !18y). Randomization numbers were generated in blocks of at least four, which included equal numbers of each arm, to ensure that balance between treatments was maintained. These lists were prepared by a statistician not involved in the study. Study agents were pre-labeled by Shantha personnel, who were not involved in the conduct or monitoring of the trial. All study staff and participants were blinded to treatment assignment for the duration of the study.
Statistical Methods
Sample size calculation was driven by seroconversion after two doses under 14 day and 28 day dosing intervals. Among participants, we assumed 45% seroconversion in adults and 80% in children after 2 doses. If the seroconversion rate in the 28 day dosing interval is no less than 20% than that in the 14 day dosing interval, it will be considered to be non-inferior. This threshold was selected based on seroconversion rates and their corresponding lower bounds of the one tailed 95% confidence interval from previous studies using the same vaccine in the same setting [5,12]. Assuming a one tailed α = 0.05, 80% power, a 15% drop-out rate, and using the score method of non-inferiority test [14], a total of 89 participants per study group had been considered. Thus, a total of 356 participants were targeted, 178 in each dosing regimen. Data were entered in a web-based data capture system and analyses were performed in SAS 9Á3 (SAS Institute, Cary NC). Analyses for comparisons of dichotomous outcomes such as adverse events and seroconversion were performed with the chi-square test or Fisher's exact test if cell counts were sparse. For comparisons of vibriocidal titers, Student's t-test was performed using the pooled or Satterthwaite method depending on whether the variances were equal or not. Nonparametric Wilcoxon rank-sum test and Kolmogorov-Simirnov test were performed when data were not normally distributed. Comparisons of the primary outcomes, vibriocidal seroconversion were evaluated with one-tailed 95% confidence intervals using the Wilson Score method [15]. Statistical evaluations of all other comparisons were two tailed.
Participant recruitment and baseline data
Recruitment of participant flow is illustrated in Fig. 1A total of 356 participants (178 children, 178 adults) were recruited from January 2010 to October 2011. Among eligible participants, 86/89 adults (96.6%) and 84/89 children (94.4%) in the 14 day interval arm and 84/89 adults (94.4%) and 82/89 children (92.1%) of the 28 day interval arm took all three doses of the assigned study agent and provided all four blood samples. A total of 20 participants (5.6%) were lost to follow up or were found to be ineligible to continue following study visit screening. There were no significant differences in demographic characteristics between intervention arms among each age group (Table 2).
Outcomes
All participants randomized in the study were included in safety outcome analysis. No statistically significant differences in the rates of adverse events between each intervention group were noted (Table 3). A total of 10 adverse events (AE) were reported within 3 days of either dose. The most commonly reported AEs were fever (n = 3), general ill feeling (n = 2), vomiting, diarrhea, and headache (with n = 1 each), with no statistically significant differences between children or adults. No serious adverse events were reported during the trial.
A per protocol analysis was conducted for immunogenicity data, including 336 participants who completed all planned study visits. Immune responses to V. cholerae O1 Inaba, O1 Ogawa, and O139 following administration of two doses of vaccine in a 28-day schedule were non-inferior to those of a 14 day schedule, as the difference measured was greater than the predefined cut-off of-20% (Tables 4,5). No significant difference between dosing schedules was observed in percentage of seroconversion after the first or second dose. Baseline vibriocidal geometric mean titers (GMT) to O1 Inaba ranged from 94 to 275 in adults and from 29 to 140 in children. The geometric mean fold (GMF) rise was higher in children (ranging from 7.5-26.9) than in adults (3.4-6.4). No statistically significant difference was noted between intervention arms in seroconversion or geometric fold rise. The GMF rise from baseline was higher for O1 Inaba in adults, after receipt of the first dose (6.8 and 8.9 respectively in the 14 and 28 day interval arms) compared to receipt of the second dose (4.6 and 4.7 respectively). In children the responses were more pronounced with GMF rise from baseline after first dose in both the arms being 29.7 and 20.8 respectively. The GMF rise after second dose was 17.5 and 10.7 respectively. Rise in titers to V. cholerae O1 were higher among individuals with lower baseline vibriocidal titers, as seen in Table 5.This magnified response was likely due to the lower baseline GMT observed in children, suggesting lower natural exposure. Adults with baseline GMTs lower than the median (<160) demonstrated high GMF rise (>10) and seroconversion (~85%) in both interval groups, which were markedly higher than adults with higher baseline GMTs (Table 6). Comparable results were noted in children, although median baseline titers were lower (80). There was a significantly higher GMF rise in children aged 1-5 years old in the 14 vs 28 day interval groups (34.7 vs 10, p = 0.01), though no significant difference in seroconversion was noted. This difference is most likely explained by the significantly higher baseline GMT detected in 1-5 year olds between the 14 and 28 day interval group (14.1 vs 69.6, p = 0.01, S1 Table). No other significant differences were noted in any other age group. When controlling for baseline GMT, a multiple linear regression model of log transformed titers did not find any significant difference between the two dosing intervals (-0.13 dosing interval effect comparing the 28 day interval to the 14 day interval, p = 0.33, S2 Table). Similar observations were also found for O1 Ogawa. Following the second dose, adults demonstrated GMFr of 4.1 with 45% seroconversion and 3.8 with 49% seroconversion to O1 Ogawa in 14 and 28 day interval groups (Tables 4, 5). Children exhibited GMFr of 11.1 with 73% seroconversion and 7.8 with 72% seroconversion. As with previous trials in Vietnam, India, and Bangladesh, immunogenicity against O139 was poor in both schedules [4][5][6].
Discussion
The results of our study support flexibility in dosing Shanchol in endemic settings, where strict schedules may be difficult to adhere to. As with any immunogenicity findings, vibriocidal antibodies do not truly reflect a protective response and, at best, are an indirect correlate of protection that is not absolute. While only a field trial can determine true effectiveness of altered dosing regimens, interpreting this data in light of the existing immunogenicity and clinical efficacy data in the same setting may provide a foundation for policy makers to ease implementation of OCV as part of a control strategy for cholera. Both schedules were well tolerated by all recipients with comparable safety profiles between either group. Our findings were compatible with previous studies that revealed that high baseline vibriocidal titers were associated with reduced post-vaccination serum vibriocidal antibody responses [5,16]. Higher baseline titers found among participants were most likely due to prior Geometric mean-fold rise from baseline to 14 days after first dose or from baseline to 14 days after second dose c # with !4 fold rise in titers from baseline to 14 days after first dose or from baseline to 14 days after second dose d 95% confidence intervals using Wilson Score method e Primary endpoint. Difference in seroconversion rates after second dose were calculated by subtracting 14 day interval from 28 day interval. The 28 days interval group is non-inferior to the 14 day interval group as the lower limit of the proportion difference is greater than pre-defined cut-off (-20%) exposure to V. cholerae since the area is cholera-endemic and the population had not earlier received cholera vaccine. The first dose of the vaccine may have elicited memory immune responses among previously exposed individuals resulting in a rise in vibriocidal titers with no further rises after the second dose. In children, the baseline vibriocidal titers were lower, suggesting lower earlier exposure in this age group. Lower baseline titers were associated with higher GMF rise increases following vaccination, with a higher percentage of responders in this age group, though the clinical significance of this finding is unclear. Although it is possible that these results reflect chance, a recurrent theme of immune differences in children under five years of age relating to OCV does occur, and it is possible that in this sub-population that there may be a difference between the two regimens.
Our study confirms earlier findings that the two-dose regimen of the killed whole-cell OCV is safe, well-tolerated, and immunogenic [17]. Vibriocidal responses to O1 Inaba were higher in both adults and children following the first dose, as compared to the second dose, with GMFr rises higher in children, likely related to the inverse relation of baseline serum titers Geometric mean reciprocal titers b Geometric mean-fold rise from baseline to 14 days after first dose or from baseline to 14 days after second dose c # with !4 fold rise in titers from baseline to 14 days after first dose or from baseline to 14 days after second dose d 95% confidence intervals using Wilson Score method mentioned above. Whether the lower responses to O139 indicate that the vaccine elicits poorer responses to O139 or if this reflects differences in assay sensitivity remains an aspect that needs to be explored with additional scientific data. V. cholerae O139 continues to be infrequently isolated from environmental samples but has not been responsible for any large outbreak in the past 10 years. The lack of circulating O139 strains could be a possible factor for the lower immune response to O139 antigen in the vaccine. Serum vibriocidal antibody responses were shown to be no higher following a second dose, when compared to levels after the first dose. This contrasts with the older generation killed OCV (Dukoral), for which serum titers increased further after the second dose [18]. The current reformulated killed whole cell vaccine (Shanchol) elicits higher serum vibriocidal responses than the older version of Dukoral. It exhibits no augmentation of these responses after the second dose as compared with the first, perhaps because it has an approximately two times higher LPS content than the older vaccine [4]. This marked difference in magnitude and pattern of immune responses motivated the current evaluation of whether extending the interval between doses has an impact on the vibriocidal response. While extending the dosing interval did not raise immune responses to the second dose, the mechanism behind this observed lack of boosting remains unclear. Since the vibriocidal antibody response does not truly reflect protection, and at best is an indirect correlate of protection, our immunogenicity results are not sufficient to support a hypothesis that a single dose regimen may confer similar efficacy as two doses. Nevertheless, the similarity of immune responses to shorter versus longer inter-dose intervals provides some reassurance that flexibility in dosing, particularly extending the intervals beyond 14 days, will not vitiate vaccine response. Comparable immune responses between different dosing regimen schedules would support additional uses of vaccination as part of a comprehensive strategy. In endemic settings, policy makers could entertain extending the dose interval to 1 month, which could ease delivery by facilitating national routine immunization strategies and linking OCV with other health interventions to populations in high risk regions. These results may be of particular interest in complex outbreaks, such as those seen following a natural disaster. A reactive vaccination strategy provides vaccine following a cholera outbreak to prevent further disease transmission with hopes of shortening outbreak duration. It relies on getting the first dose to affected populations as soon as possible. 
After the first dose is distributed, one month interval could allow the focus to return to stabilization of infrastructure and water sanitation. This is pertinent to a post disaster context in resource limited areas, which can be a common scenario for cholera outbreaks in both endemic and non-endemic areas (Indonesia tsunami, Haitian earthquake, Pakistan floods). Since this study was conducted in an endemic area and a population with pre-existing vibriocidal antibodies, the results may be different than what can be expected from nonendemic areas. Evaluations of a longer dosing interval in these settings are needed since immunogenicity and overall vaccine impact may likely be impacted by recurrent exposure, or 'natural boosting'. With no further rise in seroconversion rates after a second dose, efforts to evaluate a efficacy of a single dose regimen in a clinical field trial is underway to evaluate its potential use in an epidemic setting [19]. From a programmatic standpoint, additional exploration into serum and gut responses when spacing out the dosing interval even further may broaden our knowledge on public health benefits with regards to the amount and duration of clinical protection offered by this OCV.
Cholera remains a major global health concern and is an important threat to most developing countries, especially in areas where overcrowding and poor sanitation are common. Large outbreaks often involve populations affected by natural disasters or those displaced by war, where there is inadequate sewage disposal and contaminated water. In spite of current WHO support for use of OCV as part of a prevention and control package for cholera endemic areas, the international community is still exploring the best methods to implement these recommendations. Flexibility with the administration of two doses over one month could ease logistical requirements in a complex outbreak setting, allowing for stabilization of community infrastructure, as well as linking vaccination with other vital community interventions, resulting in the enhanced delivery of OCV. By demonstrating similar immunologic responses to different dosing regimens, with no additional safety risk, further operational research testing even longer inter-dose intervals could provide helpful answers to improve decision making to fill critical knowledge gaps for vaccination in endemic, epidemic, and outbreak scenarios.
|
2016-09-28T00:20:06.222Z
|
2015-03-01T00:00:00.000
|
{
"year": 2015,
"sha1": "64fcb07ecc0dc6917be72a84bff89fb6fccd6d71",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0003574&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64fcb07ecc0dc6917be72a84bff89fb6fccd6d71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
265976938
|
pes2o/s2orc
|
v3-fos-license
|
Stock Market Development and Economic Growth an Empirical Analysis
This study investigated the causal relationship between stock market development and economic growth for Greece for the period 1978-2007 using a Vector Error Correction Model (VECM). Questions were raised whether stock market development causes economic growth taking into account the negative effect of interest rate on stock market development. The purpose of this study was to investigate the short-run and the long-run relationship between the examined variables applying the Johansen co-integration analysis. To achieve this objective unit root tests were carried out for all time series data in their levels and their first differences. Johansen co-integration analysis was applied to examine whether the variables are cointegrated of the same order taking into account the maximum eigenvalues and trace statistics tests. Finally, a vector error correction model was selected to investigate the long-run relationship between stock market development and economic growth. A short-run increase of economic growth per 1% induced an increase of stock market index 0.41% in Greece, while an increase of interest rate per 1% induced a relative decrease of stock market index per 1.42% in Greece. The estimated coefficient of error correction term was statistically significant and had a negative sign, which confirmed that there was not any problem in the long-run equilibrium between the examined variables. The results of Granger causality tests indicated that there is a unidirectional causality between stock market development and economic growth with direction from economic growth to stock market development and a unidirectional causal relationship between economic growth and interest rate with direction from economic growth to interest rate. Therefore, it can be inferred that economic growth has a direct positive effect on stock market development while interest rate has a negative effect on stock market development and economic growth respectively.
INTRODUCTION
Stock market development has been the subject of intensive theoretical and empirical studies (Demirguc-Kunt and Levine, 1996;Levine and Zervos, 1998). More recently, the emphasis has increasingly shifted to stock market indexes and the effect of stock markets on economic development. Stock market contributes to the mobilization of domestic savings by enhancing the set of financial instruments available to savers to diversify their portfolios providing an important source of investment capital at relatively low cost. A well functioning and liquid stock market, that allows investors to diversify away unsystematic risk, will increase the marginal productivity of capital (Pagano, 1993).
Another important aspect through which stock market development may influence economic growth is risk diversification. Obstfeld (1994) suggests that international risk sharing through internationally integrated stock markets improves the allocation of resources and accelerates the process of economic growth. Fama (1990) and Schwert (1990) claim that there are three explanations for the strong link between stock prices and real economic activity: "First, information Science Publications AJEBA about future real activity may be reflected in stock prices well before it occurs-this is essentially the notion that stock prices are a leading indicator for the well-being of the economy. Second, changes in discount rates may affect stock prices and real investment similarly, but the output from real investment doesn't appear for some time after it is made. Third, changes in stock prices are changes in wealth and this can affect the demand for consumption and investment goods".
The model hypothesis predicts that economic growth facilitates stock market development taking into account the negative effect of interest rate on stock market development and economic growth.
This study has two objectives: • To examine the long run relationship among economic growth, interest rate and stock market development • To apply Granger causality test based on a vector error correction model in order to examine the causal relationships between the examined variables taking into account the Johansen co-integration analysis The remainder of the study proceeds as follows: Initially the data and the specification of the multivariate VAR model are described. For this purpose stationarity test and Johansen co-integration analysis are examined taking into account the estimation of vector error correction model.
Finally, Granger causality test is applied in order to find the direction of causality between the examined variables of the estimated model. The empirical results are presented analytically and some discussion issues resulted from this empirical study are developed shortly, while the final conclusions are summarized relatively.
Data and Specification Model
In this study the method of Vector Autoregressive Model (VAR) is adopted to estimate the effects of economic growth on stock market development through the effect of interest rate and credit market development. The use of this methodology predicts the cumulative effects taking into account the dynamic response among stock market index and the other examined variables.
In order to test the causal relationships, the following multivariate model is to be estimated as follows (Equation 1): Where: SM = The general stock market index R = The interest rate GDP = The gross domestic product Following the empirical study of King and Levine (1993) the variable of economic growth (GDP) is measured by the rate of change of real GDP, while the general stock market index is used as a proxy for the stock market development. The general Stock Market index (SM) expresses better the stock exchange market than other financial indices, taking into account the effect of interest Rate (R) (Κatos et al., 1996;Nieuwerburgh et al., 2006;Shan, 2005;Vazakidis, 2006;Thalassinos and Thalassinos, 2006;Thalassinos and Pociovalisteanu, 2007;Vazakidis and Adamopoulos, 2009;Adamopoulos, 2010).
The data that are used in this analysis are annual covering the period 1978-2007 for Greece, regarding 2000 as a base year and are obtained from international financial statistics yearbook (IMF, 2007). All time series data are expressed in their levels and Eviews econometric computer software is used for the estimation of the model.
Unit Root Tests
Economic theory does not often provide guidance in determining which variables have stochastic trends and when such trends are common among variables. If these variables share a common stochastic trend, their first differences are stationary and the variables may be jointly co-integrated.
For univariate time series analysis involving stochastic trends, Augmented Dickey and Fuller (1979); Phillips and Perron (1988) and Kwiatkowski et al. (1992) unit root tests are calculated for individual series to provide evidence as to whether the variables are integrated. This is followed by a multivariate co-integration analysis.
Following the study of Seddighi et al. (2000), Augmented Dickey-Fuller (ADF) test involves the estimation one of the following equations (Equation 2a,b,c) respectively: The additional lagged terms are included to ensure that the errors are uncorrelated. The maximum lag length begins with 2 lags and proceeds down to the appropriate lag by examining the AIC and SC information criteria.
The null hypothesis is that the variable X t is a nonstationary series (H 0 : β = 0) and is rejected when β is significantly negative (Ha: β<0). If the calculated ADF statistic is higher than McKinnon's critical values, then the null hypothesis (H 0 ) is not rejected and the series is non-stationary or not integrated of order zero I(0). Alternatively, rejection of the null hypothesis implies stationarity. Failure to reject the null hypothesis leads to conducting the test on the difference of the series, so further differencing is conducted until stationarity is reached and the null hypothesis is rejected (Dickey and Fuller, 1979).
In order to find the proper structure of the ADF equations, in terms of the inclusion in the equations of an intercept (α 0 ) and a trend (t) and in terms of how many extra augmented lagged terms to include in the ADF equations, for eliminating possible autocorrelation in the disturbances, the minimum values of Akaike Information Criterion (AIC) (Akaike, 1973) and Schwarz Criterion (SC) Schwarz (1978) based on the usual Lagrange Multiplier LM(1) test were employed. Phillips and Perron (1988) test is an extension of the Dickey-Fuller (DF) test, which makes the semiparametric correction for autocorrelation and is more robust in the case of weakly autocorrelation and heteroskedastic regression residuals. According to Choi (1992), the Phillips-Perron test appears to be more powerful than the ADF test for the aggregate data. Although the Phillips-Perron (PP) test gives different lag profiles for the examined variables (time series) and sometimes in lower levels of significance, the main conclusion is qualitatively the same as reported by the Dickey-Fuller (DF) test. Since the null hypothesis in the Augmented Dickey-Fuller test is that a time series contains a unit root, this hypothesis is accepted unless there is strong evidence against it. However, this approach may have low power against stationary near unit root processes.
Following the studies of Chang (2002); Vazakidis and Adamopoulos (2009) and Kwiatkowski et al. (1992) present a test where the null hypothesis states that the series is stationary. The KPSS test complements the Augmented Dickey-Fuller test in that concerns regarding the power of either test can be addressed by comparing the significance of statistics from both tests. A stationary series has significant Augmented Dickey-Fuller statistics and insignificant KPSS. According to Kwiatkowski et al. (1992), the test of ΚPSS assumes that a time series can be composed into three components, a deterministic time trend, a random walk and a stationary error based on Equation 3: where, r t is a random walk r t = r t-1 + u t.. The u t is iid (0, 2 u 0,σ ). The stationarity hypothesis implies that 2 u 0. σ = Under the null, y t , is stationary around a constant (δ = 0) or trend-stationary (δ≠0). In practice, one simply runs a regression of y t over a constant (in the case of level-stationarity) ore a constant plus a time trend (in the case of trend-stationary). Using the residuals, e i , from this regression, one computes the LM statistic as follows (Equation 3a, b): The distribution of LM is non-standard: The test is an upper tail test and limiting values are provided by Kwiatkowski et al. (1992), via Monte Carlo simulation.
To allow weaker assumptions about the behavior of ε t , one can rely, following Phillips (1987) on the Newey and West (1987) estimate of the long-run variance of ε t which is defined as follows (Equation 3c, d): where, w(s,l) = 1-s/(l+1). In this case the test becomes: (p) structure of a Ι of the dependent variable x t is determined using the recursive procedure in the light of a Langrange Multiplier (LM) autocorrelation test (for orders up to four), which is asymptotically distributed as chi-squared distribution and the value t-statistic of the coefficient associated with the last lag in the estimated autoregression; The critical values for the Phillips-Perron unit root tests are obtained t n , t c and t t are the PP statistics for testing the null hypothesis the series are not I(0) when the residuals are computed from a regression equation without an intercept and time trend, with only an intercept and with both intercept and time trend, respectively. The critical values at 1, 5 and 10% are -2.62, -1.94, -1.61, for t n , -3.60, -2.93, -2.60 for t t and for -4.19, -3.52, -3.19 for t τ respectively; k = bandwidth length: Newey-West using Bartlett kernel; h c and h t are the KPSS statistics for testing the null hypothesis that the series are I(0) when the residuals are computed from a regression equation with only an intercept and intercept and time trend, respectively. The critical values at 1%, 5% and 10% are 0.73, 0.46 and 0.34 for h c and 0.21, 0.14 and 0.11 for h t respectively (Kwiatkowski et al., 1992, Table 1); Since the value of the test will depend upon the choice of the 'lag truncation parameter', l; l = B and width length: Newey-West using Bartlett kernel; ***, ** and *: Indicate that those values are not consistent with relative hypotheses at the 1%, 5% and 10% levels of significance relatively Which is the one considered here. Obviously the value of the test will depend upon the choice of the 'lag truncation parameter', l. Here we use the sample autocorrelation function of ∆e t to determine the maximum value of the lag length l) statistics. The KPSS statistic tests for a relative lag-truncation parameter (l), in accordance with the default Bartlett kernel estimation method (since it is unknown how many lagged residuals should be used to construct a consistent estimator of the residual variance), rejects the null hypothesis in the levels of the examined variables for the relative lag-truncation parameter (l).
The econometric software Eviews which is used to conduct the ADF, PP, KPSS tests, reports the simulated critical values based on response surfaces. The results of the ADF, PP, KPSS tests for each variable appear in Table 1. If the time series (variables) are non-stationary in their levels, they can be integrated with integration of order 1, when their first differences are stationary.
Johansen Co-Integration Analysis
Following the studies of Chang (2002) and Vazakidis and Adamopoulos (2009), since it has been determined that the variables under examination are integrated of order 1, then the co-integration test is performed. The testing hypothesis is the null of non-co-integration against the alternative that is the existence of cointegration using the Johansen maximum likelihood procedure (Johansen and Juselius, 1990;. According to Chang and Caudill (2005) once a unit root has been confirmed for a data series, the question is whether there exists a long-run equilibrium relationship among variables. According to Engle and Granger (1987), a set of variables, Y t is said to be co-integrated of order (d, b)-denoted CI(d, b)-if Y t is integrated of order d and there exists a vector, β, such that β′Y t is integrated of order (d-b). Co-integration tests in this study are conducted using the method developed by Johansen (1988) and Johansen and Juselius (1990).
The multivariate co-integration techniques developed by Johansen (1988) and Johansen and Juselius (1990) using a maximum likelihood estimation procedure allows researchers to estimate simultaneously models involving two or more variables to circumvent the problems associated with the traditional regression methods used in previous studies on this issue. Therefore, the Johansen method applies the maximum likelihood procedure to determine the presence of cointegrated vectors in nonstationary time series.
AJEBA
Following the study of Chang and Caudill (2005); Johansen (1988) and Johansen and Juselius (1990) propose two test statistics for testing the number of cointegrated vectors (or the rank of Π) the trace (λ trace ) and the maximum eigenvalue (λ max ) statistics.
The likelihood Ratio Statistic (LR) for the trace test (λ trace ) as suggested by Johansen (1988) The λ trace statistic tests the null hypothesis that the number of distinct characteristic roots is less than or equal to r, (where r is 0, 1, or 2) against the general alternative. In this statistic λ trace will be small when the values of the characteristic roots are closer to zero (and its value will be large in relation to the values of the characteristic roots which are further from zero).
Alternatively, the maximum eigenvalue (λ max ) statistic as suggested by Johansen is presented in Equation 4b: ( ) max r 1 r, r 1 T ln(1 ) The λ max statistic tests the null hypothesis that the number of r co-integrated vectors is r against the alternative of (r+1) co-integrated vectors. Thus, the null hypothesis r = 0 is tested against the alternative that r = 1, r = 1 against the alternative r = 2 and so forth. If the estimated value of the characteristic root is close to zero, then the λ max will be small.
It is well known that Johansen's co-integration tests are very sensitive to the choice of lag length. Firstly, a VAR model is fitted to the time series data in order to find an appropriate lag structure. The Schwarz Criterion (SC) and the Likelihood Ratio (LR) test are used to select the number of lags required in the co-integration test. The Schwarz Criterion (SC) and the Likelihood Ratio (LR) test suggested that the value p = 3 is the appropriate specification for the order of VAR model for Greece. Table 2 shows the results from the Johansen cointegration test.
Vector Error Correction Model
According to Chang and Caudill (2005) since the variables included in the VAR model are found to be cointegrated, the next step is to specify and estimate a Vector Error Correction Model (VECM) including the error correction term to investigate dynamic behavior of Science Publications AJEBA the model. Once the equilibrium conditions are imposed, the VEC model describes how the examined model is adjusting in each time period towards its long-run equilibrium state.
Since the variables are co-integrated, then in the short run, deviations from this long-run equilibrium will feed back on the changes in the dependent variables in order to force their movements towards the long-run equilibrium state. Hence, the co-integrated vectors from which the error correction terms are derived are each indicating an independent direction where a stable meaningful long-run equilibrium state exists.
The VEC specification forces the long-run behavior of the endogenous variables to converge to their cointegrated relationships, while accommodates short-run dynamics. The dynamic specification of the model allows the deletion of the insignificant variables, while the error correction term is retained. The size of the error correction term indicates the speed of adjustment of any disequilibrium towards a long-run equilibrium state (Engle and Granger, 1987). The error-correction model with the computed t-values of the regression coefficients in parentheses is reported in Table 3.
The final form of the Error-Correction Model (ECM) was selected according to the approach suggested by Hendry (Maddala, 1992). Following the study of Chang (2002) Where: ∆ = The first difference operator EC t-1 = The error correction term lagged one period λ = The short-run coefficient of the error correction term (-1<λ<0) ε t = The white noise term
Granger Causality Tests
Granger causality is used for testing the long-run relationship between stock market development and economic growth. The Granger procedure is selected because it consists the more powerful and simpler way of testing causal relationship (Granger, 1986). The following bivariate model is estimated as follows: Where: Y t = The dependent X t = The explanatory variable u t = A zero mean white noise error term in Equation 6 while X t = The dependent Y t = The explanatory variable in Equation 7 In order to test the above hypotheses the usual Wald F-statistic test is utilized, which has the following form: According to Seddighi et al. (2000) and Katos (2004) The results related to the existence of Granger causal relationships among economic growth, stock market development and interest rate appear in Table 4.
RESULTS
The observed t-statistics fail to reject the null hypothesis of the presence of a unit root for all variables Science Publications AJEBA in their levels confirming that they are non-stationary at 1, 5 and 10% levels of significance but when they are transformed into their first differences become stationary and integrated of the same order (Table 1). Therefore, the combined results (ADF, PP, KPSS) from all tests can be characterized as integrated of order one, I(1).
These variables can be co-integrated as well, if there are one or more linear combinations among the variables that are stationary. The results that appear in Table 2 suggest that the number of statistically significant cointegrated vectors for Greece is equal to 1 ( Table 2) and is the following one in (Equation 5a): The co-integrated vector of the model of Greece presented in Table 2 has rank r<p (p = 2). The process of estimating the rank r is related with the assessment of eigenvalues, which are the following for Greece: It is obvious from the above co-integrated vector that economic growth has a positive effect on stock market development in the long-run, while interest rate has a negative effect on stock market development. According to the signs of the vector co-integration components and based on the basis of economic theory the above relationships can be used as an error correction mechanism in a VAR model for Greece respectively.
The error-correction model with the computed tvalues of the regression coefficients in parentheses is reported in Table 3. The dynamic specification of the model allows the deletion of the insignificant variables, while the error correction term is retained.
From the results of Table 3 we can see that a shortrun increase of economic growth per 1% induces an increase of stock market index per 0.41% in Greece, while an increase of interest rate per 1% induces a decrease of stock market index per 1.42% in Greece.
The estimated coefficient of EC t-1 is statistically significant and has a negative sign, which confirms that there is not any problem in the long-run equilibrium relation between the independent and dependent variables in 5% level of significance, but its relatively value (-0.596) for Greece shows a satisfactory rate of convergence to the equilibrium state per period ( Table 3). From the above results the VAR model in which stock market development is examined as a dependent variable has obtained the best statistical estimates. In order to proceed to the Granger causality test the number of appropriate time lags was selected in accordance with the VAR model.
According to Granger causality tests there is a unidirectional causality between stock market development and economic growth with direction from economic growth to stock market development and a unidirectional causal relationship between economic growth and interest rate with direction from economic growth to interest rate ( Table 4).
DISCUSSION
The model of stock market development is mainly characterized by the effect of economic growth and interest rate. Stock market development is determined by the trend of general stock market index. The significance of the empirical results is dependent on the variables under estimation.
Most empirical studies examine the causal relationship between stock market development and economic growth using different estimation financial measures like stock market capitalization, stock market liquidity and general stock market index.
Granger causality test is the more powerful causality test based on the methodology of vector error correction model in relation to other causality tests like Geweke, Sims, Toda and Yamamoto.
Theory provides conflicting aspects for the direction of causality between stock market development and economic growth. Most empirical studies suggested that there is a unidirectional causality between stock market development and economic growth with direction from stock market development to economic growth, while less empirical studies have found bilateral causality between economic growth and stock market development or unidirectional causality with direction from economic growth to stock market development.
The results of this study are agreed with the studies of Levine and Zervos (1998) and Shan (2005). Therefore the direction of causal relationship between stock market development and economic growth is regarded as an important issue under consideration in future empirical Science Publications AJEBA studies. However, more interest should be focused on the comparative analysis of empirical results for the rest of European Union members-states using different estimation measures and causality estimation methods.
CONCLUSION
This study employs with the relationship between financial development and economic growth for Greece, using annually data for the period 1978-2007. The empirical analysis suggested that the variables that determine economic growth present a unit root. Once a co-integrated relationship among relevant economic variables is established, the next issue is how these variables adjust in response to a random shock. This is an issue of the short-run disequilibrium dynamics.
The short run dynamics of the model is studied by analyzing how each variable in a co-integrated system responds or corrects itself to the residual or error from the cointegrating vector. This justifies the use of the term error correction mechanism. The Error Correction (EC) term, picks up the speed of adjustment of each variable in response to a deviation from the steady state equilibrium. The VEC specification forces the long-run behavior of the endogenous variables to converge to their cointegrating relationships, while accommodates the shortrun dynamics. The dynamic specification of the model suggests deletion of the insignificant variables while the error correction term is retained. Economic growth has a direct positive effect on stock market development while interest rate has a negative effect on stock market development and economic growth respectively.
The results of Granger causality tests indicated that there is a unidirectional causality between stock market development and economic growth with direction from economic growth to stock market development and a unidirectional causal relationship between economic growth and interest rate with direction from economic growth to interest rate.
|
2019-05-14T14:03:28.582Z
|
2012-06-12T00:00:00.000
|
{
"year": 2012,
"sha1": "b589da01518497600fe983f1888770286c873597",
"oa_license": null,
"oa_url": "https://thescipub.com/pdf/ajebasp.2012.135.143.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "838b90897aca7a622ecc85168b389d8da76dd333",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
267818008
|
pes2o/s2orc
|
v3-fos-license
|
Learning to Cope With Diversity in Music Retrieval
The approach presented here makes productive use of the multidimensionality of music retrieval. It exploits heterogeneous representations of music objects into a self-adapting retrieval system. The different perspectives of users can be expressed by relevance feedback and serve as the direction for a learning process which ultimately leads to an optimal solution for a user within a certain context. The paper explores the diversity within music retrieval stemming from an abundance of approaches for representing musical objects as well as methods for searching for similarity. As a result, the system designer is usually confronted with a large number of arbitrary decisions. These challenges are discussed within the M-MIMOR framework, which provides an appropriate solution. A fusion with linear combination guarantees that every perspective is integrated. The weight and therefore the strength of one perspective is reflected by the weight of the representation scheme or matching algorithm into the fusion. These weights are adapted according to their success in previous retrieval tasks.
Introduction
Most areas within information retrieval (IR) have created a variety of domain-specific representation, matching and user modeling schemes.For example, there is still a discussion in cross-language text retrieval whether words or n-grams of letters are the most appropriate items for the representation of language (Peters, Braschler, Gonzalo, & Kluck, 2002).
The complexity of other application domains results in even higher levels of diversity.Especially multimedia abounds in the number of possible representation mechanisms.Image retrieval systems may be based on color, texture, histograms, orientation shapes or objects (Santini, 2001).Video adds a temporal aspect as well as possibly sounds.A video retrieval system needs to consider frames as well as the combination and mutual dependency between sounds and graphics (Smeaton, 2001).
Designers of music retrieval systems are also confronted with a large number of choices.A look at some of the implementations proves this point (Downie & Bainbridge, 2001;Fingerhut, 2002).Furthermore, the forms for the user query include a significant number of different parameter choices.
The search for appropriate algorithms and units or atoms for proper representation within the universe of possibilities seems to be a curse, since each solution neglects several aspects.However, it can turn out to be a blessing when the diversity is integrated into a self adapting fusion system considering many heterogeneous solutions.This paper presents a model including a machine-learning approach, to balance the influence of several system parameters according to users' preferences.
The following section introduces information retrieval and related concepts.The third section demonstrates in more detail that music retrieval is confronted with a highly diverse solution space.Fusion approaches for information retrieval are discussed in the fourth section.Next, a self adapting fusion system is introduced which takes the challenges faced by music retrieval into account and makes productive use of them.
Information retrieval
Information retrieval deals with the storage and representation of knowledge and the retrieval of information relevant for a special user problem. he information seeker formulates a query which is compared to document representations extracted during the indexing phase.
The query formulation process is influenced by the users' state of knowledge, their context as well as by the user interface or query language.
Thomas Mandl and Christa Womser-Hacker The representations of documents and queries are typically matched by a similarity function such as the Cosine or the Dice coefficient.The most similar documents are presented to the users who can evaluate the relevance with respect to their problem.Figure 1 gives an overview of the retrieval process.
This information retrieval process is inherently vague.Documents and queries traditionally contain natural language or more and more multimedia objects like graphics, pictures or music pieces.The content of these objects must be analyzed, which is a hard task for artificial systems.Robust semantic analysis of large text collections or multimedia objects has yet to be developed.Therefore, text documents are represented by natural language terms mostly without syntactic or semantic context.These keywords or terms can only imperfectly represent an object.In text retrieval, queries and documents are represented via terms or descriptors.In multimedia retrieval, the context is essential for the selection of a form of query and document representation.Different media representations may be matched against each other or transformations may become necessary (e.g., to match terms against pictures or spoken language utterances against documents in written text).
Because information retrieval needs to deal with vague knowledge, exact processing methods are not appropriate.Vague retrieval models such as a probabilistic model are more suitable.Within these models, terms are provided with weights corresponding to their importance for a document.These weights mirror different levels of relevance.An overview of information retrieval can be found in Baeza-Yates and Ribeiro-Neto (1999).
Diversity in music retrieval
The complexity of music as a formal system as well as a cultural phenomenon leads to difficulties for the computational representation.A single note by itself has no meaning and there is no correspondence to the syntax nor to words in natural language.In addition, many different technical formats for storing musical data exist (Fingerhut, 2002).
Traditionally, music retrieval has been merely based on textual meta-data, such as author, performer or the name of a piece.In recent years, content-based retrieval methods have been developed which focus on features automatically derived from music objects (Lippincott, 2002).These features try to describe the musical content.Both meta-data retrieval and content-based methods may be appropriate for specific contexts.An optimal music retrieval system allows both and enables cooperation between these approaches.
Musical styles
Music as a cultural phenomenon has led to an abundance of musical styles.They all use different methods to express their intentions or let the listener enjoy a pleasing experience.Sounds, notes, pauses, tempi, instruments, and voices are combined in many ways.This multi-modal structure of music represents one of the roots of the challenge of music retrieval.Depending on the style, different atoms are assembled for a composition.Which elements can be considered as the prominent feature, such as a melody or a theme, depends largely upon the style, the user need and context.Therefore, a formal model for the representation allows an optimization only for one style of music.
Representation
The possibility to store music in digital representations has made it increasingly attractive to search large music collections.In contrast to texts, music "documents" lack separa- tors necessary to identify semantic units such as "words" or "phrases."Like words in texts, the same melodic pattern may occur in more than one piece of music, perhaps composed by different composers.The same entity can be represented in two different main forms: the notated and the acoustic form.Music communication is performed at two levels: the composer creates a musical structure while the interpreter (musician or singer) translates the written score into sounds.The resulting performances may differ a lot from each other.Knowledge within a musical work can be identified at different levels: melody, harmony and rhythm can be assessed in written formats, whereas in the case of musical performances other dimensions like timbre, articulation, and timing may be of interest.However, only a subset of these dimensions is captured by musical representation formats like MIDI.
The most widespread mode for music retrieval is search via similarity.However, similarity in music retrieval presents several difficulties: what part of a song is likely to be perceived as the theme of the music?How can one determine whether two pieces of music with different sequences of notes represent the same theme?Melucci and Orio (1999) discuss some of these issues of content-based indexing of musical data.
Most systems search for similarity on a level of pitch.Usually, these systems, like SEMEX (Lemström & Perttu, 2000), only process monophonic melodies; however, for some musical styles polyphonic matching would be desirable.Further parameters lie in the consideration of transposition invariance and the global or local focus of a representation.A global representation might, for example, only consider a histogram analysis of pitch values.Approaches from speech recognition have also been applied to music retrieval (Logan, 2000).
In Foote (1999), a novel approach to visualizing the time structure of music is presented.The acoustic similarity between any two music objects is displayed in a 2D representation, allowing the identification of structural and rhythmic properties.
Retrieval models
The choice of the retrieval model is an important factor in any domain.One central aspect in a information retrieval model is the similarity calculation between query and object representations.In music information retrieval, a large number of functions to calculate the similarity between melodies have been proposed (Rolland & Ganascia, 1999).They consider the closeness of the match between query input and database objects.In most cases, acoustic information is transcribed and converted into intervals and is used for deriving feature vectors.These vectors can be compared with input vectors with respect to measuring similarity.The vector-space retrieval model can provide ranked output according to the match of query and document vectors.
Input mode
Existing music retrieval systems allow a variety of input modes for querying.They can be classified into two fundamental modes: first, the specification of retrieval criteria using textual meta-data, second, the query by example mode, which accepts the users' acoustic input like singing or humming for example via a microphone, an uploaded file or typing in part of a melody e.g., by a MIDI keyboard.Haus and Pollastri (2000) present a hierarchy of music interfaces that reflects the level of expertise of a particular type of user.At the top of the hierarchy the textual input is placed as the easiest mode of querying, while music notation takes the last position as it is considered to be the most difficult interaction mode requiring considerable expertise.
Previous prototypes on music retrieval mostly concentrate on only one input mode.Naoko, Yuichi, Tetsuo, Masashi, and Kazuhiko (2000) and Rolland (2001) used vocal input, while Uitdenbogerd and Zobel (1999) for example, preferred input via a MIDI interface.
Most online music library catalogues use textual metadata.They follow the tradition of textual information retrieval where the rules for maintaining consistency have been refined over many years.These systems do not allow searching by musical content.Within the OMRAS project an integrated approach of the two types of systems is investigated (Dovey, 2001).
Modes requiring the automatic detection of criteria like humming are associated with a level of vagueness or fuzziness resulting from the uncertainty related to the reliability of the detection method.Many efforts in this area focus on music data that contain some built-in semantic information structures or focus on classification of music.
User intentions
The intentions of users of music retrieval systems vary greatly.The usage scenarios comprise entertainment, learning, research or support for composition.Users with different skills (trained musicians or non-professionals) may interact with music systems.The usefulness of multimedia systems largely depends on the way they match the users' expectations and their technical as well as musical skills.
Implementation as selection within a solution space
All parameters for music retrieval discussed above need to be considered when implementing a system.Each parameter represents one dimension in the solution space for a specific retrieval system.The space of potential solutions is highly dimensional.The determination of the value of all parameters defines an instance of a retrieval system as illustrated in Figure 2.
The search for a solution within the high dimensional space has the goal of achieving a good retrieval quality.This search is guided either by heuristics or by empirical results.
Finding an optimal solution requires a large testbed of tasks and evaluation of the results by users or experts.
However, when the conditions change, a different solution might be optimal.These changes may be the consequence of different queries, new user interests or changes of the music content.Now, another solution might produce the optimal result.
Approaches for coping with diversity and heterogeneity
As a consequence of the heterogeneity presented above, most existing music retrieval systems are focused on one type of music only.This form of content specialization is a typical reaction to such complex domains; however, it limits the content and is therefore not desirable for many applications.Haus and Pollastri (2000) developed a multimodal prototype for different users in which the musician who is able to write a musical query on a score can play it with a musical instrument or sing it while the layman can only sing or query by textual data.They suggest translations of audio inputs to pass from the acoustic to the symbolic domain.
Another solution for a highly dimensional optimization problem lies in the active adaptation by the user.In a system suggested by Bainbridge (2000), the user can set a large number of parameters of the matching function himself.These settings include duration, start of pattern and type of match.This solution is effective for musical experts who can predict the consequences of their settings, whereas, the layman may not even know what the parameters mean.The expert may also profit from a more dynamic approach which optimizes the settings according to the context.
An approach focusing on efficiency is presented in Lemström, Wiggins, and Meredith (2001), where three layers are implemented.The first level is more efficient and less thorough.The longer a user waits, the more occurrences to his query may be encountered.This seems to be an appropriate strategy; however, different contexts may already tend to have a good performance in one layer only.
Another highly promising strategy is an adaptive user model such as the one presented by Rolland (2001), which takes into account the multidimensionality of human similarity judgement as well as the different importance of representations.Consequently, a weight vector is assigned to each user representing the importance of different representations.The model is adapted by user feedback, which has proved highly effective in text retrieval.However, the initialization assigns arbitrary weights and the context is not modeled.Both aspects are important, since the first contact with a system should lead to reasonably good results.The consideration of context is crucial because the needs of the user may be dynamic and different optimizations are necessary when the same user queries different musical styles.
Optimization through fusion
The fusion of various approaches is widely used in computer science.The goal of applying several algorithms is to improve the overall performance.Fusion methods delegate a task to several systems and integrate their results into one final result presented to the user.Ideally, the weaknesses of one method do not have a large negative influence on the final result because they are superimposed by another method.Typical examples are committee machines in machine learning (Haykin, 1999).The fusion may be implemented as a voting scheme or as a weighted linear combination.Recently, non-linear committee machines like boosting or bagging have drawn considerable attention because of their high effectiveness (Witten & Frank, 2000).
Fusion in information retrieval
For information retrieval, fusion can also be implemented as a combination of several algorithms.The integration considers several different probabilities for the relevance of a document to a query and calculates one final similarity measure.
Fusion has led to a significant amount of research in information retrieval.This is especially true since experiments carried out within the framework of TREC (Text Retrieval Conference) have shown that the results of similarly wellperforming information retrieval systems often differ.This means that the systems find the same percentage of relevant documents, but the overlap between their results is sometimes low.TREC is an initiative which has led to a higher level of comparability in information retrieval.Whereas, before TREC, most researchers used their own small collection to test their systems, TREC now provides a testbed for the empirical evaluation of different systems (Voorhees & Harman, 2001).
Because of these results, fusion seems to be a promising approach and has been applied to text retrieval (Fox & Shaw, 1994;Vogt & Cottrell, 1998;McCabe, Chowdhury, Grossmann, & Frieder, 1999;Savoy, 2002).A model for a fusion system is presented in Figure 3.A different kind of fusion is carried out by the popular internet meta search engines.These machines have been developed because of the fact that search engines in the internet can hardly index all the documents of the internet.Meta search engines attempt to create a greater basis for the search for relevant material by combining the results of single search engines.However, it is not clear whether meta search really leads to better retrieval results.Some empirical studies have shown no improvement (Wolff, 2000).
The MIMOR model
MIMOR (Multiple Indexing and Method-Object Relations) is a fusion approach taking advantage of heterogeneity (Womser-Hacker, 1997).The MIMOR model samples users' relevance feedback to predict optimal method-object relations where methods are indexing algorithms or retrieval models.These are assigned to the characteristics of users and documents with the goal of improving the overall retrieval quality.From a computational viewpoint, MIMOR is designed as a linear combination of the results of different retrieval systems.The contribution of each system or algorithm to the fusion result is governed by a weight for that system.
A central aspect in MIMOR is learning.The weight of the linear combination of each information retrieval system is adapted according to the success of this system in previous search tasks.The success is measured by the relevance feedback of the users.A system which gave a high retrieval status value (RSV) and consequently a high rank to a document which then received positive relevance feedback contributes Learning in MIMOR leads to a fusion which combines the individual systems in an optimal way after a sufficient period of usage.As a result, MIMOR takes advantage of two of the most promising strategies for improving information retrieval systems, these are relevance feedback and fusion.However, the optimal combination may depend on the context and especially on the users' individual perspectives and the characteristics of the documents.Therefore, MIMOR needs to consider context.
Evaluation
So far, MIMOR has been evaluated twice on a large scale with text data from the Cross Language Evaluation Forum (CLEF, cf.Peters, Braschler, Gonzalo, & Kluck, 2002).In one set of experiments, a corpus of 80000 text documents was processed by a retrieval system with different parameter settings.The results were fused with equal weights and with optimized weights.These optimal weights were derived from the CLEF 2001 data.The fusion with MIMOR led to encour- aging results and gave a much higher performance than the single systems (Hackl, Kölle, Mandl, & Womser-Hacker, 2002).
Our focus is on another set of experiments, where the heterogeneity of the retrieval systems was higher.Therefore, it is more applicable to music retrieval where the differences between the systems are quite substantial.In this experiment, MIMOR was evaluated with a commercially available retrieval software, in this case IBMs DB2 text extender. 1Details of this evaluation can be found in Li (2002).
The corpus consisted of all issues of the German news magazine Der Spiegel from the year 1994 and contained some 14000 documents.Queries were formed from the 30 CLEF topics from the 2000 campaign.The CLEF topics contain three parts; a title, a short description and a longer description (narrative).All of these parts were used for the experiments with DB2.
Text Extender allows many parameter settings mainly based on different linguistic processing modules.Some of these parameter settings were used to construct the different systems for our fusion experiment.Text Extender supports the Boolean retrieval model as well as a probabilistic model.It comprises linguistic pre-processing including stemming and a n-gram approach which does not use words but n-grams as basic units.Alternatively, a precise index without pre-processing was used.
As Table 1 shows, there is an increase in the quality of the retrieval results which lies between 5% and 11%.This gain is calculated from the best individual retrieval result.This means that fusing the best result with another result which may be worse leads to an overall improvement.As Table 1 shows, the variability between the individual retrieval systems modeled within DB2 is high.The probabilistic and the Boolean retrieval model differ in their basic approach.Furthermore, some of the runs use words and others use ngrams as a basic representational unit.Therefore, fusion is an especially promising approach in a highly heterogeneous environment like music retrieval.
Context model
The performances of information retrieval systems differ from domain to domain, and characteristics of the documents relevant for the indexing procedure may be responsible.In one experiment for example, optimal similarity functions for short queries could be developed (Kwok & Chan, 1998).MIMOR builds upon the idea that formal properties can be exploited to improve fusion.Some retrieval methods work better, for example, for short documents.The weight of these systems should be high for short documents only.Some characteristics of text documents seem to be good candidates for such distinctions.Length, difficulty, syntactic complexity, and even layout can be assessed automatically.
These properties are modeled as clusters.All documents which have a property in common belong to the same cluster.Each cluster can develop its own adequate MIMOR model with weights for all participating systems.
The term clustering is usually used for non-supervised learning methods which find structures in data without hypotheses.However, the assignment of text documents to clusters for the improvement of information retrieval processes may also be carried out with supervised learning methods.Therefore, the term cluster in this article does not restrict this process to algorithms based on unsupervised learning.Supervised learning methods for pre-defined classes and even human assignment are compatible with MIMOR.
A theoretical justification for a cluster model can be found in the evaluation strategies for clustering algorithms like minimal description length or category utility (Witten & Frank, 2000).Category utility estimates the value of a cluster by checking how well it can be used to predict attribute values of objects.Clusters are good if the probability of an object having a certain value is higher for objects in a specific cluster than for all objects.If good clusters are found and one attribute is an appropriate retrieval system, then the probability is high that a good retrieval system for that specific object is used.
Introducing clusters in MIMOR can be regarded as the implementation of an individual MIMOR model for each cluster.The final result considers only the weight of the cluster to which the document belongs.The learning formula needs to be modified accordingly.The change in the weight is now applied only to the cluster containing the document.
Clustering documents is a tedious task.In many cases, the hard assignment of a document to only one class is difficult.Therefore, this condition needs to be relaxed and fuzzy clustering has also been integrated into MIMOR (Mandl & Womser-Hacker, 2001).
User model
Further refinement of MIMOR can be achieved by integrating a user model.Unlike other user models in information retrieval, MIMOR introduces an adaptation in the core of an information retrieval system and applies it to the calculation of the RSV.
Similar to the properties of the documents, an additional MIMOR model for each person could be introduced, leading to optimal user models.However, the training of a MIMOR model requires a substantial amount of relevance feedback decisions.Therefore, the user is forced to submit many decisions before the system can be used effectively.Another disadvantage is common to all inductive and incremental learning algorithms.The occurrence of some unusual cases in the initial learning phase may lead the algorithm to an unstable learning curve.This may result in a degradation of the retrieval behavior.
Both problems are solved by introducing separate private and public models.The private model contains a user specific MIMOR model optimized by all the relevance feedback decisions of that user.The public model is trained with all decisions of all users of the system.The public MIMOR is optimized but not individualized.Therefore, it can be used for any user beginning to work with MIMOR because an individual model is not available.Over time, such a beginner will collect a significant number of relevance judgements and will eventually reach a fully individualized and saturated model.During this process, the public model will lose its influence while the importance of the private model grows.
The user model in MIMOR differs from many individualization approaches in information retrieval.Often, the individual preferences are stored as a content model.Many systems use interest vectors.MIMOR applies individualization to the algorithmic layer of the system.
M-MIMOR: Self adaptation for music retrieval systems
The MIMOR approach is very well suited for music retrieval.Music retrieval incorporates high diversity along several dimensions of system parameters.The choices for parameter values are almost arbitrary.On the other hand, MIMOR offers a fusion method which learns from the preferences of the user.A mapping is established between the application of system features and success expressed by positive user feedback.Instead of focusing on one value for each system parameter, each user receives the most appropriate mixture of the options available.As a consequence, we propose a MIMOR for music objects called M-MIMOR.
The diversity in music retrieval approaches has been sketched in Section 3. In the following sections, these aspects of diversity are handled by our M-MIMOR model.Based on the literature and experience from text retrieval, the following distribution of fusion parameters is most favorable.Representation and matching aspects are treated in the basic MIMOR system by allowing a variety of representations.Different styles and contexts are consequently treated in the context model.The heterogeneous user population and different usage scenarios need to be captured by the user model.
Representation of musical objects in M-MIMOR
A large variety of representation formalisms has been presented in Section 3.For a fusion, the aspects shown in Table 2 should be integrated when available in order to achieve a highly heterogeneous representation mix.
Further aspects may need to be integrated in specific situations.Matching aspects do not play such an important role in music retrieval.Since the representations are very different, the similarity algorithms are often adapted for specific representation schemes.Combining all matching methods and representations schemes is sometimes useful in text retrieval.However, it will rarely prove useful in music retrieval because it may lead to inappropriate combinations.
M-MIMOR context model
Automatic genre detection systems have also been developed for music objects (Tzanetakis, Essl, & Cook, 2001).Therefore, genre can be used as one feature in M-MIMOR.The calculation of the similarity between query and musical objects needs to consider not only the systems involved.In addition, the clusters to which an object belongs and the membership function M enter the formal model.
M-MIMOR user model
The reasons for similarity or relevance judgements in music retrieval are highly subjective.Each user may find a different combination of musical characteristics important in a certain situation and apply them to his individual judgement.
Because MIMOR integrates different aspects of music it must be individualized to reach a high overlap between the users' preferences and the internal representation.Between public and private models, another layer for group-specific MIMOR models, e.g., for researchers, could be implemented in the future.
Conclusion
This article introduces a model for music retrieval which automatically learns to adapt itself to the cognitive preferences of the user and supports the multimodal nature of music.Since the evaluation of musical objects is highly subjective, a retrieval system needs to dynamically identify the most appropriate combination of system parameters for a given user.M-MIMOR manages this integration in a linear combination of many possible variables.Consequently, M-MIMOR takes personalization and adaptivity one step further.
As a result, no viewpoint expressed in a certain algorithm or representation method is necessarily neglected but may contribute with a useful weight to the final result.The fusion of a diversity of perspectives will ultimately lead to better retrieval performance.
weight to the final result.The effect of this learning process is shown in Figure4.The following formula enables such a learning process:
, will offer potential for the evaluation of M-MIMOR.A previous version of this paper was published in the proceedings of ISMIR: Fingerhut, Michael (ed.): ISMIR 2002 Proceedings: Third International Conference on Music
Table 1 .
Overview of the experiments.
|
2019-02-11T14:03:06.225Z
|
2003-06-01T00:00:00.000
|
{
"year": 2003,
"sha1": "b6ec8fd010f6e7d47a78c6e1321386e92fe69a06",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/1416560/files/MandlW02.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "99b0b100be2fea309a27e625ca772dc3c9065a31",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
30209985
|
pes2o/s2orc
|
v3-fos-license
|
Molecular Phylogeny of Viperidae Family from Different Provinces in Saudi Arabia
Molecular systematic is important in solving the problematic taxonomy of venomous snakes and development of antivenins. The current study aimed to investigate the molecular phylogeny of 4 venomous species in Saudi Arabia using mitochondrial (mt) 16S rRNA gene. DNA extracted from blood and mt16S rRNA gene amplified by PCR. Sequences submitted to gene bank, examined for similarity with other sequences in the data base using BLAST search, aligned using Clustal W method, and phylogenetic tree was constructed. E. coloratus clustered as a separate group along with the isolate from Oman and Yemen with insignificant relation. Cerastes and Bitis arietans groups strongly correlated as sister taxa. The current B. arietans sample did not significantly correlate to the previously published sample from the same province (Taif). Cerastes groups did not significantly correlate with samples from Egypt or Israel. This information might reflect the need for multiple gene markers for better molecular systematic.
Introduction
Snake bite affects hundreds of thousands of people and tens of thousands are killed or maimed by snakes annually (Warrell, 2010).Envenomation due to terrestrial snakes is a common and frequently devastating environmental and public health problem in some areas of Saudi Arabia (Malik, 1995;AlHarbi, 1999;Ismail et al., 2007).Effective clinical interventions rely on better identification of snake subspecies.
The taxonomy of venomous snakes is complicated because of the complex variable nature of the medically important species (Gillissen et al., 1994).Their classifications have been changed frequently based on new discoveries.Attempting to classify snakes was challenged by identification of novel species and variations in venom composition within subspecies.
The phylogeny of snakes is of considerable interest for the resolution of a problematic taxonomy, which furthermore impinges on the treatment of snakebite.The mitochondrial (mt) genomes of snakes contain a number of characteristics that are unusual among vertebrates.Studies based on complete or near-complete snakes' mt genome sequences demonstrated a peculiar accelerated mt gene evolution (Douglas & Gower, 2010).Higher-level snake relationships are inferred from sequence analyses of four mitochondrial genes [12S rRNA (ribosomal ribonucleic acid), 16S rRNA, ND4 (NADH dehydrogenase subunit 4) and cytochrome b) (Heise et al., 1995;Keogh et al., 1998;Lenk et al., 2001;Vidal & Hedges 2002a & b).rRNA is the one of the only genes present in all cells (Smit et al., 2007).For this reason, genes that encode the rRNA are sequenced to identify an organism's taxonomic group, calculate related groups, and estimate rates of species divergence.Thus many thousands of rRNA sequences are known and stored in specialized databases such as RDP-II (Ribosomal Database Project-II) (Cole et al., 2003) and SILVA (a comprehensive online resource for quality checked and aligned ribosomal RNA sequence data compatible with ARB soft ware) (Pruesse et al., 2007).
Here we present analyses of DNA sequence data bearing on the relationships and biogeography of Viperidae family.We sampled 18 samples of Viperidae (Cerastes cerastes, Cerastes gasperettii, Echis coloratus, and Bitis arietans) from different provinces in Saudi Arabia (Hail, Taif, Riyadh, and Medina).Our analyses indicate a presence of genetic diversity of Viperidae in Saudi Arabia and elucidate the urgent need for peculiar antivenin against those species.
Materials and methods
Blood samples were collected from fieldwork collections (table 1).The use of animals for research followed the Interdisciplinary Principles and Guidelines for the Use of Animals in Research, Testing, and Education by the New York Academy of Sciences, Ad Hoc Animal Research Committee.Genomic DNA (deoxyribonucleic acid) was extracted using the Axy Prep Blood Genomic DNA Miniprep kit (Axygen Biosciences, USA).Briefly, 200 μl of anti-coagulated whole blood mixed with 500 μl of buffer AP1 (cell lysis buffer) by vortexing at top speed for 10 seconds.Subsequently, 100 μl of buffer AP2 (protein-depleting buffer) were added and mixed by vortexing at top speed for 10 seconds.The mixture was then centrifuged at 12,000×g for 10 minutes at ambient temperature to pellet cellular debris.Binding was performed by applying the clarified supernatant to the Miniprep column and centrifugation at 6,000×g for 1 minute.The bound DNA was washed using 700 μl of buffer W1A (wash buffer).The Miniprep column allowed to stand at room temperature for 2 minutes then centrifuged at 6,000×g for 1 minute.Desalting was performed twice using 800 μl then 500 μl of buffer W2 (desalting buffer) followed by centrifugation at 12,000×g for 1 minute.The Miniprep column was then centrifuged at 12,000×g for 1 minute to get rid from any residual solutions.Finally, DNA was eluted in 200 μl of pre-warmed (at 65°C) TE buffer (10 mM Tris, pH 8.0 and 1 mM EDTA).Columns allowed to stand at room temperature for 1 minute then centrifuged at 12,000×g for 1 minute and the eluted DNA stored at -30°C till use.
DNA sequencing was carried out at King Faisal Specialist Hospital & Research Centre (KFSHRC) (Saudi Arabia, Riyadh).Samples were processed on 3730xl ABI Sequencer with POP7 and 50cm Cap Array-plate number AB6516.
The resulting sequences were submitted to the gene bank, compared to that present in the data base using BLAST analysis (Altschul et al., 1990(Altschul et al., & 2009)).Multiple sequence alignment was carried out using Clustal W method (Chenna et al., 2003, Thompson et al., 1994).Phylogenetic and bootstrapping analyses (Efron et al., 1996) were carried out using DNASTAR Laser gene 8.1MegAlign program (DNASTAR, Inc).
Sequence analysis
The PCR produced fragments > 300nt (ranged from 514 to 866nt) of 16S rRNA (Figure 1) which used for final alignment.All sequences showed quite typical mitochondrial nucleotide composition (Table 1); A = 33.2%,C = 24.4%,G = 22.8%, T = 18.7% (Average value).BLAST (Basic Local Alignment Search Tool) search revealed high similarity (92-99%) with 24 isolates (Table 2).Similarities with other isolates which are not known to be present in KSA were excluded from the analysis.
Phylogenetic analysis
Since the rRNA genes are transcribed, but not translated, they fall in the category of non-coding genes.Therefore, no indels or stop codons were tested and the phylogenetic tree was constructed using DNA sequences.E. coloratus from Saudi Arabia clustered as monophyletic group along with the isolates from Oman (Thumrait) and Yemen (Ghoyal Ba-Wazir, and Bir Ali) with insignificant relation (Figure 2) and low divergence (Figure 3).No relation was detected between E. Coloratus and Cerastes groups.Instead a significant support for a sister-group relationship was detected between B. arietans and Cerastes groups.The current B. arietans sample (sequence identity: 96.6%, Figure 4) did not significantly correlate to the previously published sample (Pook et al., 2009) from the same province (Al-Taif).Cerastes groups did not significantly correlate with neither samples from Egypt or Israel (Figure 3) although the sequence identity is high (Figure 3 & 4).There was cross clustering between Cerastes cerastes and Cerastes gasperettii with no significant correlation (Figure 3).
Discussion
Our results urge for development of specific antivenin against the Viperidae family present in Saudi Arabia provinces.The current results did not support any significant sister-group relationship between Echis and Cerastes which was previously reported by Pook et al. (Pook et al., 2009).In our analysis this relation was strongly negative.However, our results were in accordance with their study regarding the clustering of E. coloratus.Different algorithms, data size and data quality might account for such incongruent data (Pruesse et al., 2007, Spinks et al., 2009).On the other hand, single relationship always achieved for species and genus monophyly (Spinks et al., 2009) which might explain the congruent part.
The weak relation achieved for the current B. arietans sample and that previously published appear to be partially due to variability in the compared fragment lengths (517 versus 691nt; respectively).Although >300nt was considered sufficient to include sequences in alignments (Pruesse et al., 2007), Spinks et al., reported insufficient recovery of well-supported relationships among many genera or species using ~6Kb of nuclear sequence but ~1 kb of mtDNA yielded similar support levels (Spinks et al., 2009).Thus longer mt sequences might be needed in the future studies.
Figure 3. Sequence distance of the Viperidae family
Table 1. Base composition of the new 16S rRNA sequences
Yield-Optimized Superoscillations
Superoscillating signals are band-limited signals that oscillate in some region faster than their largest Fourier component. While such signals have many scientific and technological applications, their actual use is hampered by the fact that an overwhelming proportion of the energy goes into the part of the signal which is not superoscillating. In the present article we consider the problem of optimization of such signals. The optimization that we describe here is that of the superoscillation yield, the ratio of the energy in the superoscillations to the total energy of the signal, given the range and frequency of the superoscillations. The constrained optimization leads to a generalized eigenvalue problem, which is solved numerically. It is noteworthy that it is possible to increase the superoscillation yield further at the cost of slightly deforming the oscillatory part of the signal, while keeping the average frequency. We show how this can be done gradually, which enables a trade-off between the distortion and the yield. We show how to apply this approach to non-trivial domains, and explain how to generalize this to higher dimensions.
Superoscillations are related to superresolution [14]-[20], to superdirectivity or supergain [21], [22], and actually to compression beyond the Fourier limit [23]. Interestingly, it was discovered that in random functions, defined as superpositions of plane waves with random complex amplitudes and directions, considerable regions are naturally superoscillatory [24], [25]. Various mathematical aspects of the phenomenon have been discussed more recently in [26], [27]. This field is also closely related to several other subjects, such as the prolate spheroidal wavefunctions [8], [10], which can be seen as sets of orthogonal superoscillating functions, and the stability of band-limited interpolation [28], [29], where the lack of higher harmonics challenges interpolating procedures that wish to recover signals containing such higher frequencies.
In a very important sense, though, the idea that a band-limited function cannot oscillate faster than its largest Fourier component is not entirely false. It is well known that superoscillations exist in limited intervals of time (or regions of space, depending on the actual problem) and that the amplitude of the superoscillations in those regions is extremely small compared to typical values of the amplitude in non-oscillating regions [2], [11], [12]. It is generically so small that any hope of practical application of superoscillating functions depends on tailoring the functions carefully to reduce that effect as much as possible. Two different approaches have been offered over the years to the problem of optimization of superoscillations [8], [12] in the absence of constraints, and one [6] in the presence of constraints, which will be discussed in relation to ours as we proceed.
In contrast to the last three references, we prefer to work with periodic functions, which will thus be described by a finite number of Fourier coefficients. Periodic superoscillating functions have been studied in the past, e.g. in [23], but in a different direction from the work presented here. The choice of periodic functions, we believe, has a number of advantages. First, it is clear that it is more difficult to achieve superoscillations with a finite number of degrees of freedom than with the infinite number of degrees of freedom encoded in the Fourier transform of a band-limited non-periodic function. Thus, achieving superoscillations with a finite number of degrees of freedom is more challenging, yet it enables a clear view of questions concerning the total number of oscillations versus the number of degrees of freedom.
Second, we obtain an easy and practical way of constructing optimal superoscillations.
In the following we first discuss superoscillations in one finite subinterval. This restriction is not essential, and towards the end we show that generalizing to more than one subinterval is straightforward and requires no conceptual modifications. We will also comment on higher dimensions.
Consider the function

f(t) = ∑_{n=0}^{N} A_n cos(nt).   (1)

Choose an interval [0, a] with a < π. Impose on the function f(t) M constraints inside the interval, i.e.
f(t_j) = µ_j for 0 ≤ t_j ≤ a and j = 0, …, M − 1. The constraints result in a set of M linear equations in the N + 1 unknowns A_n, of the form

∑_{n=0}^{N} C_{jn} A_n = µ_j,  where C_{jn} = cos(n t_j).   (2)

Note that imposing the M constraints described above generically results in M independent linear equations. However, if these equations are not independent, one can eliminate one (or more) constraints such that the eliminated constraints are satisfied automatically when the other constraints are imposed.
Therefore, without loss of generality, we will assume in the following that we are dealing with M independent constraints, which yields a non-singular matrix C_{jm} (i.e., a matrix of rank M).
Therefore, this set of equations has no solution for M > N + 1, has one solution for M = N + 1, and has a whole space of solutions for M < N + 1. In particular, we can choose

t_j = ja/(M − 1),  µ_j = (−1)^j,  j = 0, …, M − 1,   (3)

so that f changes sign M − 1 times inside [0, a] and hence oscillates there with frequency ω = π(M − 1)/a. It is thus clear that the frequency of oscillation within the interval [−a, a] can be increased indefinitely just by decreasing its size. Therefore, although to have a solution at all we need M ≤ N + 1, the ratio between ω and N, the largest frequency appearing in the Fourier series, can be made as large as we want by decreasing a. Thus it is not a problem at all to obtain superoscillations. This comes at a cost, of course. First, we can obtain superoscillations with a prescribed frequency ω only within an interval [−a, a] with a ≤ πN/ω, and, as stated before and as will be demonstrated in the following (see Fig. 1 below), the amplitude in that region is extremely small compared to the rest of the signal.
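To make the construction concrete, here is a minimal numerical sketch (assuming NumPy is available) of the interpolation construction just described, with M = N + 1 so that the linear system (2) has a unique solution. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

N = 10                       # highest harmonic in the series, Eq. (1)
M = N + 1                    # number of constraints: unique solution of Eq. (2)
a = 0.3                      # half-width of the superoscillating region

n = np.arange(N + 1)
t_j = np.linspace(0.0, a, M)           # constraint points
mu = (-1.0) ** np.arange(M)            # alternating signs force oscillation

C = np.cos(np.outer(t_j, n))           # C[j, n] = cos(n * t_j)
A = np.linalg.solve(C, mu)             # Fourier coefficients A_n

def f(t):
    """Evaluate the band-limited signal f(t) = sum_n A_n cos(n t)."""
    return np.cos(np.outer(np.atleast_1d(t), n)) @ A

# The local oscillation frequency ~ pi*(M-1)/a greatly exceeds the band limit N.
print("effective frequency:", np.pi * (M - 1) / a, " band limit:", N)
print("constraints reproduced:", np.allclose(f(t_j), mu))
```

Evaluating f outside [−a, a] shows the huge non-oscillating amplitude that motivates the yield optimization discussed next.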
Next, we would like to optimize our superoscillating function for fixed a and M < N + 1, but first we have to decide in what sense we want to optimize it. Ferreira and Kempf [6], [12] consider the energy of the signal, E = ∫_{−∞}^{∞} f²(t) dt, use the fact that f is band-limited, and minimize the energy under the interpolation constraints (Eq. (3)). We believe that for many applications the right quantity to maximize under the constraints is the superoscillation yield, rather than the total energy. (Note that, as will become evident in the following, the yield is not just a function of ω but of M and a separately.) For the discrete case described in (1) we take, instead of the energy, which is infinite, the energy per period. Thus the superoscillation yield that we maximize under the constraints is

Y = ∫_{−a}^{a} f²(t) dt / ∫_{−π}^{π} f²(t) dt,   (5)

or, in terms of the Fourier coefficients,

Y = I/D,  with I = ∑_{m,n=0}^{N} A_m ∆_{mn} A_n  and  D = 2π A_0² + π ∑_{n=1}^{N} A_n²,   (6)

where the entries of the matrix ∆, which correspond to the choice of the interval [−a, a], are given by

∆_{mn} = sin((m−n)a)/(m−n) + sin((m+n)a)/(m+n)  for m ≠ n,
∆_{mm} = a + sin(2ma)/(2m)  for m = n ≠ 0,   ∆_{00} = 2a.   (7)

Note that a general formula for ∆_{mn} in any domain is given later in Eq. (19). It is convenient to rotate the degrees of freedom, B = Γ̂A, with Γ an orthogonal matrix adapted to the constraint equations (2), with the obvious advantage that the last M degrees of freedom in B are constrained independently of each other and are equal to linear combinations of the µ_j's; we denote these constrained components by μ̃_j. The numerator I in (6) can now be written in terms of the rotated degrees of freedom B as I = ∑_{m,n=0}^{N} B_m (Γ̂∆Γ)_{mn} B_n, where Γ̂ is the transpose of Γ. The superoscillation yield, expressed in terms of the unconstrained B's (Eq. (10)), can then be extremized. Differentiating the yield with respect to B_m and equating to zero yields a linear system of the form (∆ − Y·1)B = Γμ̃, where 1 is the unit matrix and μ̃ is the vector of the μ̃_j's. It is thus clear that the components of the vector B depend on I and on D only through the ratio Y = I/D. Those components are, by Cramer's rule, a ratio of two determinants. The determinant in the denominator, det(∆ − Y·1), is clearly a polynomial in Y. For each entry of B, the determinant in the numerator is that of the matrix obtained from (∆ − Y·1) by replacing one of the columns by the vector Γμ̃. Therefore B_m, the m'th component of the vector B for which the yield is extremal, is given explicitly by this ratio of determinants (Eq. (12)). Clearly, an equation for the superoscillation yield can be obtained by plugging the right-hand side of (12) directly into the right-hand side of (10). We prefer, though, a different route that yields a simpler form of the equation: the expression ∂Y/∂B_m is identically zero by the extremum condition, and this results in a simplified, generalized-eigenvalue equation for Y. In Fig. 3 we give the different eigenvalues for fixed N and M as a function of a. In Fig. 4 we give the eigenvalues for fixed a and N as a function of i for various values of M. This gives not just an inequality but a full quantitative picture for small (not necessarily very small) a. Note that in both figures we divide each eigenvalue λ_i by the factor a^{4(N−i)+5} to highlight the small-a behaviour of these eigenvalues. This factor is inspired by the small-a behaviour of the eigenvalues of ∆ (as shown in Refs. [12], [30], for example); the flat nature of the curves in Fig. 3 verifies this behaviour. The case M = 1 imposes no oscillation at all on the function in the superoscillating region. Since optimal f's, which are necessarily symmetric, are not expected to vanish at the origin, this implies that we are then not constraining the function at all, and the maximization of the yield is equivalent to maximizing it under the requirement that the total energy is normalized. This yields a set of N + 1 ordinary eigenvalues and eigenvectors.
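For readers who want to experiment, the following sketch maximizes the yield (6) numerically rather than through the determinant equation for Y derived above: it eliminates the constraints (2) with a particular solution plus a null-space parametrization and then optimizes the resulting ratio of quadratic forms. The parameter values, and the use of SciPy's general-purpose optimizer in place of the generalized eigenvalue formulation, are choices made here purely for illustration.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

N, M, a = 12, 5, 0.3
n = np.arange(N + 1)
t_j = np.linspace(0.0, a, M)
mu = (-1.0) ** np.arange(M)
C = np.cos(np.outer(t_j, n))                  # constraint matrix, Eq. (2)

def delta_interval(N, a):
    """Delta_mn = integral over [-a, a] of cos(mt) cos(nt) dt, Eq. (7)."""
    D = np.empty((N + 1, N + 1))
    for m in range(N + 1):
        for k in range(N + 1):
            if m != k:
                D[m, k] = (np.sin((m - k) * a) / (m - k)
                           + np.sin((m + k) * a) / (m + k))
            elif m == 0:
                D[m, k] = 2.0 * a
            else:
                D[m, k] = a + np.sin(2.0 * m * a) / (2.0 * m)
    return D

Delta = delta_interval(N, a)
P = np.diag([2.0 * np.pi] + [np.pi] * N)      # full-period energy matrix, Eq. (6)

A_part = np.linalg.lstsq(C, mu, rcond=None)[0]  # one solution of C A = mu
Z = null_space(C)                                # the remaining freedom

def neg_yield(v):
    A = A_part + Z @ v
    return -(A @ Delta @ A) / (A @ P @ A)

result = minimize(neg_yield, np.zeros(Z.shape[1]))
print("optimized superoscillation yield:", -result.fun)
```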
In the corresponding continuum problem the number of eigenvalues is infinite and the eigenvectors are the prolate spheroidal wavefunctions of Slepian and Pollak [8]. (Ref. [10] mentions a discrete case, but it is a totally different discreteness from the one we study, i.e., in equation (1).) It is interesting to note that the signals obtained by the discrete Ferreira-Kempf procedure [6], [12] (who consider a similar question) do not belong to the family of functions described above. Those signals are obtained by minimizing our denominator D (defined in (6)) under the same oscillation constraints we use (i.e., equation (3)); it is thus clear that those signals correspond to the minimum-energy solution of the constraint equations rather than to an extremum of the yield. We now go back and discuss briefly the case of superoscillations in more than one subinterval. Consider the same expansion as in Eq. (1) and let D be the domain (which can, in general, be multiply connected) over which we are interested in having superoscillations. We would like the expansion to be such that the energy of the signal in D is maximized compared to the total energy of the signal in (−π, π). This implies maximizing the following superoscillation yield, which generalizes Eq. (5):

Y(D) = ∫_D f²(t) dt / ∫_{−π}^{π} f²(t) dt.

Representing the signal using its Fourier decomposition (1) gives the same matrix form as in (6), where the matrix ∆ is now D-dependent, namely

∆_{mn}(D) = ∫_D cos(mt) cos(nt) dt.   (19)

From this point on, the whole approach developed earlier carries over exactly, with the only difference being that the more general matrix ∆_{mn}(D) is used instead of the one given by Eq. (7).
In order to demonstrate how this works, in the following we focus on the domain D = (−b, −a) ∪ (a, b), in which case the matrix ∆ becomes

∆_{mn}(D) = 2[−m cos(an) sin(am) + m cos(bn) sin(bm) + n cos(am) sin(an) − n cos(bm) sin(bn)]/(m² − n²)  for m ≠ n,
∆_{mm}(D) = (b − a) + [sin(2mb) − sin(2ma)]/(2m)  for m = n ≠ 0,   ∆_{00}(D) = 2(b − a).

We impose the following constraints inside (a, b), namely f(t_j) = µ_j (j = 0, …, M − 1), with the t_j and µ_j chosen within (a, b) in the same manner as before. Generalizing this discussion to other types of expansions (e.g., an expansion that includes the sine functions) or to higher dimensions (for an application see Ref. [31], for example) is also straightforward, and only requires stating the desired basis for the expansion (which fixes the basis functions), the desired domain for superoscillations (which fixes ∆, similar to Eq. (19)), and the desired constraints (which fixes the t_j's and µ_j's). Other than that, the concepts and the algorithm itself remain the same.
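As a check of the two-interval matrix above, one can also build ∆(D) by direct numerical quadrature; the short sketch below does exactly that, with illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad

def delta_two_intervals(N, a, b):
    """Delta_mn(D) for D = (-b, -a) U (a, b), by numerical quadrature."""
    D = np.empty((N + 1, N + 1))
    for m in range(N + 1):
        for n in range(N + 1):
            value, _ = quad(lambda t: np.cos(m * t) * np.cos(n * t), a, b)
            D[m, n] = 2.0 * value    # the symmetric domain doubles the integral
    return D

print(delta_two_intervals(4, 0.2, 0.5))
```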
To conclude, we have shown in this paper how to obtain yield-optimized superoscillating signals that allow a gradual trade-off between the superoscillation yield and the quality of the signal. In particular, we have shown how the constrained optimization of the energy ratio can be formulated as a generalized eigenvalue problem. This problem coincides, in the unconstrained case, with the standard eigenvalue problem of the operator ∆ that gives rise to the prolate spheroidal wavefunctions [8]. The approach presented here allows one to shape these wavefunctions in ways that may be appealing to applications, such as superresolution [14]-[20], compression [23], supergain [21], [22], or band-limited interpolation [28], [29], where the possibility of improving the shape of such superoscillating signals may turn a beautiful idea into a useful application, which otherwise might remain impractical.
Since our optimization process is based on a specific way of constraining the signal to produce superoscillations, given by equation (3), improvements of the yield and/or the superoscillation quality may be expected, and will be discussed in future publications, as will generalizations to other expansion bases and to higher dimensions. It is also of interest to obtain rigorous estimates of the optimal yield for superoscillating signals in the presence of many Fourier components (N ≫ 1) and a large number of constraints (M ≫ 1).
Academic mothers with disabilities: Navigating academia and parenthood during COVID‐19
Abstract Academic mothers (including nonbinary, trans, and genderqueer parents) have always faced challenges in their profession due to systemic barriers and a “motherhood tax”; however, COVID‐19 has exacerbated already existing inequalities (Oleschuk, 2020). This study examines how the pandemic has affected academic mothers with mental health and physical disabilities, as these voices often remain hidden and unheard in academia despite increased awareness of their presence (Brown & Leigh, 2018; Kelly & Senior, 2020). Here, we share the voices of 23 participants using a qualitative methodology drawing from social justice and feminist theories to highlight the lived experience of academic mothers with mental and/or physical disabilities and their experiences as a scholar and parent during COVID‐19. Understanding the lived experience of this intersectional population can provide invaluable insights into ableist privilege within higher education, especially in the context of COVID‐19 which has substantially disrupted work and homelife routines.
| INTRODUCTION
Academic mothers¹ have historically faced challenges in their profession due to systemic barriers within academia and a "motherhood tax"; however, COVID-19 has exacerbated already existing inequalities (Oleschuk, 2020). Studies report that academic mothers, especially those with young children, are experiencing a significant decline in research productivity and output during the pandemic (Gabster et al., 2020; Myers et al., 2020). A recent study about the lived experience of academic mothers during COVID-19 discusses the potential of a "feminist parental ethics" (Kelly & Senior, 2020), where the question of "who is caring for the parents" comes to the forefront. Due to the increased childcare and homeschooling responsibilities combined with working lower-paying and less secure jobs, academic mothers are particularly vulnerable during the pandemic and drop out of the workforce at higher rates than men (Collins et al., 2020; Power, 2020). Those who continue to work in academia despite these increased responsibilities experience competing demands on their limited resources with little relief from their institutions.
The intersection of motherhood with other demographics further exacerbates these challenges and adverse consequences to women's careers. We are particularly interested in the intersection of academia, motherhood, and mental and physical disabilities, as these voices often remain hidden and unheard in academia despite increased awareness of their presence (Brown & Leigh, 2018; Burk et al., 2021). In recent years, research has begun to look at the relationship between multiple marginalized identities within academia and experiences of discrimination and oppression, as well as the need for a policy framework to address such disadvantages (Liasidou, 2014). Specifically, researchers have started calling for transformations of the academic profession in the wake of COVID-19, advocating for fostering an ethics of care (Corbera et al., 2020; Miller, 2021) and more inclusive environments (Maas et al., 2020). However, much remains to be uncovered when it comes to the challenges and barriers academic mothers with disabilities experience in this context. The COVID-19 pandemic presents a unique opportunity to understand these challenges and can therefore function as a catalyst to spark change and potentially break down the systemic and situational barriers that academic mothers, especially those with disabilities, face in their ability to work and excel in their profession relative to their colleagues.
Moreover, given the dearth of research on disability issues within the field of mental health, understanding the lived experience of academic mothers can provide invaluable insights into ableist privilege as it plays out within the profession. Studying disability is a "prism through which one can gain a broader understanding of society and human experience" (Linton, 1998, p. 118), including that of the impact of COVID-19 on the overall well-being of academic mothers. Given that COVID-19 has disrupted careers as well as routines for motherscholars, especially those with young children, we would expect an impact on their physical and mental health. The intricacies of this impact, however, have yet to be uncovered. Finally, we must consider how academia itself, as an institution, systemically exacerbates the aforementioned struggles, and further disables those with disabilities when accommodations are not provided to working parents with children (Brown & Leigh, 2018; Inckle, 2018). There is a paucity of data on academic parents with disabilities; however, we do know that mothers with mental health disabilities tend to be highly stigmatized.
To address these issues and gaps in our understanding, our qualitative study examines how the COVID-19 pandemic has affected academic motherscholars with mental health and physical disabilities. Overall, we found that the shift of resources towards caretaking, increase in health issues, and lack of accommodations by academic institutions resulted in many participants describing a loss of identity, either as an academic scholar or mother. To illustrate, one of our participants, Jo, described how her sense of identity as an academic scholar shifted after birth and during the pandemic. She wrote: "In becoming a mother, I had not considered the overlap in my identities as both academic and mother to become a motherscholar. I struggle a lot as both a mother and scholar with self-doubt and crippling anxiety.
I wonder if I'm doing enough in either role and if I'm doing each role correctly. Is my work good enough? Am I parenting well-enough? The constant self-evaluation and doubt are challenging. " Jo's example here highlights many of the struggles that our participants face, with regard to changes in identity as it relates to being an academic mother with a disability. We now turn to our methodological approach, which is rooted in feminist and social justice theory in the service of bringing to light the lived experience of academic mothers with disabilities during the pandemic. We then conclude with recommendations for changes that could be implemented as a result of our qualitative findings.
| METHODOLOGICAL APPROACH
Participants are from a research collective ("Motherscholar Collective") that formed in the summer of 2020 and includes motherscholars with children born between 2017 and 2020, including the authors of this manuscript. The objective of the Motherscholar Collective was to engage in meaningful scholarship while coping with the stressors of being an academic parent of young children. Participation in this collective and in data collection was voluntary and anonymous as each response was tied to a pseudonym chosen by the participant. Given the intent of the Motherscholar Collective, the authors are therefore participant-researchers; however, not all participants are involved in this study as researchers.
Our study used a flexible, reciprocal methodology drawing from aspects of social justice and feminist theories (Ackerly & True, 2020; Hesse-Biber, 2014). In addition to the theory surrounding our research principles, we specifically chose a narrative inquiry method of data collection, using journaling prompts as a means to invite participants to share their experiences as a motherscholar with a disability during the pandemic. A social justice approach ensures that motherscholars from our collective have opportunities to join in on research projects at various stages, thereby honoring inevitable work-life commitments that arise throughout the research process. Additionally, this approach also allows historically marginalized groups, such as those with disabilities, to become part of the research project and design, thus providing a sense of empowerment (Lyons et al., 2013). A feminist approach further ensures that a variety of voices are heard across the motherscholar spectrum.
To best capture the lived experience of being a motherscholar with a disability during the pandemic, and to respect the various at-home demands experienced by many motherscholars, we invited participants to write about their experiences as a parent within academia. We then analyzed these written accounts using a thematic analysis approach (Braun & Clarke, 2006).
| DATA COLLECTION
The data set examined here comes from the Motherscholar Collective's larger, ongoing research project on the impact of COVID-19 on academic mothers with young children. We collected data over a period of five months in late 2020.
All participants were asked to create a pseudonym that they used throughout the study, and only the lead researcher on the overall project had access to identifying information. The first survey asked for basic demographic information (known as the "demographic survey"), tapping into participant identities related to age, gender, race, parenting status, income, employment status, and academic duties before and after the onset of the pandemic. Importantly, this survey asked participants whether they experienced a disability or chronic condition, and was used to form our initial group of participants.
Participants were then invited to write about their experiences as a parent with a disability or mental health condition during the pandemic in a second survey. This open-ended qualitative survey (known as the "mental health and disability survey") was available to anyone who had a physical or mental disability or condition, regardless of official diagnosis and treatment. Participants were asked to describe their experiences as a motherscholar with a disability or chronic condition; how the disability or chronic condition impacted their motivation, ability, and/or opportunity to manage both work and life at home; the challenges during the pandemic they have experienced that were exacerbated by having the disability or chronic condition; how they managed and coped with such challenges; and, finally, how various intersectional identities impacted their life during the pandemic.
We then conducted a thematic analysis (Braun & Clarke, 2006) of 23 unique survey responses in the service of understanding the lived experience of participants at the intersection of parenting, academia, and disability. Utilizing a thematic analysis allowed us to identify both common and diverging themes across the responses, a particularly useful method for understanding participant views, opinion, knowledge, experiences, or values from a set of qualitative data (Creswell & Poth, 2017). We used an inductive approach (i.e., we allowed the data to determine themes) with a focus on semantic analysis and storytelling.
| PARTICIPANT DEMOGRAPHICS
Our data set included 23 unique respondents who met the following criteria: identified as a woman, transwoman, genderqueer, or non-binary; worked in higher education; and self-reported a mental health condition and/or a physical disability. Regarding the diversity of our participant demographics, three respondents identified as a person of color; six identified as lesbian, gay, bisexual, or queer; and two identified as genderqueer or nonbinary. Two participants identified as a single parent. All participants had at least one child under two years of age, and six identified as having more than one child. Participants lived in the US, mostly in the Northeast or Midwest. With respect to their academic careers, participants were a mix of assistant and associate professors, with seven in tenured positions.
In this sample the most prevalent mental health conditions were anxiety, depression, postpartum anxiety, or postpartum depression (occurring in 83% of the sample). The most prevalent physical condition was an autoimmune disorder, followed by a sensory disorder, with physical conditions occurring in 33% of the sample. Therefore, the data underlying our analysis reflects both mental and physical disabilities but primarily mental ones.² Next, we discuss the limitations of our data collection and then turn to presenting our findings.
| LIMITATIONS
While our study covers novel ground and provides a greater understanding of the experiences of scholars in academia during COVID-19 at the intersection of motherhood and disabilities, we note a few limitations regarding data collection. As previously mentioned, we used feminist and social justice approaches to be inclusive; at the same time, we are mindful of the context in which this data was collected (the pandemic). Therefore, not all participants were able to complete all questions in our surveys. One reason for these incomplete responses may be the increased labor in taking time to respond to an open-ended survey during the pandemic as opposed to a survey using a closed-ended question format. Additionally, some respondents may have had additional caregiving, service, or work-related responsibilities, limiting the time that respondents were able to take to complete the questions. Not surprisingly, responses varied in depth, and some surveys were started and not completed. This pattern further highlights the impact of the pandemic on navigating multiple demands, including completing interview questions.
Although the participants' responses provided insight into the challenges faced by motherscholars in the Motherscholar Collective, these experiences may be limited in the extent to which they can be generalized to a larger population of academic mothers. The majority of the respondents were white and in heterosexual marriages, and few participants discussed the impact of race on their experiences. To provide greater insight into the experiences of motherscholars, future research could use in-depth interviews or add modifications, such as further questions, to capture additional data.
| FINDINGS
Below, we present four qualitative themes that emerged from our thematic analysis as they relate to the topic of this paper. Please note that participants are referred to by pseudonyms.
6.1 | "No one was available to help us": Increased isolation resulting from managing a disability during the pandemic
Generally, many respondents felt isolated during the pandemic because of the need for social distancing and working from home, which, in turn, impacted the lived experience of having a disability. For motherscholars with disabilities, this sense of isolation increased as they were juggling increased caretaking roles at home and working full time, often without needed support and accommodations. In turn, the lack of support and accommodations increased the stress, depression, and other mental health concerns of our participants.
Vanessa, a white married mother-scholar in her late thirties, noted that her "anxiety and OCD contributed to [her] stress of juggling both jobs (mothering and academia)" from home, which was also compounded by social isolation and "digesting all the crazy societal events of the year. " She described how "[managing] dynamics around [her] husband and [their] extended family's safety choices regarding COVID-19" meant less time for herself, which in turn, increased her depressive symptoms. Similarly, Alex, a bisexual woman in her mid-thirties with a young child, described how her depression was impacted by the closing of her son's daycare during the beginning of the pandemic. She wrote: "I was so depressed when my husband and I were the sole caretakers of my son while daycare was closed. Every day was the same. No one was available to help us. Occasionally we would visit with friends outside, but other than that, we didn't see anyone. My family lives 3.5 hours away and while we would Zoom with my parents every day, it wasn't the same as having an extra set of hands to help with a busy toddler. " Alex noted that getting ongoing help for depression was also isolating as her providers were less available due to increased demand on medical resources. Some participants described how these issues were present pre-pandemic and COVID-19 only exacerbated them. Heather, an Asian American married mother of two, with a history of depression and anxiety, noted that her mental health conditions made it difficult for her to be social and bond with her first child. Since the pandemic, her anxiety has worsened. She wrote: "It has been a cycle of isolation because I don't want to socialize with people on Zoom or whatever. Anxiety has been difficult to control since there are so many things out of my sphere of influence -there has been complete terror (beginning of the pandemic) to depression (long months into the pandemic). " For mothers with physical disabilities, the pandemic presented increased challenges with navigating everyday situations, which compounded mental health challenges. Kai, who identifies as white, nonbinary, and queer, is also deaf and relies on lipreading and/or American Sign Language (ASL). She described the challenges of accessing healthcare services during the pandemic: "The wearing of masks during COVID-19 makes it impossible for me to lipread conversations, which is my primary mode of communication. For this reason, I've had to do the labor of seeking out accommodations (such as ASL) when medical centers do not have the resources or time. It also meant that my wife (who is hearing) attended our daughter's medical appointments due to accessibility reasons. I often felt left out, which worsened my postpartum anxiety. " For single parents the social isolation during COVID-19 was frightening. Jessica, a white single mom and tenure track assistant professor, wrote: "It is just my son and I and there were definitely times, especially early on in the pandemic, where I freaked out about the fact that something could just happen to me and no one would know since we live alone. It sucks to be this detached from a support system with no one checking up on you regularly. " Without local family, or clear institutional support, single parents were left to navigate the pandemic and resulting isolation alone without backup care.
As described by Heather above, the "cycle of isolation" was deeply felt by academic mothers with disabilities. Accommodations were unavailable or difficult to find given the need for masking and social distancing, and the emotional labor of trying to find accommodations only increased the stress of these motherscholars.
6.2 | "I always go last": Shifting and navigating priorities for home, health, and work to manage disabilities
As portrayed by the participants, the pandemic contributed to an increase in demand for resources, rendering a work-life balance nearly impossible to manage, especially in the presence of a disability. At the same time, needed resources to manage stressors and care for oneself were reduced due to the effects of the pandemic, as we described in the previous theme of isolation from social, health, and institutional resources. Specifically, participants had to shift their priorities to manage their responsibilities. This shift was more successful for some than others, and many still struggled with their health regardless of success and despite seeking help. Participants discussed the impact of disability on motivation and ability to find a workable and sustainable work-home life balance since the onset of COVID-19, with the result being a need to shift priorities.
Jessica noted that since the pandemic she has had to shift her priorities to tackle the many demands of her job; in doing so, she was able to prioritize her own mental health as well as that of her students in a mutually beneficial way.
For others, this shift was impossible due to a lack of necessary medical support. The bulk of participants described feeling "unheard" and "not understood" by others (family, friends, supervisors, and doctors). Aline, a white partnered mother of a young infant, shared the following story: "I went to my yearly doctor's appointment and sobbed because I was so depressed. I had a 30-min session with a mental health counselor who told me to do more deep breathing and watch less news. I felt unheard…and wanted to meet with a psychologist who practices cognitive-behavioral therapy… but my provider's coordinator couldn't come through for me. " Aline also noted that since the pandemic she has come to experience the following pecking order: "kids come first, then academic research, then I come last. " The shift of priorities to care for family in times of increased stress also resulted in an exacerbation of mental health issues. Participants had to make difficult decisions to support their health needs; yet many still suffered and felt they could not do everything that was required of them as parents and scholars. A Latinx motherscholar, Pau, has a history of postpartum depression which worsened during the pandemic. Since the birth of her child, she has experienced ongoing guilt in her roles as a mother, academic, and person with depression. She wrote: "I question my ability to be both a good mother and a good scholar. I constantly feel guilty for not doing 'more' for my son and yet I feel I'm not as productive with work as I should be. I also ended up getting COVID and felt like I couldn't care for my son well enough when I was so sick. " Navigating health needs caused participants to balance their needs with their children's, analyzing the risk of getting (or foregoing) the care they required. Paige, a white married mother of two in her early thirties, described the intersection of breastfeeding and anxiety during the beginning of the pandemic, and wrote about how this anxiety impacted the choices she made. She described choosing to prioritize her own health, which would allow her to better function for herself and her family. She wrote: "I was on anti-anxiety medication during the first 2 months of the shutdown. My hormonal insomnia flared up in March, so I stopped breastfeeding so I could take my sleep aid. Thus, my body went into a lurch suddenly weaning, so I required an anti-anxiety medication to help curb the panic attacks I began having. " Another participant, Denise, also a single parent, wrote about the intersection of meeting her own health needs for rheumatoid arthritis with that of caring for her son. She wrote: "Every time I need a medication I have to do the calculus of whether I need it so acutely that it's worth taking my son into a risky area for a minute (since he can't wear a mask yet), or if I can go an extra day or two until he has care. " These difficult situations were frequent throughout the pandemic as both healthcare support and childcare were difficult to find. Thus, the aforementioned theme of isolation further exacerbated our participants' assessment of risk and need to prioritize the well-being of their children over their own.
6.3 | "We are expected to support our students but we don't get support in return": The systemic oppression of academia and its failure to support parents
Participants further discussed how their struggles as motherscholars with disabilities during the pandemic were exacerbated by a lack of support from their academic institutions, which, historically, have oppressed minority groups.
Paige noted that the "stigma attached with mental health is very difficult in academia. " Jo, who is white, partnered, and queer, noted that her institution has failed to acknowledge "that parents may be struggling with lack of childcare or tough childcare decisions. " She went on to say, "We are certainly expected to make allowances for and support students but we don't get support in return. I don't see the same care being given to faculty members. " She described the experience as "disappointing. " Participants described how their academic institutions frequently placed demands on them to support their students above their own health. Lizzy, a white assistant professor in her mid-30s and parent of a young baby, noted that "having to constantly check on students who are not engaging with online classes has increased [her] anxiety, especially as mental health has always been an issue for students at our institution. " Kai noted that her workload increased during the pandemic due to heightened student needs, especially for LGBTQ+ students who were sent home and did not have support from family members. They wrote: "Students were reaching out to me on a daily basis for support, and yet I was still being asked to teach three classes, supervise, and do research and service, as normal. Yet this was anything but normal. " Participants also described the failure of academic institutions to support COVID-19-related safety practices.
Alex, who describes herself as being "sensitive to germ-y surfaces" even pre-pandemic, wrote about feeling unsupported by her institution, especially regarding safety. She wrote: "I didn't want to look like a 'crazy' person in front of my students but I couldn't help but sanitize my hands after touching everything in the classroom. Touching and storing my mask after taking it off was anxiety-provoking…I found myself getting really irritated with my college's facilities folks when they asked why I would need cleaning supplies for my work station. They kept telling me there would be cleaning supplies in the classrooms but there weren't. " The repetition of this anxiety-provoking situation, which occurred each time Alex came to campus, impacted the overall quality of her life.
Single parents also described the challenges of balancing parenting and academic work with a disability. Denise, a single mother in her early 40s with a history of rheumatoid arthritis diagnosed in graduate school, described the ways she had to cope with her physical condition such as completing all academic work way in advance of deadlines. She wrote: "I have gotten used to working around my condition. For example, I never ever let things go until the last minute (like grant applications) because I can't be completely confident that at the last minute I'll feel well enough to do it, and that has worked for me along with other adaptations. " Since the pandemic, however, Denise has found it harder to meet deadlines, especially with no support from her institution.
Despite the isolation, lack of resources, and lack of institutional support, participant stories often showed glimpses of resiliency and a sense of community, as described in our final theme.
6.4 | "We just need to get through this": Resilience and working for change
The theme of resiliency and working for change was the undercurrent of many of our participants' stories. Participants discussed getting support from an online support group of academic mothers, lowering expectations, focusing on the good in their relationships, and building advocacy as ways to reduce stress, depression, and isolation.
Participants noted their appreciation of support at home and the Motherscholar Collective. Monroe, a white queer-identified parent in their mid-30s, noted that they "try to experience gratitude for what I do have -my partner and I have been fighting more as a result of being home and being around each other all the time and I'm trying to step back and appreciate her for all that she does for our family and for me." Other participants described the merging of identities and how being part of the Motherscholar Collective served as a source of support, as exemplified in this quote from Vanessa: "I think I am starting to merge and appreciate the overlap in my identities as mother and scholar. The roles themselves are challenging to navigate in that both require so much of me and so much time. I wonder if the support of various academic mama groups online has helped with the merging of those identities, in seeing other mamas navigate their roles and identities with such grace." Some participants described how various aspects of their identities came to the surface during the pandemic, particularly with respect to disability advocacy in the workplace. Alex wrote: "Seeing women (colleagues and friends) drowning under all the work makes me furious. I've gotten more bold in emails and Zoom meetings when I believe there is pressure to do unnecessary work. I'm pursuing policy development to help mothers and parents in academia [with disabilities]. I'm reminding colleagues that now is not the time to 'function as usual' . " This relates to Jessica's shift in priorities, as previously discussed, which also involved advocating for less pressure on everyone during the pandemic. Jessica wrote, "It's not the time to make things difficult on either the students or myself. All of us just need to get through this. " By striving to see the positive in their relationships and working to create change in their working styles and institutions, academic mothers show their resiliency in working and mothering through the pandemic.
| DISCUSSION
Taken together, our results show that participants had to make difficult choices during the pandemic related to disruptions in childcare and routines and isolation from resources that typically would help overcome these disruptions.
These choices included concentrating less on teaching and/or research and saying no to career opportunities due to lack of time, which in turn impacted mental and physical health as well as the motherscholar's sense of identity across the domains of parenting, academia, and relationships.
These disruptions were further exacerbated by a lack of accommodations from academic institutions and support from family, spouses, and partners regarding physical and mental health conditions. Many participants discussed the difficulties of balancing mental and physical health needs with being a stay-at-home parent (due to the pandemic making childcare unavailable or unsafe) while also working full-time. For some participants, the combined identities of being an academic and a parent with a disability during COVID -especially for those who did not have childcare -meant that the academic side suffered as caretaking responsibilities were prioritized. These conflicting roles caused participants to struggle finding time for their children, partners, and work at the same time as demands for both increased during the pandemic. All participants felt they could not adequately fulfill all their roles and wished for more institutional support, and almost all participants noted a shift in identity. Ongoing mental and physical health suffered as a result of this lack of support and time; additionally, many participants cited the onset of anxiety and depression as a result of the pandemic and difficulty accessing high quality mental healthcare remotely. While some participants were able to easily access telehealth therapy, others struggled to find competent and available providers in their location. Given the aforementioned findings, we now discuss the implications of our study and recommendations that may be useful for academic workplaces to consider when implementing resources.
| IMPLICATIONS AND RECOMMENDATIONS
The challenging experiences of academic mamas with mental health and physical disabilities during the pandemic identified in this project highlight the need for additional accommodations and support in higher education to ensure that these academic mothers are not left behind. This work further underscores the need for long-term policy reform such that academic workplace structures become more equitable and resilient to external shocks that could otherwise widen existing inequalities as we have observed during the pandemic.
Based on the themes that emerged from our data, we see an opportunity for academic workplaces to cultivate positive changes that support academic mothers who have disabilities. All of these suggestions would also benefit students which, in turn, may increase enrollment and retention as students with disabilities will be better supported.
These suggestions for creating positive, supportive work environments are:
• More support from teaching and learning centers on how to structure courses (especially online and hybrid) for efficiency and inclusivity. This inclusion will benefit both students and instructors, as noted by our participant Jessica.
• More support for physical disabilities -access to captioning services or sign language interpreters for deaf and hard of hearing instructors and extended time to complete tasks to allow for times when it is difficult to work for health reasons. Allowing more time to meet deadlines also accommodates fluctuating childcare availability.
• Increase in access to mental health support for faculty through the workplace such as counselors who are specifically knowledgeable about disabilities and academic job stress/unique challenges. Providing counselors through the university counseling center or EAP who specialize in faculty experiences would likely result in an increase in coping skills and decrease in mental health struggles within this population.
• Acknowledgment of additional stress caused by intersectional identities and effort to provide support and relief to acamamas with disabilities facing racism, homophobia, and/or transphobia. As noted by Manchanda (2020), accommodations and supports for disabilities must also be inclusive and anti-racist.
• Taking a "universal design"³ approach so more people are supported without having to out themselves (Goldsmith, 2012). This proactive approach will accommodate those who develop a disability later in life, and those who do not realize they would benefit from support (Hamraie, 2017).
• On-site childcare, or employer-supplemented childcare. This will ensure that motherscholars can better focus on their health and their work knowing that their children are safe and cared for during work hours.
• Stronger "listening" procedures to capture and include the voices of motherscholars with disabilities, especially in times of crises such that these voices are represented and included in crisis decision-making (and decision-making in general).
| CONCLUSION
As the stories above have demonstrated, being an academic parent with a disability (whether physical or mental) puts an additional demand on one's resources while, at the same time, requiring additional resources from the environment to maintain a delicate balance between parenting and career progress. The pandemic added to these demands while, simultaneously, limited access to those resources needed to manage disabilities properly, resulting in an increase in symptoms related to mental and physical health. While these are issues that academic mothers face at the best of times, the pandemic magnified those issues for all academic mothers and even more so for those with disabilities. Our stories also highlight the resiliency of academic mothers in the face of impossible choices. Although participants had to make choices that sometimes put their own needs last in service of their families and careers, most participant stories had an undercurrent of resiliency as they each found creative ways to cope.
At the time of this writing, the pandemic is waning as vaccination rates are increasing. Some participants have returned to work in-person, for many childcare is available again, and for some stress is decreasing. A few are transitioning or have transitioned out of academia; in some cases, this transition was in part a result of inadequate institutional response to the pandemic. However, participants' disabilities and their need for accommodations remain, especially as academic institutions increasingly seek to focus on diversity, equity, and inclusion. We hope that the spotlight the pandemic brought to the needs of motherscholars with disabilities will remain and that positive change will occur in academic institutions.
DATA AVAILABILITY STATEMENT
Due to the nature of participant confidentiality, research data is not shared. Please direct any questions about the research data to the principal author, Kathryn Wagner at Kathryn.wagner@gallaudet.edu.
Loop quantum cosmology of Bianchi I models
The "improved dynamics" of loop quantum cosmology is extended to include anisotropies of the Bianchi I model. As in the isotropic case, a massless scalar field serves as a relational time parameter. However, the extension is non-trivial because one has to face several conceptual subtleties as well as technical difficulties. These include: a better understanding of the relation between loop quantum gravity (LQG) and loop quantum cosmology (LQC); handling novel features associated with the non-local field strength operator in the presence of anisotropies; and finding dynamical variables that make the action of the Hamiltonian constraint manageable. Our analysis provides a conceptually complete description that overcomes limitations of earlier works. We again find that the big bang singularity is resolved by quantum geometry effects but, because of the presence of Weyl curvature, Planck scale physics is now much richer than in the isotropic case. Since the Bianchi I models play a key role in the Belinskii, Khalatnikov, Lifshitz (BKL) conjecture on the nature of generic space-like singularities in general relativity, the quantum dynamics of Bianchi I cosmologies is likely to provide considerable intuition about the fate of generic space-like singularities in quantum gravity. Finally, we show that the quantum dynamics of Bianchi I cosmologies projects down exactly to that of the Friedmann model. This opens a new avenue to relate more complicated models to simpler ones, thereby providing a new tool to relate the quantum dynamics of LQG to that of LQC.
I. INTRODUCTION
Loop quantum gravity (LQG) [1][2][3] is a non-perturbative, background independent approach to the unification of general relativity and quantum physics. One of its key features is that space-time geometry is treated quantum mechanically from the beginning. Loop quantum cosmology (LQC) [4,5] is constructed by applying methods of LQG to mini-superspaces obtained by a symmetry reduction of general relativity. In the homogeneous, isotropic cosmological models with a massless scalar field, quantum geometry effects of LQG have been shown to create a new repulsive force in the Planck regime. The force is so strong that the big bang is replaced by a specific type of quantum bounce. The force rises very quickly once the scalar curvature reaches ∼ −0.15π/ℓ_Pl² (or the matter density ρ reaches ∼ 0.01 ρ_Pl) to cause the bounce, but it also dies very quickly after the bounce once the scalar curvature and the density fall below these values. Therefore, outside the Planck regime the quantum space-time of LQC is very well approximated by the space-time continuum of general relativity. This scenario is borne out in the k=0, Λ=0 models [6][7][8][9][10][11][12][13], the Λ≠0 models [14,15], the k=1 closed model [16,17], the k=−1 open model [18], and the k=0 model with an inflationary potential with phenomenologically viable parameters [19]. Going beyond the big-bang and big-crunch singularities, LQC has also been used to argue that its quantum geometry effects resolve all strong curvature singularities in homogeneous, isotropic situations in which matter is a perfect fluid with an equation of state of the standard type, p = p(ρ) [20]. (For recent reviews, see, e.g., [21,22].) Finally, recent investigations [23,24] of Gowdy models, which have an infinite number of degrees of freedom, also indicate that the big bang is replaced by a quantum bounce.
Detailed and viable quantum theories were constructed in the homogeneous, isotropic case using the so-called "μ̄ scheme". A key open question has been whether or not the qualitative features of their Planck scale physics will persist in more realistic situations in which these strong symmetry assumptions do not hold exactly. A first step in this direction is to retain homogeneity and extend the "improved dynamics" of [10] to anisotropic situations. In the isotropic case, there is only one non-trivial curvature invariant, the (space-time) scalar curvature (or, equivalently, the matter density). In anisotropic situations the Weyl curvature is nonzero and it too diverges at the big bang. Therefore, one can now enter the Planck regime in several inequivalent ways, which suggests that the Planck scale physics would now be much richer.
In this paper we will continue the LQC explorations of this issue by analyzing in detail the simplest of anisotropic models, the Bianchi I cosmologies. (Previous work on this model is discussed below.) As in the isotropic case, we will use a massless scalar field as the matter source, and it will continue to provide the "relational" or "internal" time à la Leibniz with respect to which other physical quantities of interest (e.g., curvatures, shears, expansion, and matter density) "evolve". Again, as in the isotropic case, the framework can be further extended to accommodate additional matter fields in a rather straightforward fashion.
Although the Bianchi I models are the simplest among anisotropic cosmologies, results obtained in the context of the Belinskii, Khalatnikov, Lifshitz (BKL) conjecture [25,26] suggest that they are perhaps the most interesting ones for the issue of singularity resolution. The BKL conjecture states that, as one approaches space-like singularities in general relativity, terms with time derivatives would dominate over those with spatial derivatives, implying that the asymptotic dynamics would be well described by an ordinary differential equation. By now considerable evidence has accumulated in favor of this conjecture [27][28][29][30][31]. For the case when the matter source is a massless scalar field in full general relativity without any symmetry assumption, these results suggest that, as the system enters the Planck regime, dynamics along any fixed spatial point would be well described by a Bianchi I metric. Therefore understanding the fate of Bianchi I models in LQC could provide substantial intuition for what happens to generic space-like singularities in LQG [32,33].
Indeed, in cosmological contexts where one has approximate homogeneity, a natural strategy in full LQG is to divide the spatial 3-manifold into small, elementary cells and assume that there is homogeneity in each cell, with fields changing slowly as one moves from one cell to the next. (For an exploration along these lines in the older "µ_o scheme," see [34].) Now, if one were to assume that the geometry in each elementary cell is also isotropic, then the Weyl tensor in each cell (and therefore everywhere) would be forced to be zero. A natural strategy to accommodate realistic, non-vanishing Weyl curvature would be to use a Bianchi I geometry in each cell and let the parameters k_i vary slowly from one cell to another. In this manner, LQC of the Bianchi I model can pave the way to the analysis of the fate of generic space-like singularities of general relativity in full LQG.
Because of these potential applications, Bianchi I models have already drawn considerable attention in LQC (see in particular [35-40]). During these investigations, groundwork was laid down which we will use extensively. However, in the spatially non-compact context (i.e., when the spatial topology is R³ rather than T³), the construction of the quantum Hamiltonian constraint turned out to be problematic. The Hamiltonian constraint used in the early work has the same difficulties as those encountered in the "μ₀ scheme" in the isotropic case (see, e.g., [12], or Appendix B of [21]). More recent papers have tried to overcome these limitations by mimicking the "μ̄ scheme" used successfully in the isotropic case. However, to make concrete progress, at a key point in the analysis a simplifying assumption was made without a systematic justification.¹ Unfortunately, it leads to quantum dynamics which depends, even to leading order, on the choice of an auxiliary structure (i.e., the fiducial cell) used in the construction of the Hamiltonian framework [40]. This is a major conceptual drawback. Also, the final results inherit certain features that are not physically viable (e.g. the dependence of the quantum bounce on "directional densities" in [36,37]). We will provide a systematic treatment of quantum dynamics that is free from these drawbacks.
To achieve this goal one has to overcome rather non-trivial obstacles which had stalled progress for the past two years. This requires significant new inputs. The first is conceptual: we will sharpen the correspondence between LQG and LQC that underlies the definition of the curvature operator F̂_ab{}^i in terms of holonomies. The holonomies we are led to use in this construction have a non-trivial dependence on triads, stemming from the choice of loops on which they are evaluated (see footnote 1). As a result, at first it seems very difficult to define the action of the resulting quantum holonomy operators. Indeed this was the primary technical obstacle that forced earlier investigations to take certain short cuts (the assumption mentioned above) while defining F̂_ab{}^i. The second new input is the definition of these holonomy operators without having to take recourse to such short cuts. But then the resulting Hamiltonian constraint appears unwieldy at first. The third major input is a rearrangement of configuration variables that makes the constraint tractable both analytically, as in this paper, and for the numerical work in progress [41].
Finally, we will find that the resulting Hamiltonian constraint has a striking feature which could provide a powerful new tool in relating the quantum dynamics of more complicated models to that of simpler models. It turns out that, in LQC, there is a well-defined projection from the Bianchi I physical states to the Friedmann physical states which maps the Bianchi I quantum dynamics exactly to the isotropic quantum dynamics. Previous investigations of the relation between the quantum dynamics of a more complicated model and that of a simpler model generally began with an embedding of the Hilbert space H_Res of the more restricted model in the Hilbert space H_Gen of the more general model (see, e.g., [43,44]). In generic situations, the image of H_Res under this embedding was not left invariant by the more general dynamics on H_Gen. This led to a concern that the physics resulting from first reducing and then quantizing may be completely different from that obtained by quantizing the larger system and regarding the smaller system as its sub-system. The new idea of projecting from H_Gen to H_Res corresponds to "integrating out the degrees of freedom that are inaccessible to the restricted model," while the embedding of H_Res into H_Gen corresponds to "freezing by hand" these extra degrees of freedom. Classically, both are equally good procedures and in fact the embedding is generally easier to construct. However, in quantum mechanics it is more appropriate to integrate out the "extra" degrees of freedom. In the present case, one "integrates out" anisotropies to go from the LQC of the Bianchi I models to that of the Friedmann model. This idea was already proposed and used in [42] in a perturbative treatment of anisotropies in the locally rotationally symmetric, diagonal Bianchi I model. We extend that work in that we consider the full quantum dynamics of the diagonal Bianchi I model without additional symmetries and, furthermore, use the analog of the "μ̄ scheme," in which the quantum constraint is considerably more involved than in the "μ₀-type" scheme used in [42]. The fact that the LQC dynamics of the Friedmann model is recovered exactly provides some concrete support for the hope that LQC may capture the essential features of full LQG, as far as the quantum dynamics of the homogeneous, isotropic degree of freedom is concerned.

¹ In the isotropic case, "improved" dynamics [10] required that μ̄ be proportional to 1/√|p|. In the anisotropic case, one has three p_i and quantum dynamics requires the introduction of three μ̄_i. In the Bianchi I case now under consideration, it was simply assumed [36,37,40] that μ̄_i be proportional to 1/√|p_i|. We will see in section III B that a more systematic procedure leads to the conclusion that the correct generalization of the isotropic result is more subtle. For example, μ̄₁ is proportional to √(|p₁|/|p₂p₃|).
The material is organized as follows. We will begin in section II with an outline of the classical dynamics of Bianchi I models. This overview will not be comprehensive, as our goal is only to set the stage for the quantum theory, which is developed in section III. In section IV we discuss three key properties of quantum dynamics: the projection map mentioned above, the agreement of the LQC dynamics with that of the Wheeler-DeWitt theory away from the Planck regime, and effective equations. (The isotropic analogs of these equations approximate the full LQC dynamics of Friedmann models extremely well.) In section V we summarize the main results and discuss some of their ramifications. Appendix A discusses parity-type discrete symmetries which play an important role in the analysis of quantum dynamics.
II. HAMILTONIAN FRAMEWORK
In this section we will summarize those aspects of the classical theory that will be needed for quantization. For a more complete description of the classical dynamics see, e.g., [35-37, 45].
Our space-time manifold M will be topologically R⁴. As is standard in the literature on Bianchi models, we will restrict ourselves to diagonal Bianchi I metrics. Then one can fix Cartesian coordinates τ, x_i on M and express the space-time metric as

ds² = −N² dτ² + a₁²(τ) dx₁² + a₂²(τ) dx₂² + a₃²(τ) dx₃²,   (2.1)

where N is the lapse and the a_i are the directional scale factors. Thus, the dynamical degrees of freedom are encoded in three functions a_i(τ) of time. Bianchi I symmetries permit us to rescale the three spatial coordinates x_i by independent constants. Under x_i → α_i x_i, the directional scale factors transform as a_i → α_i⁻¹ a_i. Thus, the numerical value of a directional scale factor, say a₁, is not an observable; only ratios such as a₁(τ)/a₁(τ′) are. The matter source will be a massless scalar field which will serve as the relational or internal time. Therefore, it is convenient to work with a harmonic time function, i.e. to ask that τ satisfy □τ = 0. From now on we will work with this choice. Since the spatial manifold is non-compact and all fields are spatially homogeneous, to construct a Lagrangian or a Hamiltonian framework one has to introduce an elementary cell V and restrict all integrations to it [7]. We will choose V so that its edges lie along the fixed coordinate axes x_i. As in the isotropic case, it is also convenient to fix a fiducial flat metric °q_ab with line element

ds²_o = dx₁² + dx₂² + dx₃².   (2.2)

We will denote by °q the determinant of this metric, by L_i the lengths of the three edges of V as measured by °q_ab, and by V_o = L₁L₂L₃ the volume of the elementary cell V, also measured using °q_ab. Finally, we introduce fiducial co-triads °ω^i_a = D_a x^i and the triads °e^a_i dual to them. Clearly they are adapted to the edges of V and are compatible with °q_ab (i.e., satisfy °q_ab = °ω^i_a °ω^j_b δ_ij). As noted above, Bianchi I symmetries allow each of the three coordinates to be rescaled by an independent constant α_i. Under these rescalings, L_i → α_i L_i, V_o → (α₁α₂α₃) V_o, °ω^i_a → α_i °ω^i_a and °e^a_i → α_i⁻¹ °e^a_i (no summation). We must ensure that our physical results do not change under these rescalings. Finally, the physical co-triads are given by ω^i_a = a_i °ω^i_a and the physical 3-metric q_ab is given by q_ab = ω^i_a ω^j_b δ_ij.

With these fiducial structures at hand, we can now introduce the phase space. Recall first that in LQG the canonical pair consists of an SU(2) connection A^i_a and a triad E^a_i of density weight one. Using the Bianchi I symmetry, from each gauge equivalence class of these pairs we can select one and only one, given by:

A^i_a = c^i (L_i)⁻¹ °ω^i_a,   E^a_i = p_i L_i V_o⁻¹ √(°q) °e^a_i   (no summation),   (2.3)

where c^i, p_i are constants and q = (°q/V_o²) p₁p₂p₃ is the determinant of the physical spatial metric q_ab. Thus the connections A^i_a are now labelled by three parameters c^i and the triads E^a_i by three parameters p_i. If the p_i are positive, the physical triad e^a_i and the fiducial triad °e^a_i have the same orientation. A change in sign of, say, p₁ corresponds to a change in the orientation of the physical triad brought about by the flip e^a_1 → −e^a_1. These flips are gauge transformations because they do not change the physical metric q_ab. The momenta p_i are directly related to the directional scale factors:

p₁ = sgn(a₁) |a₂a₃| L₂L₃,   p₂ = sgn(a₂) |a₁a₃| L₁L₃,   p₃ = sgn(a₃) |a₁a₂| L₁L₂,   (2.4)

where we take the directional scale factor a_i to be positive if the triad vector e^a_i is parallel to °e^a_i and negative if it is anti-parallel. As we will see below, in any solution to the field equations, the connection components c^i are directly related to the time derivatives of the a_i. The factors of L_i in (2.3) ensure that this parametrization is unchanged if the fiducial co-triad, triad and metric are rescaled via x_i → α_i x_i.
However, the parametrization does depend on the choice of the cell V. Thus the situation is the same as in the isotropic case [7]. (The physical fields A^i_a and E^a_i are of course insensitive to changes in the fiducial metric or the cell.) To evaluate the symplectic structure of the symmetry reduced theory, as in the isotropic case [7], we begin with the expression of the symplectic structure in the full theory and simply restrict the integration to the cell V. The resulting (non-vanishing) Poisson brackets are given by:

{c^i, p_j} = 8πGγ δ^i_j.   (2.5)

To summarize, the phase space of the Bianchi I model is six dimensional, coordinatized by the pairs (c^i, p_i), subject to the Poisson bracket relations (2.5). This description is tied to the choice of the fiducial cell V but is insensitive to the choice of fiducial triads, co-triads and metrics.
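Because the invariance of (2.3)-(2.4) under coordinate rescalings is used repeatedly below, it is easy to spot-check numerically. The following minimal sketch (plain Python; the values of a_i, L_i and α_i are arbitrary illustrative choices, not data from the paper) verifies that the momenta (2.4) are insensitive to x_i → α_i x_i, under which a_i → α_i⁻¹ a_i and L_i → α_i L_i.

```python
# Minimal check (with assumed illustrative values) that the parametrization
# (2.3)-(2.4) is insensitive to the rescalings x_i -> alpha_i x_i, under which
# a_i -> a_i / alpha_i and L_i -> alpha_i L_i.
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 2.0, 3)        # directional scale factors a_i
L = rng.uniform(0.5, 2.0, 3)        # fiducial edge lengths L_i of the cell V
alpha = rng.uniform(0.5, 2.0, 3)    # arbitrary rescaling constants alpha_i

def momenta(a, L):
    # p_1 = |a_2 a_3| L_2 L_3, and cyclic permutations (positive octant)
    return np.array([a[1]*a[2]*L[1]*L[2],
                     a[2]*a[0]*L[2]*L[0],
                     a[0]*a[1]*L[0]*L[1]])

p_before = momenta(a, L)
p_after = momenta(a / alpha, L * alpha)   # rescaled scale factors and edges

assert np.allclose(p_before, p_after)
print("p_i invariant under x_i -> alpha_i x_i:", p_before)
```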
Next, let us consider constraints. The full theory has a set of three constraints: the Gauss, the diffeomorphism and the Hamiltonian constraints. It is straightforward to check that, because we have restricted ourselves to diagonal metrics and fixed the internal gauge, the Gauss and the diffeomorphism constraints are identically satisfied. We are thus left with just the Hamiltonian constraint. Its expression is obtained by restricting the integration in the full theory to the fiducial cell V:

C_H = ∫_V N (H_grav + H_matt) d³x,   (2.6)

where N is the lapse function and the gravitational and the matter parts of the constraint densities are given by

H_grav = (E^a_i E^b_j / 16πG √|q|) [ ε^{ij}{}_k F_ab{}^k − 2(1+γ²) K^i_[a K^j_b] ]   (2.7)

and

H_matt = √|q| ρ_matt.   (2.8)

Here γ is the Barbero-Immirzi parameter, F_ab{}^k is the curvature of the connection A^i_a, given by

F_ab{}^k = 2 ∂_[a A^k_b] + ε_ij{}^k A^i_a A^j_b,   (2.9)

K^i_a is related to the extrinsic curvature K_ab via K^i_a = K_ab e^{bi}, and ρ_matt is the energy density of the matter fields. In general, A^i_a is related to K^i_a and the spin connection Γ^i_a defined by the triad e^a_i via A^i_a = Γ^i_a + γK^i_a. However, because Bianchi I models are spatially flat, Γ^i_a = 0 in the gauge chosen in (2.3), whence A^i_a = γK^i_a. This property and the fact that spatial derivatives of K^i_a vanish by the Bianchi I symmetry lead us to the relation

2 K^i_[a K^j_b] = γ⁻² ε^{ij}{}_k F_ab{}^k.   (2.10)

Therefore, the gravitational part of the Hamiltonian constraint can be simplified:

H_grav = −(1/16πGγ² √|q|) ε^{ij}{}_k F_ab{}^k E^a_i E^b_j.   (2.11)

Finally, recall that our matter field is a massless scalar field T. The matter energy density of the scalar field T is given by ρ_matt = p_(T)²/2V², where V = √|p₁p₂p₃| is the physical volume of the elementary cell. Our choice of harmonic time τ implies that the lapse function is given by N = √|p₁p₂p₃|. With these choices the constraint (2.6) simplifies further:

C_H = p_(T)²/2 − (1/8πGγ²) (c₁p₁c₂p₂ + c₁p₁c₃p₃ + c₂p₂c₃p₃).   (2.12)

Physical states of the classical theory lie on the constraint surface C_H = 0. The time evolution of each p_i and c_i is obtained by taking its Poisson bracket with C_H:

dp₁/dτ = {p₁, C_H} = (1/γ) p₁ (c₂p₂ + c₃p₃),   (2.13)
dc₁/dτ = {c₁, C_H} = −(1/γ) c₁ (c₂p₂ + c₃p₃).   (2.14)

The four other time derivatives can be obtained via permutations. Although the phase space coordinates c_i, p_i themselves depend on the choice of the fiducial cell V, the dynamical equations for A^i_a and E^a_i (and hence also for the physical metric q_ab and the extrinsic curvature K_ab) that follow from (2.13) and (2.14) are independent of this choice.
Combining Eqs. (2.4), (2.13) and (2.14), one finds

c_i = (γ/N) L_i (da_i/dτ)   (no summation).   (2.15)

It is instructive to relate the c_i to the directional Hubble parameters H_i = d ln a_i/dt, where t is the proper time, corresponding to the lapse function N(t) = 1. Since t is related to the harmonic time τ via

N dτ = N(t) dt = dt,   (2.16)

we have

c_i = γ L_i (da_i/dt) = γ (L_i a_i) H_i   (no summation),   (2.17)

where L_i a_i is the length of the ith edge of V as measured by the physical metric q_ab. Next, it is convenient to introduce a mean scale factor

a := (a₁a₂a₃)^{1/3},   (2.18)

which encodes the physical volume element but ignores anisotropies. Then, the mean Hubble parameter is given by

H := d ln a/dt = (1/3)(H₁ + H₂ + H₃),   (2.19)

and the Hamiltonian constraint implies

H² = (8πG/3) ρ_matt + Σ²/a⁶,   (2.20)

where

Σ² = (a⁶/18) [ (H₁−H₂)² + (H₂−H₃)² + (H₃−H₁)² ]   (2.21)

is the shear term. The right hand side of (2.20) brings out the fact that the anisotropic shears (H_i − H_j) contribute to the energy density; they quantify the energy density in the gravitational waves. Using the fact that our matter field has zero anisotropic stress, one can show that Σ² is a constant of the motion [37]. If the space-time itself is isotropic, then Σ² = 0 and Eq. (2.20) reduces to the usual Friedmann equation of standard isotropic cosmology. These considerations will be useful in interpreting quantum dynamics and in exploring the relation between the Bianchi I and Friedmann quantum Hamiltonian constraints.
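The constancy of Σ² can also be seen directly from (2.13)-(2.14): they imply that each c_ip_i (which equals γ d ln a_i/dτ by (2.15)) is separately conserved. The sketch below (Python; the initial data and the value of γ are illustrative assumptions, not taken from the paper) integrates (2.13)-(2.14) with a Runge-Kutta step and checks this conservation numerically.

```python
# Sketch: integrate the classical equations (2.13)-(2.14) in harmonic time and
# verify that each c_i p_i (and hence the shear Sigma^2 of (2.21)) is conserved.
# Initial data and gamma (the Barbero-Immirzi parameter) are illustrative choices.
import numpy as np

gamma = 0.2375

def rhs(y):
    c, p = y[:3], y[3:]
    cp = c * p
    s = cp.sum()
    dc = -c * (s - cp) / gamma      # dc_i/dtau = -c_i (c_j p_j + c_k p_k)/gamma
    dp =  p * (s - cp) / gamma      # dp_i/dtau = +p_i (c_j p_j + c_k p_k)/gamma
    return np.concatenate([dc, dp])

def rk4_step(y, h):
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

y = np.array([0.3, 0.5, 0.2, 2.0, 1.5, 1.8])   # (c_1, c_2, c_3, p_1, p_2, p_3)
cp0 = y[:3] * y[3:]
for _ in range(20000):
    y = rk4_step(y, 1e-4)
print("c_i p_i at start:", cp0)
print("c_i p_i at end:  ", y[:3] * y[3:])      # unchanged to integration accuracy
```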
Next, let us consider the scalar field T. Because there is no potential for it, its canonically conjugate momentum p_(T) is a constant of motion (which, for definiteness, will be assumed to be positive). Therefore, in any solution to the field equations, T grows linearly in the harmonic time τ. Thus, although T does not have the physical dimensions of time, it is a good evolution parameter in the classical theory. The form of the quantum Hamiltonian constraint is such that T will also serve as a viable internal time parameter in the quantum theory.
We will conclude with a discussion of the discrete 'reflection symmetries' that will play an important role in the quantum theory. (For further details see the Appendix.) In the isotropic case, there is a single reflection symmetry, Π(p) = −p, which physically corresponds to the orientation reversal e^a_i → −e^a_i of triads. These are large gauge transformations, under which the metric q_ab remains unchanged. The Hamiltonian constraint is invariant under this reflection, whence one can, if one so wishes, restrict one's attention just to the sector p ≥ 0 of the phase space. In the Bianchi I case, we have three reflections Π_i, each corresponding to the flip of one of the triad vectors, leaving the other two untouched (e.g., Π₁(p₁, p₂, p₃) = (−p₁, p₂, p₃)). As shown in [46], the Hamiltonian flow is left invariant under the action of each Π_i. Therefore, it suffices to restrict one's attention to the positive octant in which all three p_i are non-negative: dynamics in any of the other seven octants can be easily recovered from that in the positive octant by the action of the discrete symmetries Π_i.
• Remark: In the LQC literature on Bianchi I models, a physical distinction has occasionally been made between fiducial cells V which are "cubical" with respect to the fiducial metric °q_ab and those that are "rectangular." (In the former case all L_i are equal.) However, given any cell V one can always find a flat metric in our collection (2.2) with respect to which that V is cubical. Using it as °q_ab, one would be led to call the cell cubical. Therefore the distinction is unphysical, and the hope that the restriction to cubical cells may resolve some of the physical problems faced in [36,37] was misplaced.
III. QUANTUM THEORY
This section is divided into four parts. In the first, we briefly recall quantum kinematics, emphasizing issues that have not been discussed in the literature. In the second, we spell out a simple but well-motivated correspondence between LQG and LQC quantum states which plays an important role in the definition of the curvature operator F̂_ab{}^k in terms of holonomies. However, the paths along which the holonomies are evaluated depend in a rather complicated way on the triad (or momentum) operators, whence at first it seems very difficult to define these holonomy operators. In the third subsection we show that geometric considerations provide a natural avenue to overcome these apparent obstacles. The resulting Hamiltonian constraint is, however, rather unwieldy to work with. In the last sub-section we make a convenient redefinition of configuration variables to simplify its action. The simplification, in turn, will provide the precise sense in which the singularity is resolved in the quantum theory.
A. LQC Kinematics
We will summarize quantum kinematics only briefly; for details, see e.g. [36,37]. Let us begin by specifying the elementary functions on the classical phase space which are to have unambiguous analogs in the quantum theory. In LQC this choice is directly motivated by the structure of full LQG [1-3]. As one might expect from the isotropic case [7,9], the elementary variables are the three momenta p_i and holonomies h_i^(ℓ) along edges parallel to the three axes x_i, where ℓL_i is the length of the edge with respect to the fiducial metric °q_ab.³ These functions are (over)complete in the sense that they suffice to separate points of the phase space. Taking the x₁ axis for concreteness, the holonomy h₁^(ℓ) is given by

h₁^(ℓ) = cos(ℓc¹/2) 𝕀 + 2 sin(ℓc¹/2) τ₁,   (3.1)

where 𝕀 is the unit 2×2 matrix and the τ_i constitute a basis of the Lie algebra of SU(2), satisfying τ_iτ_j = (1/2) ε_ij{}^k τ_k − (1/4) δ_ij 𝕀. Thus, the holonomies are completely determined by the almost periodic functions exp(iℓc^j) of the connection; they are called "almost" periodic because ℓ is any real number rather than an integer. In quantum theory, then, the elementary operators ĥ_i^(ℓ) and p̂_i are well-defined and our task is to express other operators of physical interest in terms of these elementary ones.
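As a concrete illustration of (3.1), the snippet below builds h₁^(ℓ) as a 2×2 matrix using the standard representation τ_i = −iσ_i/2 (the numerical values of ℓ and c¹ are arbitrary assumptions) and checks that it is indeed an SU(2) element whose matrix elements are almost periodic functions of c¹.

```python
# Sketch: the holonomy (3.1) as an explicit SU(2) matrix, with tau_1 = -i sigma_1 / 2.
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau1 = -1j * sigma1 / 2
I2 = np.eye(2, dtype=complex)

def holonomy(ell, c1):
    # h_1^{(ell)} = cos(ell c1 / 2) I + 2 sin(ell c1 / 2) tau_1
    return np.cos(ell*c1/2) * I2 + 2*np.sin(ell*c1/2) * tau1

h = holonomy(ell=0.7, c1=1.3)
assert np.allclose(h @ h.conj().T, I2)      # unitary
assert np.isclose(np.linalg.det(h), 1.0)    # det = 1, i.e. h lies in SU(2)
# matrix elements are exp(+/- i ell c1 / 2): almost periodic in c1 for any real ell
print(np.round(h, 4))
```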
Recall that in the isotropic case it is simplest to specify the gravitational sector of the kinematic Hilbert space in the triad (or p) representation: it consists of wave functions Ψ(p) which are symmetric under p → −p and have a finite norm, ‖Ψ‖² = Σ_p |Ψ(p)|² < ∞. In the Bianchi I case it is again simplest to describe H^grav_kin in the momentum representation. Consider first a countable linear combination

|Ψ⟩ = Σ Ψ(p₁,p₂,p₃) |p₁,p₂,p₃⟩

of orthonormal basis states |p₁,p₂,p₃⟩, where

‖Ψ‖² = Σ_{p₁,p₂,p₃} |Ψ(p₁,p₂,p₃)|² < ∞.   (3.2)

Next, recall that on the classical phase space the three reflections Π_i represent large gauge transformations under which physics does not change. They have a natural induced action Π̂_i on the space of wave functions Ψ(p₁,p₂,p₃). (Thus, for example, (Π̂₁Ψ)(p₁,p₂,p₃) = Ψ(−p₁,p₂,p₃).) Physical observables commute with the Π̂_i. Therefore, as in gauge theories, each eigenspace of Π̂_i provides a physical sector of the theory. Since Π̂_i² = 𝕀, the eigenvalues of Π̂_i are ±1. For definiteness, as in the isotropic case, we will assume that the wave functions Ψ(p₁,p₂,p₃) are symmetric under the Π̂_i. Thus, the gravitational part H^grav_kin of the kinematical Hilbert space is spanned by wave functions Ψ(p₁,p₂,p₃) satisfying

Ψ(p₁,p₂,p₃) = Ψ(|p₁|,|p₂|,|p₃|),   (3.3)

which have finite norm (3.2). The basis states |p₁,p₂,p₃⟩ are eigenstates of quantum geometry:

p̂_i |p₁,p₂,p₃⟩ = p_i |p₁,p₂,p₃⟩.   (3.4)

In the state |p₁,p₂,p₃⟩ the face S_i of the fiducial cell V orthogonal to the axis x_i has area |p_i|. Note that although p_i ∈ R, the orthonormality holds via Kronecker deltas rather than the usual Dirac distributions; this is why the LQC quantum kinematics is inequivalent to that of the Schrödinger theory used in Wheeler-DeWitt cosmology. Finally, the action of the elementary operators is given by:

(p̂₁Ψ)(p₁,p₂,p₃) = p₁ Ψ(p₁,p₂,p₃),   (exp(iℓc¹) Ψ)(p₁,p₂,p₃) = Ψ(p₁ + 8πγℓ_Pl² ℓ, p₂, p₃),   (3.5)

and similarly for p̂₂, exp(iℓc²), p̂₃ and exp(iℓc³). The full kinematical Hilbert space H_kin will be the tensor product H_kin = H^grav_kin ⊗ H^matt_kin where, as in the isotropic case, we will set H^matt_kin = L²(R, dT) for the Hilbert space of the homogeneous scalar field T. On H^matt_kin, the operator T̂ will act by multiplication and p̂_(T) := −iℏ d/dT will act by differentiation. Note that we can also use a "polymer Hilbert space" for H^matt_kin spanned by almost periodic functions of T. The quantum Hamiltonian constraint (3.22) will remain unchanged and our construction of the physical Hilbert space will go through as it is [47].
B. The curvature operator F̂_ab{}^k

To discuss quantum dynamics, we have to construct the quantum analog of the Hamiltonian constraint. Since there is no operator corresponding to the connection coefficients c^i on H^grav_kin, we cannot use (2.12) directly. Rather, as in the isotropic case [10], we will return to the expression (2.11) involving the curvature F_ab{}^k. Our task then is to find the operator on H^grav_kin corresponding to F_ab{}^k. As is usual in LQG, the idea is to first express the curvature in terms of our elementary variables (holonomies and triads) and then replace them by their direct quantum analogs. Recall first that, in the classical theory, the a-b component of F_ab{}^k can be written in terms of holonomies around a plaquette □_ij (i.e., a rectangular closed loop whose edges are parallel to two of the axes x_i):

F_ab{}^k = 2 lim_{Ar_□ → 0} Tr[ ((h_□ij − 𝕀)/Ar_□) τ^k ] °ω^i_a °ω^j_b,   (3.6)

where Ar_□ is the area of the plaquette (as measured by °q_ab) and the holonomy h_□ij around the plaquette □_ij is given by

h_□ij = h_i^(μ̄_i) h_j^(μ̄_j) (h_i^(μ̄_i))⁻¹ (h_j^(μ̄_j))⁻¹,   (3.7)

where μ̄_jL_j is the length of the jth edge of the plaquette, as measured by the fiducial metric °q_ab. (There is no summation over i, j.) Because Ar_□ is shrunk to zero, the limit is not sensitive to the precise choice of the closed plaquette □_ij. Now, in LQG the connection operator does not exist, whence if we regard the right side of (3.6) as an operator, the limit fails to converge on H^grav_kin. The non-existence of the connection operator is a direct consequence of the underlying diffeomorphism invariance [48] and is intertwined with the fact that the eigenvalues of geometric operators (such as the area operator Âr_□ associated with the plaquette under consideration) are purely discrete. Therefore, in LQC the viewpoint is that the non-existence of the limit Ar_□ → 0 in quantum theory is not accidental: quantum geometry is simply telling us that we should shrink the plaquette not till the area it encloses goes to zero, but rather only to the minimum non-zero eigenvalue Δℓ_Pl² of the area operator (where Δ is a dimensionless number). The resulting quantum operator F̂_ab{}^k then inherits Planck scale non-localities.
To implement this strategy in full LQG one must resolve a difficult issue. If the plaquette is to be shrunk only to a finite size, the operator on the right side of (3.6) would depend on what that limiting plaquette is. So, which of the many plaquettes enclosing an area Δℓ_Pl² should one use? Without a well-controlled gauge fixing procedure, it would be very difficult to single out such plaquettes, one for each 2-dimensional plane in the tangent space at each spatial point. However, in the diagonal Bianchi I case now under consideration, a natural gauge fixing is available and indeed we have already carried it out. Thus, in the i-j plane, it is natural to choose a plaquette □_ij so that its edges are parallel to the x_i and x_j axes. Furthermore, the underlying homogeneity implies that it suffices to introduce the three plaquettes at any one point in our spatial 3-manifold.
These considerations severely limit the choice of the plaquettes □_ij but they do not determine the lengths of the two edges in each of these plaquettes. To completely determine the plaquettes, as in the isotropic case, we will use a simple but well-motivated correspondence between kinematic states in LQG and those in LQC. However, because of anisotropies, new complications arise which require that the correspondence be made much more precise. Fix a state |p₁,p₂,p₃⟩ in H^grav_kin of LQC. In this state, the three faces of the fiducial cell V orthogonal to the x_i-axes have areas |p_i| in the LQC quantum geometry. This is the complete physical information in the ket |p₁,p₂,p₃⟩. How would this quantum geometry be represented in full LQG? First, the macroscopic geometry must be spatially homogeneous, and we have singled out three axes with respect to which our metrics are diagonal. Therefore, semi-heuristic considerations suggest that the corresponding LQG quantum geometry state should be represented by a spin network consisting of edges parallel to the three axes (see Fig. 1(a)). Microscopically this state is not exactly homogeneous. But the coarse grained geometry should be homogeneous. To achieve the best possible coarse grained homogeneity, the edges should be packed as tightly as possible in the desired quantum geometry. That is, each edge should carry the smallest non-zero label possible, namely j = 1/2.
For definiteness, let us consider the 1-2 face S₁₂ of the fiducial cell V which is orthogonal to the x₃ axis (see Fig. 1(b)). Quantum geometry of LQG tells us that at each intersection of any one of its edges with S₁₂, the spin network contributes a quantum of area Δℓ_Pl² on this surface, where Δ = 4√3 πγ [49]. For this LQG state to reproduce the LQC state |p₁,p₂,p₃⟩ under consideration, S₁₂ must be pierced by N₃ edges of the LQG spin network, where N₃ is given by

N₃ = |p₃| / Δℓ_Pl².

Thus, we can divide S₁₂ into N₃ identical rectangles, each of which is pierced by exactly one edge of the LQG state, as in Fig. 1(b). Any one of these elementary rectangles encloses an area Δℓ_Pl² and provides us the required plaquette □₁₂. Let the dimensionless lengths of the edges of these plaquettes be μ̄₁ and μ̄₂. Then their lengths with respect to the fiducial metric °q_ab are μ̄₁L₁ and μ̄₂L₂. Since the area of S₁₂ with respect to °q_ab is L₁L₂, we have

N₃ (μ̄₁L₁)(μ̄₂L₂) = L₁L₂,   i.e.,   N₃ μ̄₁μ̄₂ = 1.

Equating the expressions of N₃ from the last two equations, we obtain

μ̄₁μ̄₂ = Δℓ_Pl² / |p₃|.   (3.8)

This relation by itself does not fix μ̄₁ and μ̄₂. However, repeating this procedure for the 2-3 face and the 3-1 face, we obtain, in addition, two cyclic permutations of this last equation, and the three simultaneous equations do suffice to determine the μ̄_i:

μ̄₁ = √( |p₁| Δℓ_Pl² / |p₂p₃| ),   μ̄₂ = √( |p₂| Δℓ_Pl² / |p₁p₃| ),   μ̄₃ = √( |p₃| Δℓ_Pl² / |p₁p₂| ).   (3.9)

To summarize, by exploiting the Bianchi I symmetries and using a simple but well-motivated correspondence between LQG and LQC states, we have determined the required elementary plaquettes enclosing an area Δℓ_Pl² on each of the three faces of the cell V. On the face S_ij, the plaquette is a rectangle whose sides are parallel to the x_i and x_j axes and whose dimensionless lengths are μ̄_i and μ̄_j respectively, given by (3.9). Note that (as in the isotropic case [10]) the μ̄_i, and hence the plaquettes, are not fixed once and for all; they depend on the LQC state |p₁,p₂,p₃⟩ of quantum geometry in a specific fashion. The functional form of this dependence is crucial to ensure that the resulting quantum dynamics is free from the difficulties encountered in earlier works.
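The three conditions μ̄_iμ̄_j = Δℓ_Pl²/|p_k| and their solution (3.9) are easy to verify mechanically. A minimal sketch (Python; the values of the p_i, and units with ℓ_Pl = 1, are illustrative assumptions):

```python
# Sketch: check that (3.9) solves the three plaquette conditions
# mubar_i mubar_j = Delta lPl^2 / |p_k| coming from the faces S_ij.
import numpy as np

gamma, lPl = 0.2375, 1.0
Delta = 4*np.sqrt(3)*np.pi*gamma       # area gap (in units of lPl^2)
p = np.array([3.7, 1.2, 5.9])          # illustrative |p_1|, |p_2|, |p_3|

# closed form (3.9)
mubar = np.sqrt(Delta * lPl**2 * p / np.array([p[1]*p[2], p[2]*p[0], p[0]*p[1]]))

for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.isclose(mubar[i]*mubar[j], Delta * lPl**2 / p[k])
print("mubar_i =", mubar)
```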
Components of the curvature operator F̂_ab{}^k can now be expressed in terms of holonomies around these plaquettes:

F̂_ab{}^k = 2 Tr[ ((ĥ_□ij − 𝕀)/(μ̄_iμ̄_j L_iL_j)) τ^k ] °ω^i_a °ω^j_b,   (3.10)

with ĥ_□ij given by (3.7), where the μ̄_j are given by (3.9), so that each plaquette □_ij encloses the physical area Δℓ_Pl². (There is no summation over i, j.) Using the expression (3.1) of the holonomies, it is straightforward to evaluate the right hand side. One finds:

F̂_ab{}^k = ε_ij{}^k ( sin(μ̄c)/μ̄ )^i ( sin(μ̄c)/μ̄ )^j °ω^i_a °ω^j_b,   (3.11)

where the usual summation convention for repeated covariant and contravariant indices applies and

( sin(μ̄c)/μ̄ )^i := sin(μ̄_i c^i) / (μ̄_i L_i),   (3.12)

where there is now no sum over i. This is the curvature operator we were seeking.
We will conclude with a discussion of the important features of this procedure and of the resulting quantum dynamics.
1. In the isotropic case all p_i are equal (p_i = p), whence our expressions for the μ̄_i reduce to a single formula, μ̄ = √(Δℓ_Pl²/|p|). This is precisely the result that was obtained in the "improved dynamics" scheme for the k=0 isotropic models. Thus, we have obtained a generalization of that result to Bianchi I models.
2. In both cases, the key observation is that the plaquette should be shrunk till its area with respect to the physical (rather than the fiducial) geometry is Δℓ_Pl². However, there are also some differences. First, in the above analysis we set up and used a correspondence between the quantum geometries of LQG and LQC in the context of Bianchi I models. In contrast to the previous treatment of the isotropic models [10], we did not have to bring in classical geometry in the intermediate steps. In this sense, even for the isotropic case, the current analysis is an improvement over what is available in the literature.
3. A second difference between our present analysis and that of [10] is the following. Here, the semi-heuristic representation of LQC states |p₁,p₂,p₃⟩ in terms of spin networks of LQG suggested that we should consider spin networks which pierce the faces of the fiducial cell V as in Fig. 1(a). (As one would expect, these states are gauge invariant.) The minimum non-zero eigenvalue of the area operator on such states is Δℓ_Pl² with Δ = 4√3 πγ. This is twice the absolute minimum of the non-zero eigenvalues on all gauge invariant states. However, that lower value is achieved on spin networks (whose edges are again labelled by j = 1/2 but) which do not pierce the surface, intersecting it from only one side. (In order for the state to be gauge invariant, the edge then has to continue along a direction tangential to the surface. For details, see [49].) Obvious considerations suggest that such states cannot feature in homogeneous models. Since the discussion in the isotropic case invoked a correspondence between LQG and LQC at a rougher level, this point was not noticed and the value of Δ used in [10] was 2√3 πγ. We emphasize, however, that although the current discussion is more refined, it is not a self-contained derivation. A more complete analysis may well change this numerical factor again.
4. On the other hand, we believe that the functional dependence of the μ̄_i on the p_i is robust: as in the isotropic case, this dependence appears to be essential to make quantum dynamics viable. Otherwise quantum dynamics can either depend on the choice of the fiducial cell V even to leading order, or is physically incorrect because it allows quantum effects to dominate in otherwise "tame" situations, or both. The previous detailed quantum treatments of the Bianchi I model in LQC did not have this functional dependence because they lacked the correspondence between LQG and LQC used here. Rather, they proceeded by analogy. As we noted above, in the isotropic case there is a single μ̄ and a single p and the two are related by μ̄ = √(Δℓ_Pl²/|p|). The most straightforward generalization of this relation to Bianchi I models is μ̄_i = √(Δℓ_Pl²/|p_i|). This expression was simply postulated and then used to construct quantum dynamics [36,37]. The resulting analysis has provided a number of useful technical insights. However, this quantum dynamics suffers from the problems mentioned above [40]. The possibility that the correct generalization of the isotropic results to Bianchi I models may be given by (3.9) was noted in [38,50] and in Appendix C of [37]. However, for reasons explained in the next sub-section, the construction of the quantum Hamiltonian operator based on (3.9) was thought not to be feasible. Therefore, this avenue was used only to gain qualitative insights and was not pursued in the full quantum theory.
C. The quantum Hamiltonian constraint
With the curvature operator F̂_ab{}^k at hand, it is straightforward to construct the quantum analog of the Hamiltonian constraint (2.6), because the triad operators can be readily constructed from the three p̂_i. Ignoring for a moment the factor-ordering issues, the gravitational part of this operator is given by

Ĉ_grav = −(1/8πGγ²) [ (p₁p₂/μ̄₁μ̄₂) sin(μ̄₁c₁) sin(μ̄₂c₂) + (p₂p₃/μ̄₂μ̄₃) sin(μ̄₂c₂) sin(μ̄₃c₃) + (p₃p₁/μ̄₃μ̄₁) sin(μ̄₃c₃) sin(μ̄₁c₁) ],   (3.14)

where for simplicity of notation here and in what follows we have dropped hats on p_i and sin μ̄_ic_i. To write the action of this operator on H^grav_kin, it suffices to specify the action of the operators exp(iμ̄_ic_i) on the kinematical states Ψ(p₁,p₂,p₃). The expression (3.9) of μ̄_i and the Poisson brackets (2.5) imply:

exp(iμ̄₁c₁) Ψ(p₁,p₂,p₃) = exp( 8πγℓ_Pl² √(Δℓ_Pl² |p₁|/|p₂p₃|) ∂/∂p₁ ) Ψ(p₁,p₂,p₃)   (3.15)

and its cyclic permutations. At first sight this expression seems too complicated to yield a manageable Hamiltonian constraint.
• Remark: In the isotropic case, the corresponding expression is simply exp(iμ̄c) = exp( 8πγℓ_Pl² √(Δℓ_Pl²/|p|) ∂/∂p ). Since v ∼ |p|^{3/2} is (proportional to) the physical volume of the fiducial cell V, this operator can essentially be written as exp(d/dv) and acts just as a displacement operator on functions Ψ(v) of v. In the operator (3.15), by contrast, all three p_i feature in the exponent. This is why its action was deemed unmanageable. As we noted at the end of section III B, progress was made [36,37] by simply postulating an alternative, more manageable expression μ̄_i = √Δ ℓ_Pl/√|p_i|, the obvious analog of μ̄ = √Δ ℓ_Pl/√|p| in the isotropic case [10]. Then each exp(±iμ̄_ic_i) can be expressed essentially as a displacement operator exp(d/dv_i) with v_i ∼ |p_i|^{3/2}, and the procedure used in the isotropic case could be implemented on states Ψ(v₁,v₂,v₃). Bianchi I quantum dynamics then resembled three copies of the isotropic dynamics. However, as noted above, this solution is not viable [40].
Our new observation is that the operator (3.15) can in fact be handled in a manageable fashion. Let us first make an algebraic simplification by introducing new dimensionless variables λ_i:

λ_i := sgn(p_i) √|p_i| / (4πγ√Δ ℓ_Pl³)^{1/3}   (3.16)

(so that sgn(λ_i) = sgn(p_i)). Then, we can introduce a new orthonormal basis |λ₁,λ₂,λ₃⟩ in H^grav_kin by an obvious rescaling. These vectors are again eigenvectors of the operators p̂_i:

p̂_i |λ₁,λ₂,λ₃⟩ = (4πγ√Δ ℓ_Pl³)^{2/3} sgn(λ_i) λ_i² |λ₁,λ₂,λ₃⟩.   (3.17)

We can expand out any ket |Ψ⟩ in H^grav_kin as |Ψ⟩ = Σ Ψ(λ₁,λ₂,λ₃) |λ₁,λ₂,λ₃⟩ and re-express the right side of (3.15) as an operator on wave functions Ψ(λ⃗):

exp(±iμ̄₁c₁) = E₁^± := exp( ±(1/λ₂λ₃) d/dλ₁ ),   (3.18)

where the notation E₁^± has been introduced as shorthand. (Here, we have used the property γ = sgn(p₁p₂p₃)|γ| of the Barbero-Immirzi parameter from Appendix A.) To obtain the explicit action of E₁^± on wave functions Ψ(λ⃗) we note that, since the operator is an exponential of a vector field, its action is simply to drag the wave function Ψ(λ⃗) a unit affine parameter along its integral curves. Furthermore, since the vector field d/dλ₁ is in the λ₁ direction, the coefficient 1/λ₂λ₃ is constant along each of its integral curves. Therefore it is possible to write down the explicit expression of E₁^±:

(E₁^± Ψ)(λ₁,λ₂,λ₃) = Ψ( λ₁ ± 1/(λ₂λ₃), λ₂, λ₃ ).   (3.19)

The non-triviality of this action lies in the fact that while the wave function is dragged along the λ₁ direction, the affine distance involved in this dragging depends on λ₂, λ₃. This operator is well-defined because our states have support only on a countable number of λ_i. In particular, the image (E₁^± Ψ)(λ⃗) vanishes identically at points λ₂ = 0 or λ₃ = 0 because Ψ does not have support at λ₁ = ∞. Thus the factor λ₂λ₃ appearing in the denominator does not cause difficulties.
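Since the dragging in (3.19) is the technical heart of the construction, a small sketch may help. Below, a wave function with countable support is stored as a Python dictionary {(λ₁,λ₂,λ₃): amplitude}; a support point (a, λ₂, λ₃) of Ψ is carried by E₁^± to (a ∓ 1/(λ₂λ₃), λ₂, λ₃), and amplitudes sitting at λ₂ = 0 or λ₃ = 0 drop out of the image, as discussed above. The sample state is an arbitrary illustrative choice.

```python
def E1(psi, sign=+1):
    """Action of E_1^{+} (sign=+1) or E_1^{-} (sign=-1) of Eq. (3.19) on a
    dictionary-valued wave function with countable support."""
    out = {}
    for (l1, l2, l3), amp in psi.items():
        if l2 == 0 or l3 == 0:
            continue  # (E_1 psi) has no support here: psi vanishes at l1 = infinity
        # (E_1^{+-} psi)(l) = psi(l1 +- 1/(l2 l3)): the support moves the opposite way
        key = (l1 - sign/(l2*l3), l2, l3)
        out[key] = out.get(key, 0.0) + amp
    return out

psi = {(1.0, 2.0, 0.5): 0.8, (1.0, 1.0, 1.0): 0.6, (2.0, 0.0, 1.0): 0.3}
print(E1(psi, +1))   # {(0.0, 2.0, 0.5): 0.8, (0.0, 1.0, 1.0): 0.6}
```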
We can now write out the gravitational part of the Hamiltonian constraint:

Ĉ_grav = Ĉ⁽¹⁾_grav + Ĉ⁽²⁾_grav + Ĉ⁽³⁾_grav,   (3.20)

with

Ĉ⁽¹⁾_grav = −(1/16πGγ²Δℓ_Pl²) √V̂ [ sin(μ̄₁c₁) V̂ sin(μ̄₂c₂) + sin(μ̄₂c₂) V̂ sin(μ̄₁c₁) ] √V̂,   (3.21)

where we have used the simplest symmetric factor ordering that reduces to the one used in [11] in the isotropic case. (Ĉ⁽²⁾_grav and Ĉ⁽³⁾_grav are given by the obvious cyclic permutations.) In Appendix A we show that, under the action of the reflections Π̂_i on H^grav_kin, the operators sin(μ̄_ic_i) have the same transformation properties that the c_i have under the reflections Π_i in the classical theory. As a consequence, Ĉ_grav is also reflection symmetric. Therefore, its action is well defined on H^grav_kin: Ĉ_grav is a densely defined, symmetric operator on this Hilbert space. In the isotropic case, its analog has been shown to be essentially self-adjoint [52]. In what follows we will assume that (3.20) is essentially self-adjoint on H^grav_kin and work with its self-adjoint extension.
Finally, it is straightforward to write down the quantum analog of the full Hamiltonian constraint (2.6):

∂²_T Ψ(λ₁,λ₂,λ₃; T) = −Θ Ψ(λ₁,λ₂,λ₃; T),   where   Θ := −(2/ℏ²) Ĉ_grav.   (3.22)

As in the isotropic case, one can obtain the physical Hilbert space H_phy by a group averaging procedure, and the result is completely analogous. Elements of H_phy consist of 'positive frequency' solutions to (3.22), i.e., solutions to

−i ∂_T Ψ(λ₁,λ₂,λ₃; T) = √Θ Ψ(λ₁,λ₂,λ₃; T),   (3.23)

which are symmetric under the three reflection maps Π̂_i, i.e. satisfy

Ψ(λ₁,λ₂,λ₃; T) = Ψ(|λ₁|,|λ₂|,|λ₃|; T).   (3.24)

The scalar product is given simply by:

⟨Ψ₁|Ψ₂⟩ = Σ_{λ₁,λ₂,λ₃} Ψ̄₁(λ₁,λ₂,λ₃; T_o) Ψ₂(λ₁,λ₂,λ₃; T_o),   (3.25)

where T_o is any "instant" of internal time T.
• Remark: In the isotropic LQC literature [10,16,17] one began in the classical theory with proper time t (which corresponds to the lapse function N(t) = 1) and made a transition to the relational time provided by the scalar field only in the construction of the physical sector of the quantum theory. If we had used that procedure here, the factor ordering of the Hamiltonian constraint would have been slightly different. In this paper, we started out with the lapse N = |p₁p₂p₃|^{1/2} already in the classical theory because the resulting quantum Hamiltonian constraint is simpler. In the isotropic case, for example, this procedure leads to an analytically soluble model (the one obtained in [11] by first starting out with N(t) = 1, then going to quantum theory, and finally making some well-motivated but simplifying assumptions). It also has some conceptual advantages because it avoids the use of "inverse scale factors" altogether.
D. Simplification of Ĉ_grav
It is straightforward to expand out the Hamiltonian constraint Ĉ_grav using the explicit action of the operators sin(μ̄_ic_i) given by (3.19) and express it as a linear combination of 24 terms of the type

Ĉ^{±±}_{ij} ∝ √V̂ E_i^± V̂ E_j^± √V̂   (3.26)

(where i ≠ j and, as before, there is no summation over i, j). Unfortunately, the sgn(λ_i) factors in this expression and in the action of the E_i^± make the result quite complicated. More importantly, it is rather difficult to interpret the resulting operator. The expression can be simplified if we introduce the volume of V as one of the arguments of the wave function. In particular, this makes quantum dynamics easier to compare with that of the Friedmann models. With this motivation, let us further re-arrange the configuration variables and set

v = 2 λ₁λ₂λ₃.   (3.27)

The factor of 2 in (3.27) ensures that this v reduces to the v used in the isotropic analysis of [10] (if one uses the value of Δ used there). As the notation suggests, v is directly related to the volume of the elementary cell V:

V̂ |λ₁,λ₂,λ₃⟩ = 2πγ√Δ ℓ_Pl³ |v| |λ₁,λ₂,λ₃⟩.   (3.28)

One's first impulse would be to introduce two other variables in a symmetric fashion, e.g., following Misner [53]. Unfortunately, detailed examination shows that they make the constraint (3.20) even less transparent!⁴ Let us simply use λ₁, λ₂, v as the configuration variables in place of λ₁, λ₂, λ₃. This change of variables would be non-trivial in the Schrödinger representation but is completely tame here because the norms on H^grav_kin are defined using a discrete measure on R³. As a consequence, the scalar product is again given by the sum in (3.25), the only difference being that λ₃ is now replaced by v. Since the choice (λ₁, λ₂, v) breaks the permutation symmetry, one might at first think that it would not be appropriate. Somewhat surprisingly, as we will now show, it suffices to make the structure of the constraint transparent. (Of course, the simplification of the constraint would have persisted had we chosen to replace either λ₁ or λ₂, rather than λ₃, with v.) Finally, note that the positive octant is now given by λ₁ ≥ 0, λ₂ ≥ 0 and v ≥ 0.

⁴ Misner-like variables (volume and logarithms of metric components) were used in the brief discussion of Bianchi I models in [38]. This discussion already recognized that the use of volume as one of the arguments of the wave function would lead to simplifications. Dynamics was obtained by starting with the Hamiltonian constraint in the μ₀ scheme from [35] and then substituting the μ̄_i of (3.9) for the μ₀^i in the final result. This procedure does simplify the leading order quantum corrections to dynamics. By contrast, our goal is to simplify the full constraint. More importantly, constraint (3.20) is an improvement over that of [38] because we introduced the μ̄_i from the beginning of the quantization procedure and systematically defined the operators sin(μ̄_ic_i) (in section III C).
To obtain the explicit action of the constraint, it is extremely convenient to use the fact that states Ψ in H^grav_kin satisfy the symmetry condition (3.24) and that Ĉ_grav has a well defined action on this space. Therefore, to specify its action on any given Ψ it suffices to find the restriction of the image Φ(λ₁, λ₂, v) := (Ĉ_grav Ψ)(λ₁, λ₂, v) to the positive octant. The value of Φ in the other octants is determined by its symmetry property. This fact greatly simplifies our task because we can use it to eliminate the sgn(λ_i) factors which complicate the expressions of the various terms tremendously.
For concreteness let us focus on one term in the constraint operator (which turns out to be the most non-trivial one for our simplification):

Ĉ⁻⁻₁₂ = (πGℏ²/16) √|v̂| E₁⁻ |v̂| E₂⁻ √|v̂|.   (3.29)

If we now restrict the argument of (Ĉ⁻⁻₁₂ Ψ) to the positive octant, the expression simplifies:

(Ĉ⁻⁻₁₂ Ψ)(λ₁, λ₂, v) = (πGℏ²/16) √v |v−2| √|v−4| Ψ( ((v−2)/v) λ₁, ((v−4)/(v−2)) λ₂, v−4 ).   (3.30)

Now the action of this operator is more transparent: the wave function is multiplied by functions only of volume and, in the argument of the wave function, the volume simply shifts by −4 while λ₁, λ₂ are rescaled by multiplicative factors which also depend only on the volume. Since the full constraint is a linear combination of terms of this form, its action is also driven primarily by volume. As we will see, this key property makes the constraint manageable and greatly simplifies the task of analyzing the relation between the LQC quantum dynamics of the Bianchi I and Friedmann models. From now on, unless otherwise stated, we will restrict the argument of the images (Ĉ^{±±}_{ij} Ψ) to lie in the positive octant; their values in the other octants are given simply by (Ĉ^{±±}_{ij} Ψ)(λ₁, λ₂, v) = (Ĉ^{±±}_{ij} Ψ)(|λ₁|, |λ₂|, |v|).

The form (3.30) of the action of the operators Ĉ^{±±}_{ij} enables us to discuss singularity resolution. For completeness, let us first write out the four terms corresponding to i, j = 1, 2 (which are the most complicated of the 24 terms in Ĉ_grav):

(Ĉ⁻⁻₁₂ Ψ)(λ₁, λ₂, v) = (πGℏ²/16) √v |v−2| √|v−4| Ψ( ((v−2)/v) λ₁, ((v−4)/(v−2)) λ₂, v−4 ),   (3.31)
(Ĉ⁺⁺₁₂ Ψ)(λ₁, λ₂, v) = (πGℏ²/16) √v (v+2) √(v+4) Ψ( ((v+2)/v) λ₁, ((v+4)/(v+2)) λ₂, v+4 ),   (3.32)
(Ĉ⁺⁻₁₂ Ψ)(λ₁, λ₂, v) = (πGℏ²/16) v (v+2) Ψ( ((v+2)/v) λ₁, (v/(v+2)) λ₂, v ),   (3.33)
(Ĉ⁻⁺₁₂ Ψ)(λ₁, λ₂, v) = (πGℏ²/16) v |v−2| Ψ( ((v−2)/v) λ₁, (v/(v−2)) λ₂, v ).   (3.34)

The explicit volume factors in these expressions vanish precisely where the right hand sides would otherwise refer to the classically singular configurations. In particular, the subspace H^grav_sing of states with support on the points with v = 0 is annihilated by these four operators, by the analogous terms for other values of i, j, and hence by Ĉ_grav and all its powers.⁵ Therefore, the relational dynamics of (3.22) decouples H^grav_sing from its complement H^grav_reg. In particular, if one starts out with a "regular" quantum state at T = 0, it remains regular throughout the evolution. In this precise sense, the singularity is resolved.
Next, let us write out explicitly the full Hamiltonian constraint (3.22):

∂²_T Ψ(λ₁,λ₂,v; T) = (πG/2) √v [ (v+2)√(v+4) Ψ⁺₄(λ₁,λ₂,v) − (v+2)√v Ψ⁺₀(λ₁,λ₂,v) − |v−2|√v Ψ⁻₀(λ₁,λ₂,v) + |v−2|√|v−4| Ψ⁻₄(λ₁,λ₂,v) ],   (3.35)

where Ψ±₀,₄ are defined as follows:

Ψ±₄(λ₁,λ₂,v) = (1/4) [ Ψ( ((v±4)/(v±2)) λ₁, ((v±2)/v) λ₂, v±4 ) + Ψ( ((v±2)/v) λ₁, ((v±4)/(v±2)) λ₂, v±4 )
   + Ψ( ((v±4)/(v±2)) λ₁, λ₂, v±4 ) + Ψ( ((v±2)/v) λ₁, λ₂, v±4 )
   + Ψ( λ₁, ((v±4)/(v±2)) λ₂, v±4 ) + Ψ( λ₁, ((v±2)/v) λ₂, v±4 ) ]   (3.36)

and

Ψ±₀(λ₁,λ₂,v) = (1/4) [ Ψ( ((v±2)/v) λ₁, (v/(v±2)) λ₂, v ) + Ψ( (v/(v±2)) λ₁, ((v±2)/v) λ₂, v )
   + Ψ( ((v±2)/v) λ₁, λ₂, v ) + Ψ( (v/(v±2)) λ₁, λ₂, v )
   + Ψ( λ₁, ((v±2)/v) λ₂, v ) + Ψ( λ₁, (v/(v±2)) λ₂, v ) ],   (3.37)

and where, as before, we have given the restriction of the image of Ĉ_grav to the positive octant. Because H^grav_reg is left invariant by the evolution, we can in fact restrict λ₁, λ₂, v to be strictly positive. On the right sides of (3.36) and (3.37), the arguments of Ψ can take negative values. However, since Ψ(λ₁,λ₂,v) = Ψ(|λ₁|,|λ₂|,|v|), we can just introduce absolute value signs on these arguments. Consequently, knowing the restriction of Ψ to the positive octant, (3.36) and (3.37) enable us to calculate its image under Ĉ_grav directly. In particular, numerical evolutions can be carried out by restricting oneself to the positive octant.
Let us now examine the structure of this equation. As in the isotropic case, the right side is a difference equation. As far as the v dependence is concerned, the steps are uniform: the argument of the wave function involves v−4, v, v+4, exactly as in the isotropic case. The step sizes are also the same as in [10] because, as noted above, our variable v is in precise agreement with that used in the isotropic case. There is again superselection. For each ε ∈ [0,4), let us introduce a 'lattice' L_ε consisting of the points v = 4n if ε = 0 and v = 2n + ε if ε ≠ 0.⁶ Then the quantum evolution (as well as the action of the Dirac observables) preserves the subspaces H^ε_phy consisting of states with v-support on L_ε. The most interesting of these sectors is the one labelled by ε = 0, since it contains the classically singular points v = 0. Therefore in what follows, unless otherwise stated, we will restrict ourselves to this sector.
The dependence of Ĉ_grav Ψ on λ₁, λ₂, by contrast, is much more difficult to control technically, because the first two arguments of the wave function cannot be chosen to lie on a regular lattice in any simple way. In particular, even if we started out with a wave function which has support only on a lattice, say λ₁ = nλ_o for some λ_o, the action of Ĉ_grav shifts the support to points such as λ₁ = [(v±2)/v] nλ_o which do not lie on this lattice. Thus, there is no obvious superselection with respect to λ₁ and λ₂; we have to work with the entire R² they span. Had it been permissible to set μ̄_i ∝ 1/√|p_i|, we could have restricted the λ_i to lie on a regular lattice [36]. Then, following [40], we could have repeated the strategy used successfully in the isotropic case in [11] to simplify dynamics by carrying out a Fourier transform to pass to variables which are conjugate to λ₁, λ₂. However, as remarked earlier, that choice of μ̄_i is inadmissible, and hence the strategy cannot be repeated in the Bianchi I case. Nonetheless, it is still feasible to carry out numerical simulations. For, if one knows the support of the quantum state at an initial time T_o and the number of time-steps across which one wants to evolve, one can calculate the number of points on a (irregular) grid in the λ₁-λ₂ plane on which the wave function will have support. Numerical work has in fact already commenced [41]. It would be interesting to investigate whether the efficient algorithms that have been introduced in the context of regular lattices [54] can be extended to this case.
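To make the growth of this irregular grid concrete, the sketch below enumerates the points that can appear in the support after a few steps, using the v-dependent rescaling factors quoted above. It is schematic (it tracks only λ₁ and v, the starting point is an arbitrary assumption, and exact rational arithmetic is used so that coinciding points are recognized).

```python
# Sketch: support points (lam1, v) generated by repeatedly applying the volume
# shifts v -> v, v +- 4 together with representative rescalings of lam1.
from fractions import Fraction

def reachable(lam1, v, steps):
    pts = {(Fraction(lam1), Fraction(v))}
    for _ in range(steps):
        new = set()
        for (l, u) in pts:
            for dv in (-4, 0, 4):
                w = u + dv
                if w <= 0:
                    continue               # stay in the regular sector v > 0
                for f in ((u+2)/u, (u-2)/u, u/(u+2), u/(u-2), Fraction(1)):
                    new.add((l*f, w))
        pts |= new
    return pts

for n in range(1, 4):
    print(n, "steps:", len(reachable(1, 8, n)), "support points")
```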
We will conclude this discussion by noting that it is possible to read off some qualitative features of the dynamics from (3.35)-(3.37). Since the steps in v of this difference equation are the same as those in the isotropic case, the dynamics of the volume (and also of the matter density ρ̂_matt, since p̂_(T) is a constant of motion) would be qualitatively similar to that in the isotropic case. What about anisotropies? The λ_I (I = 1, 2) do not feature in the overall numerical factors in (3.35); they appear only in the arguments of the wave functions. Under the action of Ĉ_grav, these arguments get rescaled by the factors (v±4)/(v±2), (v±2)/v and v/(v±2). For large volumes, or more precisely low densities, these factors go as 1 + O(ρ_matt/ρ_Pl). Hence, to leading order, we will recover the classical result that the a₁a₂a₃(H_i − H_j) are constants, where the a_i are the directional scale factors and H_i := d ln a_i/dt the directional Hubble parameters. Since quantum corrections go as ρ/ρ_Pl, they are utterly negligible away from the Planck regime.
In the next section we will discuss three important features of dynamics dictated by (3.35) which provide significant physical intuition in complementary directions.
IV. PROPERTIES OF THE LQC QUANTUM DYNAMICS
This section is divided into three parts. Since we have used the same general procedure as in the isotropic case, it is natural to ask how the quantum dynamics of (3.35) compares to that in [10]. In the first part we show that there is a natural projection from a dense subspace of the physical Hilbert space of the Bianchi I model to that of the Friedmann model which maps the Bianchi I Hamiltonian constraint to that of the Friedmann model. This result boosts confidence in the overall coherence and reliability of the quantization scheme used in LQC. In various isotropic models [10, 14-16, 18], one can derive certain effective equations. Somewhat surprisingly, for states which are semi-classical at a late initial time, they faithfully capture quantum dynamics throughout the entire evolution, including the bounce. The same considerations lead to effective equations in Bianchi I models, which were already analyzed by Chiou and Vandersloot in Appendix C of [37]. In the second sub-section we briefly discuss these equations and their consequences. In the third, we show that, as in the isotropic case [10,11], there is a precise sense in which the LQC quantum dynamics reduces to that of the Wheeler-DeWitt theory in the low curvature regime.
A. Relation to the LQC Friedmann dynamics
The problem of comparing the dynamics of a more general system with that of a restricted, symmetry reduced one has been discussed in the literature in several contexts. In the classical theory, symmetric states often provide symplectic sub-manifolds Γ_Res of the more general phase spaces Γ_Gen. Furthermore, the Γ_Res are preserved by the dynamics on Γ_Gen. Therefore, it is tempting to repeat the same strategy in the quantum theory. Indeed, sometimes it is possible to find natural sub-spaces H_Res of states with additional symmetry in the full Hilbert space H_Gen of the more general system. However, generically H_Res is not left invariant by the more general dynamics (see, e.g., [43,44]). In our case, one can introduce an isotropic sub-space H_Res in the quantum theory based on any given fiducial cell V: isotropic states correspond to wave functions Ψ(λ₁,λ₂,v) which have support only at points λ₁ = λ₂ = (v/2)^{1/3}. (But note that this sub-space is not invariantly defined; it is tied to V!) It is easy to check that the space H_Res of these states is not left invariant by the Bianchi I quantum dynamics (3.35).
However, this fact cannot be interpreted as saying that there is no simple relation between the quantum dynamics of the two theories: since the restriction to H_Res amounts to a sharp freezing of the anisotropic degrees of freedom, in view of the quantum uncertainty principle this procedure is not well suited to compare the quantum dynamics of the two systems. As pointed out in section I, a better strategy is to integrate out the extra, anisotropic degrees of freedom. This corresponds to a projection map from H_Gen to H_Res rather than an embedding of H_Res into H_Gen.
Consider first, as an elementary example, a particle moving in R³. Suppose that the potential depends only on z, so that the dynamics has a symmetry in the x, y directions. In the classical theory, there are several natural embeddings of the phase space Γ_Res into Γ_Gen. For example, we can set (z, p_z) → (x = x_o, y = y_o, z; p_x = 0, p_y = 0, p_z), and the Hamiltonian vector field of the full theory is then tangential to the images of each of these embeddings. However, in the quantum theory the Hilbert space H_Gen of the full system is L²(R³, d³x) and there is no natural embedding ψ(z) → Ψ(x, y, z). The classical strategy would suggest setting Ψ(x, y, z) = δ(x, x_o) δ(y, y_o) ψ(z), but this is not a normalizable state in H_Gen for any ψ(z). Even if one were to ignore this fact and try to evolve these states, one would find that they are not preserved by the full Hamiltonian operator Ĥ.
Note however that there is a natural projection P̂ from a dense subspace of H_Gen to that of H_Res:

Ψ(x, y, z) → (P̂Ψ)(z) := ∫ dx dy Ψ(x, y, z) ≡ ψ(z).   (4.2)
(The idea of using such a map already appeared in [42], where the map was defined between elements of Cyl* of the locally rotationally symmetric Bianchi I model and that of the Friedmann model.) Again, P̂ is a well defined projection from a dense subspace of the Bianchi I Hilbert space to a dense subspace of the Friedmann Hilbert space, consisting, for example, of states which have support only on a finite number of points. As is manifest from (4.2), its effect is to focus on volume by "integrating out" the anisotropic degrees of freedom with the same volume. Applying this projection map P̂ to Eq. (3.35), we find

∂²_T ψ(v; T) = (3πG/4) √v [ (v+2)√(v+4) ψ(v+4) − 2v√v ψ(v) + (v−2)√(v−4) ψ(v−4) ].   (4.3)

This is precisely the quantum constraint describing the LQC dynamics of the Friedmann model with lapse⁷ N = |p|^{3/2}. The reason for the exact agreement is two-fold. First, the Hamiltonian constraint Ĉ_grav of the Bianchi I model is a difference operator whose coefficients depend only on v and, second, the shift in the argument is dictated only by v. Thus, conceptually, λ₁, λ₂ are "inert directions" in the same sense that x, y are in the elementary example discussed above. To summarize, there is a simple, and exact, relation between the quantum dynamics of the two theories. It would be interesting to investigate if this result admits a suitable extension to other Bianchi models [33,55].
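In the discrete LQC setting the analog of (4.2) is a sum rather than an integral: at each fixed v one adds up the amplitudes over the anisotropy labels. A minimal sketch (Python; the sample state is an arbitrary illustrative dictionary, not data from the paper):

```python
# Sketch of the projection adapted to LQC: "integrate out" the anisotropies by
# summing the Bianchi I wave function over (lam1, lam2) at each fixed v.
from collections import defaultdict

def project(Psi):
    psi = defaultdict(float)
    for (lam1, lam2, v), amp in Psi.items():
        psi[v] += amp    # (P Psi)(v) = sum over lam1, lam2 of Psi(lam1, lam2, v)
    return dict(psi)

Psi = {(1.0, 2.0, 8): 0.5, (1.5, 4.0, 8): 0.1, (2.0, 2.0, 12): 0.7}
print(project(Psi))      # {8: 0.6, 12: 0.7}
```

Because the coefficients in (3.35) depend only on v, and the λ-rescalings in (3.36)-(3.37) merely relabel the support points summed over at each fixed v, this map sends solutions of the Bianchi I constraint to solutions of (4.3).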
In completely general situations, of course, this exact agreement will not persist: the projected dynamics will provide extremely non-trivial corrections to the dynamics of the simpler system. However, the BKL conjecture says that the dynamics of general relativity greatly simplifies near space-like singularities: In this regime, the time evolution at any one spatial point is well modelled by that of Bianchi I cosmology. Therefore, in a large class of situations there may well be a sense in which the quantum dynamics in the deep Planck regime can be projected to that of the Friedmann model with only small corrections. If so, the Planck scale quantum dynamics of the isotropic, homogeneous degree of freedom in the full theory will be much simpler than what one would have a priori expected.
B. Effective equations
Physically, the most interesting quantum states are those that are sharply peaked at a classical trajectory at late times. As explained in section I, in the isotropic case such states remain peaked at certain effective trajectories at all times, including the epoch during which the universe undergoes a quantum bounce. Thus, even in the deep Planck regime quantum physics is well captured by a smooth metric, although its dynamics can no longer be approximated by the classical Einstein's equations and its components now contain large, ℏ-dependent terms. The effective equations obeyed by these geometries were first derived using ideas from geometrical quantum mechanics [56,57]. However, the assumptions made in these derivations break down in the deep Planck regime. Therefore, a priori there was no reason to expect these equations to describe quantum dynamics so well also in the Planck regime. That they do was first shown by numerical simulations of the exact quantum equations [9,10] in the k=0, Λ=0 case. It was then realized that this model is in fact exactly soluble [11,51] and the power of the effective equations could be attributed to this property. However, k=0 models with non-zero cosmological constant and the closed k=1 models do not appear to be exactly soluble. Yet, numerical solutions of the exact quantum equations show that the effective equations continue to capture full quantum dynamics extremely well [14-16].
New light was shed on this phenomenon by recent work on a path integral formulation of quantum cosmology [58]. The idea here is to return to the original derivation of path integrals due to Feynman and Hibbs [59], starting from quantum mechanics. In the isotropic case, then, the strategy is to begin with the kinematics and dynamics of LQC and then rewrite the transition amplitudes as path integrals. The resulting framework has several novel features. First, because the LQC kinematics relies on quantum geometry, the paths that feature in the final integral are different from what one would have naively expected from the Wheeler-DeWitt theory. Second, the action that features in the measure is not the Einstein-Hilbert action but contains non-trivial quantum corrections. When expressed in the phase space language, L = pq̇ − H(p,q), the "Hamiltonian" H turns out to be precisely the effective Hamiltonian constraint derived in [56,57], even though this casting of the LQC transition amplitudes in the path integral language is exact and does not pre-suppose that we are away from the Planck regime. Now, in the path integral approach, we have the following general paradigm. Consider the equations obtained by varying the action that appears in the path integral. (Generally these are just the classical equations, but in LQC they turn out to be the effective equations of [10,16,57].) Fix a path representing a solution to these equations. If the action evaluated along this path is large compared to ℏ, then that solution is a good approximation to full quantum dynamics. If one applies this idea to isotropic LQC, one is led to conclude that solutions to the effective equations of [56,57] should be good approximations to full quantum dynamics also in the k=0, Λ≠0 and k=1 cases. This is precisely what one finds in numerical simulations. Thus, the path integral approach may well provide a deeper explanation of the power of effective equations. While such a path integral analysis is yet to be carried out in detail in the anisotropic case, because of the situation in the simpler cases it is of considerable interest to find effective equations and study their implications.
This task was already carried out by Chiou and Vandersloot in Appendix C of [37]. We will summarize the relevant results and briefly comment on the general picture that emerges.
Without loss of generality, we can restrict ourselves to the positive octant. Then the effective Hamiltonian constraint is given simply by the direct classical analog of (3.14):

$$\frac{p_{(T)}^2}{2} \;=\; \frac{p_1 p_2 p_3}{8\pi G \gamma^2 \Delta\,\ell_{\rm Pl}^2}\,\Big[\sin\bar\mu_1 c_1\,\sin\bar\mu_2 c_2 \,+\, \sin\bar\mu_2 c_2\,\sin\bar\mu_3 c_3 \,+\, \sin\bar\mu_3 c_3\,\sin\bar\mu_1 c_1\Big]\,. \qquad (4.4)$$

Since sin x is bounded by 1 for all x, this immediately implies that the matter density, ρ_matt = p²_(T)/2V² ≡ p²_(T)/2p₁p₂p₃, can never become greater than the critical density ρ_crit ≈ 0.41 ρ_Pl, first found in the isotropic case [10-12, 16, 18]. Since ρ becomes infinite at the big bang singularity in the classical evolution, there is a precise sense in which the singularity is resolved in the effective theory.
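Spelling out the bound is a short worked step that the text leaves implicit; the closed-form value of ρ_crit below assumes the standard area gap Δ = 4√3 πγ (in units of ℓ²_Pl), as in the isotropic models:

$$\rho_{\rm matt} \;=\; \frac{p_{(T)}^2}{2\,p_1p_2p_3} \;=\; \frac{\sin\bar\mu_1c_1\sin\bar\mu_2c_2 + \text{cyclic}}{8\pi G\gamma^2\Delta\,\ell_{\rm Pl}^2} \;\le\; \frac{3}{8\pi G\gamma^2\Delta\,\ell_{\rm Pl}^2} \;=:\; \rho_{\rm crit} \;\approx\; 0.41\,\rho_{\rm Pl}\,.$$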
Effective equations are obtained via Poisson brackets as in section II but using (4.4) in place of the classical Hamiltonian constraint. This gives, for example,

$$\frac{dp_1}{dT} \;=\; \{p_1,\,\mathcal{C}_H\} \;=\; \frac{p_1\,\sqrt{p_1p_2p_3}}{\gamma\sqrt{\Delta}\,\ell_{\rm Pl}}\;\cos\bar\mu_1c_1\,\big(\sin\bar\mu_2c_2 + \sin\bar\mu_3c_3\big)\,,$$

together with a lengthier companion equation for c₁; equations for p₂, c₂ and p₃, c₃ are obtained by cyclic permutations. These effective equations include "leading order quantum corrections" to the classical evolution equations (2.13) and (2.14). In any solution, these corrections become negligible in the distant past and in the distant future. As we noted in section II, the shear Σ defined in Eq. (2.21) is a constant of motion in the classical theory. This is no longer the case in the effective theory. However, one can show that it remains finite throughout the evolution and becomes approximately constant in the low curvature region, both in the distant past and in the distant future. Furthermore, its value in the distant future is the same as that in the distant past along any effective trajectory in the phase space. Vandersloot (personal communication) has also carried out numerical integration of these equations. In the isotropic case each effective trajectory undergoes a quantum bounce when the matter density ρ_matt achieves a critical value ρ_crit ≈ 0.41 ρ_Pl. As one might expect, the situation is now more complicated because of the additional degrees of freedom. First, there are now several distinct "bounces". More precisely, in addition to ρ_matt (or the scalar curvature), we now have to keep track of the three Hubble rates Hᵢ which directly control the Weyl curvature. In the backward evolution towards the classical big bang, Einstein's equations approximate the effective equations extremely well until the density or one of the Hᵢ enters the Planck regime. Then the quantum corrections start rising quickly. Their net effect is to dilute the quantity in question. Once the quantity exits the Planck regime as a result of this dilution, quantum geometry effects again become negligible. Thus, as in the isotropic case, one avoids the ultraviolet-infrared tension [21] because the quantum geometry effects are extremely strong in the Planck regime but die off extremely quickly as the system exits this regime. Secondly, the "volume" or "density bounce" occurs when the matter density is lower than ρ_crit. This is not surprising because what matters is the total energy density and now there is also a contribution from gravitational waves. Finally, although there are distinct "bounces" for the density (or scalar curvature) and the Hᵢ (or the Weyl curvature invariants), they all occur near each other in the relational time T.
There are indications that the general scenario provided by effective equations correctly captures the qualitative features of the full quantum evolution. However, the arguments are not conclusive. For conclusive evidence for (or against) this picture, one needs numerical simulations [41] of the exact quantum equations of section III D, or a detailed, path integral treatment of the Bianchi I models along the lines of [58].
C. Relation to the Wheeler-DeWitt Dynamics
Quantum dynamics of LQC is governed by a difference (rather than a differential) equation because of the quantum geometry effects. However, we will now show that, as in the isotropic case [10,11,16], the LQC quantum dynamics is well approximated by the Wheeler-DeWitt (WDW) differential equation away from the Planck regime, where quantum geometry effects become negligible.
In the WDW theory the directional scale factors, and hence the three λᵢ, can assume any real value, and it is simpler to work with the three λᵢ rather than with λ₁, λ₂, v = 2λ₁λ₂λ₃. Let us therefore set Ψ̃(λ₁, λ₂, λ₃; T) = Ψ(λ₁, λ₂, v; T) and assume that Ψ̃ admits a smooth extension to all real values of λᵢ. The idea is to pair various terms in Eqs. (3.36) and (3.37) in such a way that two of the three arguments of Ψ̃ are the same; one such pair is displayed in Eq. (4.8). Ignoring the common pre-factors in Eqs. (3.36)-(3.37), each paired difference can be expressed as a derivative term plus corrections of the type (1/λᵢλⱼ)ⁿ ∂ⁿₖ (√v Ψ̃) with n > 1. (Notice that the v′ in the denominator in front of the partial derivative will cancel the v + 2 pre-factor in Eq. (3.35).) One can suitably pair all terms in (3.36) and (3.37) and express them as differential operators with corrections which are small for large values of λᵢ. Let us ignore these corrections, i.e., assume that (1/λᵢλⱼ)ⁿ ∂ⁿₖ (√v Ψ̃) is negligible for n > 1 because Ψ̃ is slowly varying and we are in the low density, large scale-factor regime. Then we find that the LQC Hamiltonian constraint (3.35) reduces to a rather simple differential equation,

$$\partial_T^2\,(\sqrt{v}\,\tilde\Psi) \;=\; 4\pi G\,\Big[\lambda_1\lambda_2\,\partial_{\lambda_1}\partial_{\lambda_2} + \lambda_2\lambda_3\,\partial_{\lambda_2}\partial_{\lambda_3} + \lambda_3\lambda_1\,\partial_{\lambda_3}\partial_{\lambda_1}\Big]\,(\sqrt{v}\,\tilde\Psi)\,.$$

This equation can be further simplified by introducing σᵢ = log λᵢ and Φ = √v Ψ̃. The result is

$$\partial_T^2\,\Phi \;=\; 4\pi G\,\Big[\partial_{\sigma_1}\partial_{\sigma_2} + \partial_{\sigma_2}\partial_{\sigma_3} + \partial_{\sigma_3}\partial_{\sigma_1}\Big]\,\Phi\,,$$

where v is now given by 2 exp(Σᵢ σᵢ). This is precisely the equation we would have obtained if we had started from the classical Hamiltonian constraint, used the Schrödinger quantization and the "covariant factor ordering" of the constraint as in the WDW theory. Thus, the LQC Hamiltonian constraint reduces to the WDW equation under the assumption that Ψ̃ is slowly varying, in the sense that (1/λᵢλⱼ)ⁿ ∂ⁿₖ (√v Ψ̃) can be neglected for n > 1 relative to the term for n = 1. Since (λᵢλⱼ)² is essentially the area of the i-j face of the fiducial cell V in Planck units, this should be an excellent approximation well away from the Planck regime. However, in the Planck regime itself the terms which are neglected in the LQC dynamics are comparable to the terms which are kept, whence, as in the isotropic case, the WDW evolution completely fails to approximate the LQC dynamics.
V. DISCUSSION
In this paper we extended the "improved" LQC dynamics of Friedmann space-times [10] to obtain a coherent quantum theory of Bianchi I models. As in the isotropic case, we restricted the matter source to be a massless scalar field since it serves as a viable relational time parameter (a la Leibniz) both in the classical and quantum theories. However, it is rather straightforward to accommodate additional matter fields in this framework.
To incorporate the Bianchi I model, we had to overcome several significant obstacles. First, using discrete symmetries we showed that to specify dynamics it suffices to focus just on the positive octant. This simplified our task considerably. Second, in section III B we introduced a more precise correspondence between LQG and LQC and used it to fix the parameters μ̄ᵢ that determine the elementary plaquettes, holonomies around which define the curvature operator F̂_ab^k. This procedure led us to the expressions μ̄₁² = (|p₁| Δℓ²_Pl)/|p₂p₃|, etc. They reduce to the expression μ̄² = (Δℓ²_Pl)/|p| of the isotropic models [10,16,18]. But even there, the current reasoning has the advantage that it uses only quantum geometry, avoiding reference to classical areas even in the intermediate steps. However, because of this rather complicated dependence of μ̄ᵢ on pᵢ, the task of defining operators sin μ̄ᵢcᵢ seems hopelessly difficult at first. Indeed, this was the key reason why the earlier treatments [36, 37, 40] took a short cut and simply set μ̄ᵢ² = (Δℓ²_Pl)/|pᵢ| by appealing to the relation μ̄² = (Δℓ²_Pl)/|p| in the isotropic case. With this choice, quantization of the Hamiltonian constraint became straightforward and the final Bianchi I quantum theory resembled three copies of that of the Friedmann model. However, this result had the physically unacceptable consequence that significant departures from general relativity could occur in "tame" situations. By a non-trivial extension of the geometrical reasoning used in the isotropic case, in section III C we were able to define the operators sin μ̄ᵢcᵢ for our expressions of μ̄ᵢ. However, the structure of the resulting Hamiltonian constraint turned out to be rather opaque. To simplify its form, in section III D we introduced volume as one of the arguments of the wave functions. The action of the gravitational part of the Hamiltonian constraint then became transparent: it turned out to be a difference operator where the multiplicative coefficients in individual terms depend only on volume and the change in the arguments of the wave functions also depends only on volume; individual anisotropies do not feature (see (3.35)-(3.37)). This simplification enabled us to show that the sector H^grav_reg of quantum states which have no support on classically singular configurations is preserved by quantum dynamics. In this precise sense the big-bang singularity is resolved. Furthermore, this quantum dynamics is free from the physical drawbacks of the older scheme mentioned above.
In section IV we explored three consequences of quantum dynamics in some detail. First, we showed that there is a projection map P̂ : H_Gen → H_Res from the Hilbert space of the more general Bianchi I model to that of the more restricted Friedmann model which maps the Bianchi I quantum constraint exactly to the Friedmann quantum constraint. This is possible because, as noted above, it is just the volume, rather than the anisotropies, that governs the action of the Bianchi I quantum constraint. This result is of considerable interest because, in view of the BKL conjecture, it suggests that near generic space-like singularities the LQC of Friedmann models may capture qualitative features of the full LQG dynamics of the isotropic, homogeneous degree of freedom. In section IV B we briefly recalled the effective equations of Chiou and Vandersloot (see Appendix C of [37]). These equations provide intuition for the rich structure of quantum bounces in the Bianchi I model. Their analysis suggests that classical general relativity is an excellent approximation away from the Planck regime. However, in the Planck regime quantum geometry effects rise steeply and forcefully counter the tendency of the classical equations to drive the matter density, the Ricci scalar and the Weyl invariants to infinity. (In particular, as in the isotropic case, the matter density is again bounded above by ρ_crit ≈ 0.41 ρ_Pl.) Thus the quantum geometry effects dilute these quantities and, once a quantity exits the Planck regime, classical general relativity again becomes an excellent approximation. In section IV C we showed that, as in the isotropic case [10,11,16], there is a precise sense in which LQC dynamics is well approximated by that of the WDW theory once quantum geometry effects become negligible.
The rather complicated dependence of μ̄ᵢ on pᵢ is also necessary to remove a fundamental conceptual limitation of the older treatments of the Bianchi I model. Recall that, because we have homogeneity and the spatial topology is non-compact, we have to introduce a fiducial cell V to construct a Lagrangian or a Hamiltonian framework. Of course, the final physical results must be independent of this choice. At first this seems like an innocuous requirement but it turns out to be rather powerful. We will now recall from [40] the argument that this condition is violated by the simpler choice μ̄ᵢ² = (Δℓ²_Pl)/|pᵢ| but respected by the more complicated choice we were led to from LQG.
For definiteness, let us fix a fiducial metric ᵒq_ab and denote by Lᵢ the lengths of the edges of the fiducial cell V. Suppose we were to use a different cell V′ whose edges have lengths L′ᵢ = βᵢLᵢ (no summation over i). Since the basic canonical fields A^i_a and E^a_i are insensitive to the choice of the cell, Eq. (2.3) implies that the labels cᵢ and pᵢ we used to characterize them change to c′₁ = β₁c₁, p′₁ = β₂β₃p₁, etc. The gravitational part of the classical Hamiltonian constraint (2.12) is just rescaled by an overall factor (β₁β₂β₃)² and the inverse symplectic structure is rescaled by (β₁β₂β₃)⁻¹. Hence the Hamiltonian vector field is rescaled by (β₁β₂β₃), exactly as it should be because the lapse is rescaled by the same factor. Thus, as one would expect, the classical Hamiltonian flow is insensitive to the change V → V′. What is the situation in the quantum theory? Physical states belong to the kernel of the Hamiltonian constraint operator Ĉ_H, whence the two quantum theories will carry the same physics only if Ĉ_H is changed at most by an overall rescaling. The analysis is a bit more involved than in the classical case because Ĉ_grav involves factors of sin μ̄ᵢcᵢ. Now, under V → V′, our μ̄ᵢ transform as μ̄₁ → μ̄′₁ = β₁⁻¹ μ̄₁, whence μ̄′₁c′₁ = μ̄₁c₁, etc., and the Hamiltonian constraint (3.14) is rescaled by an overall multiplicative factor (β₁β₂β₃)², just as in the classical theory. What happens if we set μ̄ᵢ² = Δℓ²_Pl/|pᵢ| as in [36,37,40]? Then we are led to μ̄′₁c′₁ = (β₁/√(β₂β₃)) μ̄₁c₁, etc. Since the constraint (3.14) is a sum of terms of the type p₁p₂|p₃| sin μ̄₁c₁ sin μ̄₂c₂, it has a rather uncontrolled transformation property and is not simply rescaled by an overall factor. It is then not surprising that, in the Planck regime, the dynamical predictions of the resulting quantum theory (as well as of the effective theory) depend on the choice of the elementary cell. It is rather remarkable that the more complicated form of μ̄ᵢ that we are led to from LQG kinematics has exactly the right form to make quantum dynamics insensitive to the choice of the fiducial cell V. As mentioned above, it also ensures that the predictions of the quantum theory are free of the drawbacks of the earlier treatments [36], such as the correlation between the bounce and "directional densities" which do not have an invariant significance.
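The invariance claimed for μ̄′₁c′₁ can be checked in one line from μ̄₁² = Δℓ²_Pl |p₁|/|p₂p₃| (a worked step; the positive octant is assumed so the absolute values can be dropped):

$$\bar\mu_1' c_1' \;=\; \sqrt{\frac{\Delta\ell_{\rm Pl}^2\,(\beta_2\beta_3\, p_1)}{(\beta_3\beta_1\, p_2)(\beta_1\beta_2\, p_3)}}\;\beta_1 c_1 \;=\; \frac{1}{\beta_1}\sqrt{\frac{\Delta\ell_{\rm Pl}^2\, p_1}{p_2 p_3}}\;\beta_1 c_1 \;=\; \bar\mu_1 c_1\,,$$

so each sin μ̄ᵢcᵢ, and hence the constraint up to the overall (β₁β₂β₃)² factor, is unchanged.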
From physical considerations, as in the isotropic case, it would be most interesting to start at a "late time" with states that are sharply peaked at a classical solution in which the three scale factors assume values for which the curvature is "tame" and p_(T) is very large compared to ℏ in the classical units c=G=1. One would then evolve these states backward and forward in the "internal" time T. As we just discussed, analytical considerations show that, since the initial wave function is in H^grav_reg, it will continue to be in that sub-space; there is no danger that the expectation values of curvature, anisotropies or density would diverge. But several important questions remain. Are there quantum bounces with a pre-big-bang branch again corresponding to a large, classical universe in the distant past? Is there a clear distinction between evolutions of data in which there are significant initial anisotropies and data which represent only perturbations on isotropic situations? Even in the second case, do anisotropies grow (or decay) following predictions of the classical theory, or are there noticeable deviations because of accumulation of quantum effects over large time periods? Numerical simulations of the LQC equations are essential to provide confidence in the general scenario suggested by effective equations and to supply us with detailed Planck scale physics.
Finally, let us return to full LQG. At the present stage of development, there appears to be considerable freedom in the definition of the quantum Hamiltonian constraint in the full theory. Furthermore, our current understanding of the physical implications of these choices is quite limited. Already in the isotropic models, the "improved" dynamics scheme provided some useful lessons: it brought out the fact that these choices can be non-trivially narrowed down by carefully analyzing conceptual issues (e.g., requiring that the physical results should be independent of auxiliary structures introduced in the intermediate steps) and by working out the physical consequences of the theory in detail (to ensure that the quantum geometry effects are not dominant in the low energy regime). Rather innocuous choices, such as those made in arriving at the older "μ_o-scheme", can lead to unacceptable consequences on both these fronts [12]. The Bianchi I analysis has sharpened these lessons considerably. The fact that the kinematical interplay between LQG and LQC has a deep impact on the viability of quantum dynamics is especially revealing. A quantum analysis of inhomogeneous perturbations around Bianchi I backgrounds is therefore a promising direction for understanding the physical implications of the choices that have to be made in the definition of the Hamiltonian constraint in full LQG. Such an analysis is likely to narrow down the choices and to single out viable quantization schemes in LQG with good semi-classical behavior.
APPENDIX A: PARITY SYMMETRIES
In this appendix we recall and extend results on parity symmetries obtained in [46]. In non-gravitational physics, parity transformations are normally taken to be discrete diffeomorphisms xⁱ → −xⁱ in the physical space which are isometries of the flat 3-metric thereon. In the phase space formulation of general relativity, we do not have a flat metric, or indeed any fixed metric. However, if the dynamical variables have internal indices, such as the triads and connections used in LQG, we can use the fact that the internal space I is a vector space equipped with a flat metric q_ij to define parity operations on the internal indices. Associated with any unit internal vector ξⁱ, there is a parity operator Π_ξ which reflects the internal vectors in the 2-plane orthogonal to ξ. This operation induces a natural action on the triads e^a_i, the connections A^i_a and the conjugate momenta P^a_i := (1/8πGγ) E^a_i (since they are internal vectors or co-vectors). It turns out that the e^a_i are proper internal co-vectors, while A^i_a and P^a_i are pseudo internal vectors and co-vectors, respectively. These geometrical considerations show that the Barbero-Immirzi parameter γ must change sign under any one of these parity operations, i.e., if it has the value |γ| for, say, positively oriented triads, it should have the value −|γ| for negatively oriented triads. Its value on degenerate triads is ambiguous, so on the degenerate sector we cannot unambiguously recover the triads e^a_i from the momenta P^a_i. If one were to make γ a dynamical field [61,62], it follows that the field should be a pseudo-scalar under internal parity transformations; geometrical considerations involving torsion have led to the same conclusion in [62]. (For details, see [60].)
In the diagonal Bianchi I model, we can restrict ourselves to just three parity operations Πᵢ. Under their action, the canonical variables cᵢ, pᵢ transform as follows:

Π₁(c₁, c₂, c₃) = (c₁, −c₂, −c₃),   Π₁(p₁, p₂, p₃) = (−p₁, p₂, p₃),   (A1)

and the action of Π₂, Π₃ is given by cyclic permutations. Under any of these maps Πᵢ, the Hamiltonian (2.12) is left invariant. This is just as one would expect, because the Πᵢ are simply large gauge transformations of the theory under which the physical metric q_ab and the extrinsic curvature K_ab do not change. It is clear from the action (A1) that if one knows the dynamical trajectories on the octant pᵢ ≥ 0 of the phase space, then dynamical trajectories on any other octant can be obtained just by applying a suitable (combination of) Πᵢ. Therefore, in the classical theory one can restrict one's attention just to the positive octant.
Let us now turn to the quantum theory. We now have three operators Π̂ᵢ. Their action on states is induced by the classical maps (A1); in particular, on the elementary operators one finds

Π̂₁ (sin μ̄₁c₁) Π̂₁⁻¹ = sin μ̄₁c₁,   Π̂₂ (sin μ̄₁c₁) Π̂₂⁻¹ = −sin μ̄₁c₁ = Π̂₃ (sin μ̄₁c₁) Π̂₃⁻¹.

These transformation properties of sin μ̄₁c₁ under Π̂ᵢ simply mirror the transformation properties of c₁ under the three parity operations Πᵢ in the classical theory. (Note that, because of the absolute value signs in the expressions (3.9), the μ̄ᵢ do not change under any of the parity maps.) From Eqs. (3.20)-(3.21) it now immediately follows that the gravitational part of the Hamiltonian constraint is left invariant under Π̂ᵢ. Since p̂²_(T) is manifestly invariant, we have

Π̂ᵢ Ĉ_H = Ĉ_H Π̂ᵢ,

just as in the classical theory. Because of this invariance property, given any state Ψ ∈ H^grav_kin, the restriction to the positive octant of its image under Ĉ_grav determines its image everywhere on H^grav_kin. As we saw in section III D, this property simplifies the task of finding the explicit action of the Hamiltonian constraint considerably.
On the nature of the X-ray absorption in the Seyfert 2 galaxy NGC 4507
We present results of the ASCA observation of the Seyfert 2 galaxy NGC 4507. The 0.5-10 keV spectrum is rather complex and consists of several components: (1) a hard X-ray power law heavily absorbed by a column density of about 3 × 10^23 cm^-2, (2) a narrow Fe Kα line at 6.4 keV, (3) soft continuum emission well above the extrapolation of the absorbed hard power law, (4) a narrow emission line at about 0.9 keV. The line energy, consistent with highly ionized Neon (Ne IX), may indicate that the soft X-ray emission derives from a combination of resonant scattering and fluorescence in a photoionized gas. Some contribution to the soft X-ray spectrum from thermal emission, as a blend of Fe L lines, by a starburst component in the host galaxy cannot be ruled out with the present data.
INTRODUCTION
Studies of X-ray spectra of Seyfert 2 galaxies above a few keV (Awaki & Koyama 1993; Salvati et al. 1997) have revealed the presence of highly obscured nuclei with power law spectra and Fe Kα lines similar to Seyfert 1 objects, lending further support to the popular 'Unification' models for Active Galactic Nuclei (AGNs) (Antonucci 1993 and references therein). In these models the viewing angle explains most of the observed differences among Seyfert 1 and Seyfert 2 galaxies in terms of absorption by circumnuclear matter, possibly a molecular torus. If the orientation is such that the line of sight intercepts the torus, the Optical/UV radiation, including the Broad Lines, as well as the soft X-rays from the nucleus are blocked and the object is classified as a type 2. A fraction of the order of a few per cent of the nuclear radiation can be detected in scattered light, the scattering medium being a warm plasma visible to both the nucleus and the distant observer. The origin and the physical state of such a reflecting mirror are poorly known at present; this is unfortunate, as the mirror is a key component of the Unified model (Krolik & Kallman 1987) and several features originating from it are expected in X-rays (Krolik & Kriss 1995; Matt, Brandt & Fabian 1996), which should be observable at energies where the torus completely blocks the nuclear radiation. However, despite extensive studies of Seyfert 2 galaxies with ROSAT in the soft X-ray band (Mulchaey et al. 1993; Turner et al. 1993), the relatively low spectral resolution of the PSPC and the weakness of Seyfert 2 galaxies in the 0.1-3.0 keV energy range hampered detailed investigations of the soft component.
NGC 4507 is a nearby (z = 0.011) barred spiral galaxy, classified as SBab(rs)I by Sandage & Brucato (1979). Optical spectra of the nucleus show emission lines characteristic of a Seyfert 2 type without any detectable broad line component (Durret & Bergeron 1986). The relatively high luminosity in the [O iii] line (∼ 6.5 × 10^41 ergs s^-1; Mulchaey et al. 1994), which is thought to be a good indicator of the luminosity of the active nucleus, suggests the presence of a powerful source of ionizing radiation. NGC 4507 is also a very bright far infrared source, with a 60-100 µm luminosity derived from IRAS fluxes of ∼ 10^44 ergs s^-1. In soft X-rays NGC 4507 is rather faint and only a marginal detection with the Einstein IPC has been reported. At higher energies (E > 3 keV) it is a bright X-ray source and has been observed by several X-ray missions. A Ginga observation revealed a hard power law with a flat slope (Γ_2-20keV ∼ 1.4±0.2), a high column density (N_H ∼ 3.7±0.5 × 10^23 cm^-2) and a strong iron emission line (EW ∼ 400±100 eV) (Awaki et al. 1991; Smith & Done 1996). OSSE observations (Bassani et al. 1995) showed a steeper photon index, Γ ∼ 2.1±0.3 in the 50-200 keV energy range, in agreement with those of Seyfert 1 galaxies in the same energy range (Johnson et al. 1994).
Here a 40 ks ASCA observation of this bright Seyfert 2 galaxy is presented, with the aim of a better understanding of the soft-to-medium X-ray emission. Throughout the paper a Hubble constant H_0 = 50 km s^-1 Mpc^-1 and a deceleration parameter q_0 = 0 are assumed.
ASCA OBSERVATIONS AND DATA REDUCTION
NGC 4507 was observed with the Gas Imaging Spectrometer (GIS) and the Solid-state Imaging Spectrometer (SIS) onboard the ASCA satellite (Tanaka, Inoue & Holt 1994) over the period 1994 February 12-13. The SIS data were obtained using the 2-CCD readout mode, where 2 CCD chips are exposed on each SIS with the target at the nominal position. The source position in SIS1 was slightly offset from the nominal value. For this reason a fraction of the order of 20-30 per cent of the source flux was lost in the gap between the chips. Following a software-related problem on board ASCA, the data collected from the GIS3 were corrupted. As they could not be easily recovered, they were excluded from the analysis. In the following, GIS refers to GIS2 data only. All the SIS data were collected in FAINT telemetry mode, which maximises the CCD spectral resolution. The following criteria have been applied for the selection of good time intervals: the spacecraft was outside the South Atlantic Anomaly, the elevation angle from the Earth's limb was > 5 degrees, the minimum bright Earth angle was > 25 degrees, and the magnetic cutoff rigidity was greater than 6 GeV/c for SIS data and greater than 7 GeV/c for GIS data. 'Hot' and flickering pixels were removed from the SIS data, and rise-time rejection was applied to exclude particle events from the GIS data. SIS grades 0, 2, 3 and 4 were considered for the analysis. Finally, a short period of unstable pointing at the phase of target acquisition was removed manually. After applying the above selection criteria, 25 ks for each SIS and 40 ks for the GIS detector were collected.
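A minimal sketch of this screening step is given below. It is only illustrative: the housekeeping field names (`saa`, `elv`, `br_earth`, `cor`) and the table layout are hypothetical, not taken from the ASCA software; only the numerical thresholds come from the text.

```python
import numpy as np

def good_time_mask(hk, detector):
    """Apply the screening criteria quoted in the text to housekeeping rows.

    hk is a structured array with illustrative fields:
      saa      -- True while inside the South Atlantic Anomaly
      elv      -- elevation angle above the Earth's limb (deg)
      br_earth -- angle from the bright Earth (deg)
      cor      -- magnetic cutoff rigidity (GeV/c)
    """
    rigidity_min = 6.0 if detector == "SIS" else 7.0
    return (~hk["saa"]
            & (hk["elv"] > 5.0)
            & (hk["br_earth"] > 25.0)
            & (hk["cor"] > rigidity_min))

# Illustrative usage with fabricated housekeeping samples:
hk = np.array([(False, 12.0, 40.0, 8.0), (True, 12.0, 40.0, 8.0),
               (False, 3.0, 40.0, 8.0), (False, 12.0, 40.0, 6.5)],
              dtype=[("saa", "?"), ("elv", "f8"), ("br_earth", "f8"), ("cor", "f8")])
print(good_time_mask(hk, "SIS"))   # [ True False False  True]
print(good_time_mask(hk, "GIS"))   # [ True False False False]
```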
We fit the GIS and SIS data above 3 keV with an absorbed power law plus a narrow Gaussian line. The best fit line energies in the three instruments are in good agreement. Since the gains of the SIS and GIS were within 1 per cent of their nominal values, the source spectrum can be considered essentially unaffected by instrumental gain offsets.
NGC 4507 is clearly detected in all instruments together with a nearby (∼ 6 arcmin) bright (V = 5.8) A0V star. In the SIS0 the star is detected at the edge of the other CCD chip, while it is barely visible in the SIS1. Moreover, the gap between the two chips prevents significant contamination. In order to estimate the possible contamination in the GIS field we have analysed the star spectrum. A thermal Raymond-Smith model (kT ∼ 1 keV) provides a good fit to the data, with some evidence of excess flux at higher energies suggesting that contamination effects between the two sources may be relevant. For this reason we have considered in the subsequent GIS analysis only counts at energies greater than 3 keV. Circular extraction cells of radius ∼ 3.5 arcmin for the SIS and ∼ 4 arcmin for the GIS, centered on NGC 4507, were used, with corresponding background regions defined in source-free areas of the same CCD chip for the SIS and from calibration background regions (blank sky) for the GIS.
RESULTS
Source plus background light-curves were accumulated for all the instruments, showing no clear evidence for variability. GIS and SIS spectra were binned with more than 20 counts per bin in order to apply χ² statistics. The response matrices, effective areas and XRT PSF used were those released with the latest version of FTOOLS (3.6). Since the spectral parameters obtained by fitting an absorbed power law plus a soft component to the SIS0 and SIS1 spectra were all consistent at the ∼ 90 per cent level, and the residuals from those fits were very similar, we have added the two SIS spectra. In the overlapping energy range (3-10 keV) the GIS spectrum is consistent with the SIS results except for a slight mismatch (< 10 per cent) in the relative normalizations. In the following, the spectral results refer to the combined SIS0+SIS1 spectrum fitted simultaneously with the GIS2 one, leaving the relative normalizations free to vary. Unless explicitly stated, all the quoted errors correspond to 90 per cent confidence intervals for one interesting parameter (Δχ² = 2.71).
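The Δχ² = 2.71 threshold quoted here is simply the 90 per cent quantile of a χ² distribution with one degree of freedom, as a two-line check confirms:

```python
from scipy.stats import chi2

# 90 per cent confidence for one interesting parameter:
print(chi2.ppf(0.90, df=1))    # 2.7055... ~= 2.71
# For comparison, 68.3 per cent ("1 sigma") gives the familiar value of 1.0:
print(chi2.ppf(0.683, df=1))   # ~1.00
```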
The continuum emission requires at least two components: a heavily absorbed hard X-ray power law at high energies and a soft component below ∼ 3 keV. However, these two components alone do not provide an adequate fit to the data. Two line-like excesses are clearly visible in the data/model ratio: one around 6.4 keV, indicative of iron Kα emission (a feature commonly seen in Seyfert galaxies; Nandra & Pounds 1994; Mushotzky et al. 1995; Nandra et al. 1997), and one around 0.9 keV, whose possible origin will be discussed later on. Moreover, strong deviations are present in the 1-3 keV region (Fig. 1). In the following subsections we provide a detailed description of the spectral complexity of NGC 4507.
The hard X-ray component
The hard X-ray spectrum is well fitted by an absorbed (N_H ∼ 10^23.5 cm^-2) power law model plus a narrow Fe Kα line, whose rest energy (E = 6.36 ± 0.03 keV) indicates emission from neutral matter. The line EW (190 ± 40 eV) is consistent with the mean value of Seyfert 1 galaxies (Nandra et al. 1997). The power law photon index is somewhat dependent on the precise spectral model chosen for the broad band spectrum. Fitting the data in the 3-10 keV energy range we obtain a best fit value of Γ = 1.61 ± 0.20, which lies at the flatter end of the Seyfert 1 photon index distribution, characterized by an average value of Γ ≃ 1.9 (Nandra & Pounds 1994; Nandra et al. 1997). A reflection component has been added to the primary power law spectrum; however, given the relatively low effective area of ASCA above 6-7 keV, its amplitude is unconstrained by the present data and will not be further considered. A summary of the spectral parameters is reported in Table 1.
The spectrum over the full energy range
The whole spectrum was then fitted using an absorbed power law plus iron line as a baseline model for the high energy spectrum, while several different models were fitted to the low energy spectrum. In all cases the absorption by our own Galaxy has been fixed at the value N_HGal = 7.19 × 10^20 cm^-2 (Dickey & Lockman 1990). The results are reported in Table 2.
Simple thermal models (bremsstrahlung and Raymond-Smith) for the soft component do not provide good fits, leaving strong residuals at all energies below 3 keV. A steep Γ = 2.43 ± 0.22 power law model for the soft X-ray band gives instead a better description of the data, with a relative normalization with respect to the hard power law of ∼ 2 per cent; however, also in this case the fit is rather poor (Table 2), since two remarkable structures appear in the residuals: a line-like feature around 0.9 keV and a waving behaviour in the 1-3 keV range with a 'hump' between 2 and 3 keV (Fig. 1). It appears clear that a single power law for the soft X-ray spectrum, which could be interpreted as the fraction of the direct continuum emission scattered into our line of sight by the reflecting mirror, is too simple an approximation. The addition of a narrow line gives a significant improvement (Δχ² ≃ 19), with a best fit line energy of 0.90 ± 0.02 keV and EW ∼ 140 ± 50 eV.
The slope of the low energy power law is steeper than the hard one and would be inconsistent with a simple scattering model. However, the soft X-ray nuclear spectrum scattered into our line of sight could be intrinsically steeper, because it is affected by the soft excess frequently observed in Seyfert 1 galaxies. If the soft photon index is forced to be the same as the hard one, the fit is slightly worse, as some flux excess is present in the 0.5-0.7 keV region. A possible interpretation of this excess as a blend of unresolved oxygen lines is discussed below.
An inherent limitation of the ASCA data is that the complexity of the spectrum, in terms of absorption and emission line features, hampers a precise estimate of the spectral slope. For this reason, in the following we assume that the slope of the scattered power law is equal to the hard one (Γ_s ≡ Γ_h).
A combination of a thermal model and a power law for the soft spectrum clearly improves the fit, owing to the greater number of free parameters. The derived temperature for the thermal component (kT ∼ 0.7 ± 0.1 keV) is much lower than the typical values derived for late type galaxies such as NGC 4507 (kT ∼ 3-5 keV), but consistent with the characteristic temperatures inferred from recent ASCA observations of starburst galaxies (Serlemitsos, Ptak & Yaqoob 1997). Leaving the abundances of the Raymond-Smith model free to vary, there is no improvement in the fit. We note that, even with such a complex model, the line-like feature at 0.9 keV is not completely accounted for (unless Neon abundances are left free to vary) and, moreover, the residuals below ∼ 0.8 keV are steeply increasing towards lower energies (Fig. 2).
The shape of the residuals below ∼ 3 keV and the evidence of a line-like excess at ∼ 0.9 keV suggest, instead (see below), that ionized absorption/reflection of the nuclear radiation by a warm scattering mirror could be important in the modelling of the soft X-ray spectrum.
It should be noted that the putative ionized absorber has significant effects only on the soft scattered component, while the hard X-ray emission is absorbed by almost neutral material as described in the previous section.
A composite cold plus warm absorber model has been fitted to the overall spectrum of NGC 4507 using the ABSORI model available in XSPEC 9.0. Given the relatively large number of free parameters of the warm absorber model, we have fixed the temperature of the warm material at T = 10^5 K (the temperature dependence of this model is very weak in the range T = 10^4.5-10^5.5 K), the iron abundance at the solar value, and the slope of the primary power law at the same value as the hard component, as is expected if the soft emission is scattered into our line of sight by the warm mirror. The fitted parameters are thus the column density N_W of the warm gas and the ionization parameter ξ, defined by ξ = L/(n_e R²). A summary of the derived values is reported in Table 3. It should be noted that such a model only describes the photoabsorption of a background X-ray source, while the case of NGC 4507 is possibly more complex owing to the different geometry and physical parameters of the scattering region (see below and § 4.2.1). For this reason ABSORI should be considered a first order approximation for the description of the soft X-ray continuum in NGC 4507.
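Since ξ = L/(n_e R²), a fitted ionization parameter translates directly into a density-radius constraint. The numbers below are purely illustrative (ξ and n_e are placeholders, not the fitted values from Table 3); L is taken as the absorption-corrected 2-10 keV luminosity quoted later in the text.

```python
import math

L = 3.7e43      # erg/s, absorption-corrected 2-10 keV luminosity of NGC 4507
xi = 100.0      # erg cm/s, illustrative ionization parameter
n_e = 1.0e4     # cm^-3, illustrative electron density of the warm gas

R = math.sqrt(L / (n_e * xi))   # distance of the warm gas from the nucleus
print(f"R ~ {R:.2e} cm ~ {R / 3.086e18:.2f} pc")
```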
With such a model, the waving structure in the 1-3 keV range can be almost entirely accounted for by the characteristic absorption features of the warm gas; the only remaining feature is a strong line at ∼ 0.9 keV. A highly significant improvement (at > 99.99 per cent according to an F-test) has been obtained by adding a narrow Gaussian line, with EW ∼ 370 eV and ∼ 1.3 keV with respect to the unabsorbed (Fig. 3) and absorbed continuum, respectively, the latter value in agreement with the calculations by Netzer (1996).
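The F-test quoted here assesses the improvement obtained by adding the line parameters. A minimal sketch follows; the χ² values and degrees of freedom are illustrative placeholders, since the actual fit statistics appear in the tables rather than in the text.

```python
from scipy.stats import f

def ftest_pvalue(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Chance probability that extra parameters improve chi^2 this much."""
    d_dof = dof_simple - dof_complex
    fval = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    return f.sf(fval, d_dof, dof_complex)

# Illustrative numbers only (not the values from the paper's tables):
p = ftest_pvalue(520.0, 450, 480.0, 448)
print(p < 1e-4)   # True -> the added line is significant at > 99.99 per cent
```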
The line energy (E = 0.92 ± 0.02 keV) is consistent with Ne ix and may be produced in the photoionized gas, with contributions from both resonant scattering and recombination emission (Matt et al. 1996). The relative contribution of the two components depends strongly on the optical depth of the emitting matter, and will be discussed in the next section. We note that the Ne ix line is among the most prominent features predicted in the warm absorber model of Netzer (1996; see his Fig. 4), owing to the relatively high abundance of Neon and to the large continuum absorption of ionized gas around 0.8-0.9 keV.
[Table 3: 0.5-10 keV spectral fits with warm absorber. Columns: photon spectral index; cold absorption column density (units of 10^22 cm^-2); warm absorption column density (units of 10^22 cm^-2); ionization parameter (ergs cm s^-1); thermal model temperature (keV); soft line energy (keV); soft line equivalent width (eV) with respect to the unabsorbed continuum; total χ² and degrees of freedom.]
An alternative possibility for the line-like emission at 0.9 keV, in terms of thermal emission from a hot thin plasma possibly associated with a starburst component, is also viable. The improvement with respect to a fit with a warm absorber model for the soft spectrum is significant (see Table 3), even if this fit is not as good as the one with a narrow line at 0.9 keV (Δχ² ∼ 5). Leaving the abundances of the Raymond-Smith model free to vary, there is no improvement in the fit quality. We find a significantly larger ionized column density and ionization parameter with respect to the average properties of the warm absorbers in Seyfert 1s (<ξ> ∼ 30 ergs cm s^-1, <N_W> ∼ a few × 10^21 cm^-2). The larger N_W is required by the fit to account for the spectral behaviour in the ∼ 1.5-3 keV energy range (Fig. 1). As a consequence, a greater ξ is needed to account for the data below ∼ 1.5 keV. We note, however, that these values do not necessarily require a different ionization structure between NGC 4507 and Seyfert 1s, but can be explained with a higher inclination angle (i.e. a 'warm scattering mirror' rather than a 'warm absorber'). It is interesting to note that even larger values of N_W and ξ have been recently reported for the Seyfert 2 galaxy Mkn 3.
The obscured nucleus
The high energy (> 3 keV) power law slope is relatively flat, but consistent with a typical Seyfert 1 spectrum. The absorption column density, spectral slope and flux level are consistent with the previous Ginga observation, without any evidence of flux and/or spectral variability over a timescale of about 4 years (Awaki et al. 1991; Smith & Done 1996). On the other hand, the source was a factor of 2 brighter in the 2-10 keV band during the 1984 EXOSAT observation (Polletta et al. 1996).
The observed 2-10 keV flux of the hard power law component is 2.1 × 10^-11 ergs cm^-2 s^-1, corresponding to an absorption corrected luminosity of 3.7 × 10^43 ergs s^-1, which is within the range of Seyfert 1 nuclei.
A comparison of the best-fit spectral parameters for the hard component with the previous Ginga values (Awaki et al. 1991; Smith & Done 1996) suggests a possible variation of the Fe Kα line intensity and possibly of the absorption column density. However, given the uncertainties due to the lower energy resolution of Ginga and in the cross-calibration of the two instruments, firm conclusions on this issue cannot be drawn. With the present constraints, the Fe Kα emission line intensity is consistent with transmission through cold matter with a column density of a few 10^23 cm^-2 (Awaki 1991; Ghisellini, Haardt & Matt 1994), though the data do not rule out some contribution from a reflection component in the continuum and in the line.
Contribution from ionized gas
The evidence of highly ionized material that leaves significant imprints on the 0.1-10 keV spectrum of Seyfert galaxies and quasars is by now widely recognized and, thanks to the ASCA capabilities, 'warm absorbers' have been clearly detected in several Seyfert 1s and quasars (Fiore et al. 1993). Several theoretical models have been developed to explain the observed features (e.g. Netzer 1996 and references therein).
When the ionized gas lies on the line of sight, as for Seyfert 1 objects, the absorption features of the oxygen edges at 0.74 and 0.87 keV are usually the most evident characteristics, while the strongest lines, when observed against the direct continuum, have typical equivalent widths of a few tens of eV at most, so that they are difficult to detect with the present detectors. Much larger equivalent widths for the emission lines are expected if the central continuum source is obscured, as in the case of Seyfert 2 galaxies (e.g. Netzer 1996).
We therefore interpret the feature detected around 0.9 keV in the spectrum of NGC 4507 as an emission line (parametrized as a Gaussian) from warm material out of the line of sight, i.e. the same material responsible for the scattering of the continuum. In Figures 4 and 5 the contour plots of the line energy vs. normalization (with the line width set to zero) and vs. the width (when permitted to vary) are shown. The best fit line energy suggests the identification of this feature with Kα emission from He-like Neon (0.92 keV). If the reflecting matter were optically thin to all processes, the expected EW would be about 6 keV (see Matt et al. 1996 for the relevant formulae; assuming solar abundances and a fraction of Ne ix of 0.4-0.5), with resonant scattering accounting for about 90 per cent of it, to be compared with an observed EW more than one order of magnitude smaller (Tables 2 and 3). However, matter becomes optically thick to resonant scattering when N_H ∼ a few × 10^19 cm^-2, and to photoabsorption when N_H ∼ 10^23 cm^-2 at the edge energy (a value five times smaller at the line energy). Because for these values of the column density Compton scattering is still optically thin, line EWs are largely dimmed (Matt et al. 1996). If the N_H is actually of the order of 10^23 cm^-2, as suggested by the amount of scattered continuum photons, equivalent widths of the order of the observed one can be attained. A further reduction in the line strength arises from the fact that the line (or, better, the resonant and intercombination lines) is resonantly trapped in the medium and may eventually be destroyed by photoabsorption by an oxygen atom. The exact value of the line EW depends on several physical and geometrical details, and its precise evaluation is beyond the scope of this paper. Note that a strong He-like oxygen line at 0.57 keV is also to be expected. Unfortunately, the SIS efficiency at that energy is rather poor and prevents a detailed study of this feature; in any case, the obtained upper limit of O/Ne ∼ 4-5 is consistent with the expectation.
Another possible explanation for the ∼ 0.9 keV feature is in terms of recombination onto the ground state of completely stripped oxygen, following photoionization of an O viii ion. The threshold energy is 0.87 keV, which is inconsistent at the 3σ level with the observed line energy (Fig. 4).
However, the recombination feature should have a line-like appearance only if the temperature of the matter is much lower than the threshold energy; with a temperature of 10^6 K, not impossible in a photoionized plasma, the width of the feature would not be negligible and still consistent with the observation (Fig. 5), with the centroid energy shifted towards higher energies. Assuming, as usual, a photoionized plasma, the expected EW in the optically thin case is (for solar abundances and a fraction of O viii of 0.4-0.5) about 2 keV. Again, this EW diminishes with the column density, and again values similar to the observed one can be obtained for columns of the order of 10^23 cm^-2. In this case a recombination line at 0.65 keV from O viii, with a similar equivalent width, is also to be expected. The obtained 90 per cent upper limit to such a line is about 50 eV. At these column densities, however, such a line gets resonantly trapped and may eventually be destroyed by photoabsorption by, for example, C vi, which may help explain the low intensity of this line with respect to the recombination continuum. Obviously, a final possibility is that both the Ne ix and the O viii lines contribute to the observed emission. In fact, despite the fact that the ionization potentials of Ne viii and O vii are quite different (as in the last ion the ionization involves a K-shell rather than an L-shell electron), it is possible to have both ions rather abundant at the same time (see e.g. Nicastro et al. 1997).
With the values of the column density of the warm matter derived from both the amount of reflected continuum and the equivalent width of the 0.9 keV feature (whatever its origin), a substantial re-absorption of the scattered photons is expected. As the matter responsible for both the photon emission and the absorption is the same, the adoption, for the fitting procedure, of a warm screen in front of the line and continuum emitting region, as we have done in the previous section, may not be completely appropriate to the physical situation under investigation. Self-consistent grids of models taking into account also emission and reflection processes in ionized plasmas, such as those computed using e.g. CLOUDY or XSTAR, would probably be more appropriate. We note, however, that the physical picture obtained using ABSORI plus a Gaussian line at ∼ 0.9 keV is in overall good agreement with much more detailed calculations (Krolik & Kriss 1995; Netzer 1996), and that, in any case, the quality of the data is not such as to permit an unambiguous determination of all the parameters.
Interestingly, the derived value of the absorbing column density is of the same order as that derived for the reflector, providing a check of our hypothesis that the reflector and the absorber are one and the same material. The line EW is now somewhat greater than in the previous case, but of the same order of magnitude. The best fit ionization structure of the absorbing matter suggests that H-like oxygen and neon ions dominate over He-like ions, consistent with the O viii recombination line hypothesis for the 0.9 keV feature. Observations with instruments with higher energy resolution and sensitivity, like AXAF, XMM and ASTRO-E, are clearly required to clarify this issue.
Contribution from the starburst
Galaxies where the star formation rate greatly exceeds the average rate of normal galaxies are called starburst galaxies. Their optical spectra are characterized by intense narrow emission lines due to a recent episode of star formation, and by strong infra-red emission probably due to dust reprocessing of the radiation from hot stars. Shock-heated gas is expected to emit in the X-ray band with luminosities a few orders of magnitude lower than those usually observed for Seyfert galaxies (see Serlemitsos et al. 1997 for a recent review). However, when the emission of the AGN is obscured, as in the case of Seyfert 2 galaxies, the starburst component could provide an important contribution to the soft X-ray emission. An estimate of the expected X-ray luminosity from the starburst component can be obtained from the far infrared luminosity (David, Jones & Forman 1992).
Given the FIR luminosity of NGC 4507, ∼ 1.1 × 10^44 ergs s^-1, the expected 0.5-4.5 keV luminosity is ∼ 7.8 × 10^40 ergs s^-1 (equation 1 in David et al. 1992), while the observed luminosity of the soft component in the same energy band is ∼ 2.2 × 10^41 ergs s^-1, i.e. about a factor of three greater. This result suggests that the thermal component may account for only a relatively small fraction (< 30 per cent) of the observed soft X-ray luminosity.
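The factor quoted above follows from simple division (a trivial check; the David et al. 1992 scaling relation itself is not reproduced here):

```python
L_soft_expected = 7.8e40   # erg/s, 0.5-4.5 keV, predicted from L_FIR (David et al. 1992)
L_soft_observed = 2.2e41   # erg/s, 0.5-4.5 keV, observed soft component

ratio = L_soft_observed / L_soft_expected
print(f"observed / starburst prediction = {ratio:.1f}")  # ~2.8, 'about a factor of three'
# Equivalently, a starburst at the predicted level could supply at most
# roughly a third of the observed soft X-ray luminosity.
```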
Previous ROSAT and ASCA observations of starburst galaxies (e.g. Makishima et al. 1994; Moran & Lehnert 1997) have shown that the soft X-ray emission below 2 keV is frequently extended over a region greater than 1 arcmin. A pointed PSPC observation of NGC 4507 has been retrieved from the public archive. The source is rather weak, with an observed 0.5-2.0 keV flux of ∼ 3 × 10^-13 ergs cm^-2 s^-1, consistent with the value derived from our ASCA analysis in the same energy range. The image is consistent with a point-like source at the PSPC spatial resolution of ∼ 25 arcsec, to be compared with the extent of the optical image (∼ 1.3 × 1.7 arcmin). The lack of any extended emission and the spectral analysis results indicate that the contribution of a starburst component, if present, plays a minor role in explaining the soft X-ray emission of NGC 4507.
SUMMARY
The main results of the ASCA observation of the bright Seyfert 2 galaxy NGC 4507 can be summarized as follows: • The hard (> 3 keV) power law slope (Γ ≃ 1.4-1.8) lies at the flatter end of the Seyfert 1 photon index distribution. The continuum is strongly absorbed below a few keV by a column density N_H ≃ 3 × 10^23 cm^-2. The iron line intensity (EW ≃ 190 ± 40 eV) is consistent with transmission through cold matter with such a column density.
• The soft X-ray spectrum is rather complex and cannot be approximated with a single component. A scattered power law plus emission from hot thermal gas provide an acceptable description of the soft X-ray continuum, but several features are left in the residuals. A thermal component possibly due to a starburst would in any case account for < 30 per cent of the soft X-ray flux.
• Reflection and self-absorption in a photoionized plasma provide a better description of the overall soft X-ray spectrum. A line feature at 0.9 keV is clearly detected, probably due to Ne ix Kα recombination, even if some contribution from the O viii recombination continuum cannot be excluded. Note that visible soft emission lines are indeed expected in Seyfert 2s (Matt et al. 1996; Netzer 1996), and have probably already been detected by ASCA in other Seyferts like Mkn 3 (Iwasawa et al. 1994), NGC 4051 (Guainazzi et al. 1996) and NGC 4388 (Iwasawa et al. 1997), as well as in the well known Seyfert 1.5 NGC 4151 (Leighly et al. 1997).
A broad band observation over the full X-ray domain, from 0.1 to several tens of keV, such as that to be performed by BeppoSAX, would allow a better estimate of the relative contributions of the radiation scattered by the warm gas and of the starburst component. A more detailed study of the warm absorption/reflection requires good energy resolution and sensitivity; AXAF, XMM and ASTRO-E will surely improve our understanding of these phenomena significantly.
Development of preimplantation genetic testing for monogenic diseases in China
Abstract Preimplantation genetic testing for monogenic diseases (PGT-M) can effectively interrupt the transmission of genetic diseases from parents to offspring before pregnancy. In China, there are over ten million individuals afflicted with monogenic disorders. This literature review summarizes the development of PGT-M in China over the past 24 years, covering general steps such as the indications and contraindications, genetic and reproductive counselling, biopsy methods, and detection techniques and strategies during PGT-M application in China. The ethical considerations of PGT-M are also emphasized, including sex selection, transfer of mosaic embryos, the three-parent baby, and the differing opinions on serious adult-onset conditions. Some key policies of the Chinese government for the application of PGT-M are also considered. Methods for regulation of this technique, as well as specific management to increase the accuracy and reliability of PGT-M, are regarded as priority issues in China. Third-generation sequencing, variant testing at the RNA level, and non-invasive preimplantation genetic testing using blastocoel fluid and free DNA particles within spent blastocyst medium might be potential techniques and strategies for PGT-M in the future.
Introduction
Monogenic disorders refer to diseases caused by variation in a single gene. According to the Online Mendelian Inheritance in Man (OMIM) database (https://omim.org), updated on October 20th 2023, approximately 8,000 monogenic disorders have been identified, of which around 6,728 have a known molecular basis. China, with a population of more than 1.4 billion, shares the largest burden of rare genetic diseases worldwide. In China, it is estimated that there are over ten million individuals afflicted with monogenic diseases (Cram & Zhou, 2016). Most monogenic diseases, whose overall prevalence exceeds 1%, at present still lack effective treatments and carry high rates of disability, cognitive impairment and mortality. Therefore, it is crucial to prevent birth defects caused by monogenic diseases. Conventional prenatal diagnosis during pregnancy can prevent some birth defects. However, once the fetus is diagnosed with an abnormality, the pregnant woman and her family may have to face the great pain and mental burden brought by the termination of pregnancy.
Preimplantation genetic testing (PGT) (previously also known as preimplantation genetic diagnosis (PGD) or preimplantation genetic screening (PGS)) is a test performed to analyze the genetic material from oocytes (polar bodies) or embryos (cleavage blastomeres or blastocysts). The origin of PGT technology dates back to 1890, when Walter Heape successfully transferred embryos in Belgian Hare doe rabbits (Heape, 1891). In 1990, Alan Handyside used the polymerase chain reaction (PCR) to amplify Y-chromosome specific repeat sequences to offer PGT for a mother who carried an X-linked disorder; this was the first successful clinical case of PGT (Handyside et al., 1990). Two years later, Handyside's team successfully detected a three-nucleotide deletion (ΔF508) in the cystic fibrosis transmembrane regulator (CFTR) gene through PCR with nested primers in embryos, interrupting the transmission of ΔF508 from a couple who were both carriers of cystic fibrosis to their offspring (Handyside et al., 1992). PGT was further subdivided into PGT for aneuploidy (PGT-A), PGT for monogenic/single-gene disorders (PGT-M) and PGT for chromosomal structural rearrangements (PGT-SR) (Zegers-Hochschild et al., 2017). PGT-M is performed for couples who are at an increased risk of having a child with a specific monogenic disorder or single gene defect. PGT-M is indicated for couples who have certain pathogenic gene variants or related linkage markers, and is not applicable for genetic diseases without clear pathogenic variants in certain genes, nor for selection of non-disease phenotypes.
This review aims to summarize the application and progress of PGT-M in China over the past 24 years, mainly covering the indications and contraindications, genetic and reproductive counselling, biopsy methods, detection techniques, ethical considerations, policies and regulations, and future prospects for PGT-M.
Application of PGT-M in China
In 1999, Professor Guanglun Zhuang and his team first interrupted the transmission of haemophilia from a carrier couple to their offspring (Xu et al., 2002). With the growing understanding of human genetics and the development of sequencing technologies, an increasing number of pathogenic genes and variants have been identified, which has promoted the progress and application of PGT-M in China.
Indications and contraindications of PGT-M
Owing to different policies and regulations, the indications for PGT-M vary among countries. In China, PGT-M is permitted to interrupt the transmission of severe diseases with a clear genetic diagnosis or linkage markers, including monogenic diseases, mitochondrial diseases, HLA typing and hereditary cancer syndromes.
In the past two decades, PGT-M has been widely applied to monogenic diseases of high prevalence in China, covering blood disorders (such as alpha (α-) and beta (β-) thalassaemia, sickle cell anaemia and haemophilia), muscle disorders (such as Duchenne muscular dystrophy and spinal muscular atrophy (SMA)), metabolic disorders (phenylketonuria (PKU)), sensory disorders (hereditary hearing loss) and mental disabilities (such as Fragile X syndrome (FXS)) (Cram & Zhou, 2016). In 2018, the Rare Disease List and Rare Disease Treatment Guidelines were published in China, with a total of 121 monogenic diseases. PGT-M is regarded as an essential strategy for interrupting the transmission of monogenic diseases, contributing to the Chinese government's policy of reducing overall birth defects. An increasing number of reproductive centres have begun to encourage couples to undergo carrier screening before pregnancy for genetic diseases common in the Chinese population, including thalassaemia, SMA and FXS.
Under some special circumstances, PGT-M can be considered after sufficient genetic counselling. When a couple has had ≥2 adverse pregnancies in which the fetuses or children showed similar clinical symptoms and carried the same pathogenic variants, gonadal mosaicism is highly suspected.
Regarding contraindications, PGT-M cannot be offered to couples who have an uncertain clinical and genetic diagnosis or causative genes with an unclear mode of inheritance. Additionally, PGT-M is inappropriate when the female partner is a proband with severe signs or symptoms who cannot bear the risks of pregnancy after multidisciplinary assessment.
Genetic and reproductive counselling of PGT-M
Genetic counselling helps couples to understand the current clinical and genetic diagnosis and treatment for the disease, the probability of inheritance, the approaches to interrupting transmission and the risks of pregnancy (De Rycke et al., 2020). Beyond the aspects mentioned above, the average success rates and the risk of false-negative diagnosis should not be ignored. In addition, genetic and reproductive counselling before PGT-M also helps couples to alleviate emotional distress and uncertainty during the process of deciding whether to pursue PGT-M (Pastore et al., 2019). Thus, it is necessary and important for couples to receive genetic and reproductive counselling before starting a clinical cycle.
The term genetic counselling was first coined by geneticist Sheldon C. Reed in 1947; the United States has been the leading pioneer in the field of clinical genetics, while in many other countries the development of genetic counselling remains preliminary. In 1972, Dr. Jia-hui Xia led and established the first clinic offering genetic counselling services, at Xiangya Hospital of Central South University. The National Committee of Human and Medical Genetics was founded in 1979 with eight speciality groups. The Chinese Board of Genetic Counselling (CBGC) was founded in 2015, aiming to standardize the workflow of genetic counselling and to promote the standardization, professionalization and normalization of the field (Sun et al., 2019). While the demand for genetic counselling services is increasing rapidly, such services are available only in a few large cities (Sun et al., 2019). In 2021, the Ministry of Human Resources and Social Security, together with the State Administration for Market Regulation and the National Bureau of Statistics, released 16 newly recognized careers, including "birth deficiency diagnostic counsellor", which has further contributed to the construction of a professional advisory workforce in the field of heredity. Although the development of genetic counselling in China is more preliminary than in developed countries, during the last decade of applying PGT-M and with the increasing demand for counselling, the training of professional genetic counsellors has come into focus.
Biopsy methods of PGT-M
Depending on patient circumstances, different biopsy approaches are used, including biopsy of polar bodies, blastomeres and blastocysts. Polar body biopsy can be used to test the oocyte or zygote and is capable of identifying maternal pathogenic variants or chromosome aberrations at an early stage, but it cannot detect disorders of paternal origin (Montag et al., 2009). Blastomere biopsy is carried out at the eight-cell stage of the embryo, typically on Day 3 of cleavage, when one or two blastomeres are removed for PGT (Bar-El et al., 2016; Kalma et al., 2018; Zacchini et al., 2017). Blastocyst biopsy is performed on trophectoderm (TE) cells from blastocysts at Day 5 or Day 6, when 5-10 TE cells are usually removed (Aoyama & Kato, 2020). Because polar body biopsy has no adverse effect on embryos, it is recommended in a minority of European countries with legal or ethical restrictions on embryo biopsy. During roughly the first fifteen years of PGT-M in China, blastomere biopsy was regarded as the gold standard. With the improvement of embryo vitrification techniques, blastocyst biopsy is now the most widely used approach in China. Compared with the two previously mentioned approaches, blastocyst biopsy provides relatively more material with minimal harm to the inner cell mass (ICM).
Detecting techniques of PGT-M
Advances in genetic testing methods have proved crucial for the development of clinical PGT-M. In 1990, the polymerase chain reaction (PCR) was first used in PGD for two couples with different X-linked diseases through blastomere biopsy (Handyside et al., 1990). However, the small number of biopsied cells and imbalanced allele amplification can result in allele dropout (ADO), an important limitation for PGT-M diagnosis (Dreesen et al., 2014). This problem was mitigated by the development of whole genome amplification (WGA) techniques in 2004, which profoundly increased the accuracy and reliability of single-cell testing (Zheng et al., 2011). Common WGA techniques include multiple displacement amplification (MDA), multiple annealing and looping-based amplification cycles (MALBAC) and degenerate oligonucleotide-primed PCR (DOP-PCR). Among these, the MALBAC method is the most widely used in China, while MDA is more commonly used in European countries and the USA (ESHRE PGT-M Working Group, Carvalho et al., 2020).
With progress in molecular detection techniques, Sanger sequencing, targeted DNA fragment amplification, restriction fragment length polymorphism (RFLP), quantitative real-time PCR (qPCR), dual amplification refractory mutation system (D-ARMS) and linkage analysis based on short tandem repeats (STRs) or single nucleotide polymorphisms (SNPs) have been used to detect targeted variants. RFLP and D-ARMS are now less commonly used, while Sanger sequencing and linkage analysis are currently the mainstream techniques in China.
WGA of single or small numbers of cells for the detection of targeted variant sites, combined with linkage analysis, is the main detection strategy for PGT-M. Next-generation sequencing (NGS) based on WGA is increasingly used in China; it can simultaneously obtain information on chromosomal aneuploidy and pathogenic variant sites, while the SNPs around the pathogenic genes are used for linkage analysis. This strategy was first reported in 2015 by Qiao's team, realizing one-step sequencing to complete chromosome screening, pathogenic variant detection and linkage analysis of the embryo, and it successfully prevented the transmission of multiple osteochondromas within a family (L. Yan et al., 2015). In the same year, China's first "cancer-free baby" was born in the Reproductive and Genetic Hospital of Citic-Xiangya with the help of PGT, blocking the transmission of a retinoblastoma allele in the family. With the application of PGT-M in China, several new PGT-M detection or analysis methods have been developed, including MARSALA-based SMA testing (Ren et al., 2016), SNP-based HLA typing (Wang et al., 2020), DIRECTED (Ren et al., 2021), GEPLA (Wang et al., 2021) and scHaplotyper (Z. Yan et al., 2020). According to data from the Reproductive Medicine Center of Peking University Third Hospital, more than 300 diseases have so far been the subject of pre-pregnancy diagnosis, and approximately 300 healthy babies have been born through MARSALA. In comparison, SNP-array and karyotyping-based strategies are more widely used in European countries and the USA (ESHRE PGT-M Working Group, Carvalho et al., 2020).
Ethical considerations with regard to PGT-M
Although PGT-M is a mature technology nowadays, it should still be treated with caution, since gene manipulation, especially in embryos, can raise ethical issues. Similar to prenatal diagnosis, PGT provides information that can result in selective abortion, in which children are allowed to be born only when they have not inherited gene disorders from their parents (Dondorp & de Wert, 2019). The ability of PGT to choose certain characteristics over others can lead to bias towards a certain gender, such as the "son preference" in China. Such preference is more like a belief, in which boys are thought to have more value than girls due to physical strength and naming traditions (Li et al., 2004). Even worse, PGT has been placed on the moral ground of eugenics, a historical attempt to manually steer human evolution by selectively breeding individuals with desired characteristics (Schulman & Edwards, 1996). In China, sex selection is permitted only in PGT for sex-linked disorders; social sex selection is not allowed.
Furthermore, whether to implant mosaic embryos also raises an ethical issue. Mosaic embryos have more than one cell line with differing karyotypes within a single individual (Knouse et al., 2014). Mosaicism usually arises from errors in early cell division. Although studies have reported that mosaic embryos have lower reproductive potential, transferring low/medium-grade mosaic embryos has been found to result in neonatal outcomes similar to those of euploid embryos (Capalbo et al., 2021). However, compared with non-mosaic embryos, the implantation rate of mosaic embryos is lower and the miscarriage rate is higher (Greco et al., 2015; Munné et al., 2017; Spinella et al., 2018). The patient's specific situation should be considered, and the couple must be well informed of the risks before mosaic embryos are transferred.
Another extensively debated ethical issue is the three-parent baby, who inherits DNA from one male and two females (Reardon, 2017). Mitochondrial replacement therapy (MRT), or three-person in vitro fertilization (IVF), is mainly criticized on safety grounds for carrying out genetic engineering on an embryo, for concerns about the privacy of the third-parent egg donor, and for objections arising from traditional understandings of procreation (Rulli, 2016). The Ethics Committee of the American Society for Reproductive Medicine (ASRM) has recommended allowing those with past PGT-M experience to serve as genetic counsellors and talk with patients considering such procedures (Ethics Committee of the American Society for Reproductive Medicine, 2018). However, in clinical practice the three-parent baby is not permitted in China because of the uncertain safety of the technique and the associated ethical issues.
Lastly, it is hard to define the proper range of what constitutes serious disease, and therefore when to conduct PGT-M. This range can vary dramatically with differences in education level, tradition, belief, career and personal views among the population, making it impossible to set up a universal standard. For example, PGT-M for some non-fatal diseases, such as autosomal dominant polycystic kidney disease (ADPKD), is not suggested in many countries. The Ethics Committee of the ASRM has recommended that PGT-M for serious adult-onset conditions with no known interventions is ethically justifiable, and that PGT-M for less serious or lower-penetrance adult-onset conditions is acceptable (Ethics Committee of the American Society for Reproductive Medicine, 2018). A Chinese experts' consensus pointed out that it is necessary to comprehensively consider the severity of the disease and the actual situation of the patients (Professional Committee on Reproductive Medicine, 2021).
Among the issues mentioned above, the attempt to choose certain characteristics, the safety of the three-parent baby, and the definition of the proper range of serious diseases are ethical issues unique to PGT-M compared with other PGT technologies.
Policies and regulations of PGT-M
Assisted reproductive technology (ART) is a special clinical treatment technology whose application is restricted. To regulate ART services and reduce risk, the Chinese government has developed a number of policies following technical reviews. The Measures for the Control of Human Assisted Reproductive Technology were published and put into force in 2001. Subsequently, relevant technical specifications, basic standards and ethical principles were formulated, setting clear requirements for the standardized application and enhanced regulation of PGT. The policies specify the indications for PGT technology, mainly monogenic disorders, chromosomal disorders, sex-linked disorders and high-risk couples who may have affected children. The government has also promulgated the Guiding Principles for the Application of Assisted Human Reproductive Technology (2021 Edition), the Ethical Principles for Assisted Human Reproductive Technology and Human Sperm Banks, and the Action Program for the Management of Assisted Human Reproductive Technology to provide general standardized rules for carrying out PGT. According to the regulations, institutions applying to offer PGT must have been performing IVF-embryo transfer or ICSI for at least five years, and only institutions approved to perform PGD/PGT-M may perform PGS/PGT-A. According to statistical data from the National Health Commission of the People's Republic of China (http://www.nhc.gov.cn), as of 31 December 2020, 78 of the 536 reproductive medical institutions in the country were licensed to carry out PGT-M.
To strengthen the management of rare diseases in China, improve the level of rare disease diagnosis and treatment, and safeguard the health rights of rare disease patients, the Chinese government has published the Rare Disease List and Rare Disease Treatment Guidelines. Furthermore, knowledge and skills training related to the prevention and treatment of birth defects has been strengthened for medical and genetic counselling physicians. A national training programme for birth defects prevention and treatment personnel has been carried out, focused on the prevention and treatment of birth defects and including genetic counselling training. In recent years, public awareness and screening for rare genetic diseases have played an important role in disease prevention through government-sponsored projects. For example, in South China, special education programmes were created to make the public aware of thalassaemia, which led to a very high acceptance rate of thalassaemia screening programmes. As a result, the birth rate of thalassaemia patients has dropped dramatically (Cram & Zhou, 2016).
The prevention and treatment of birth defects comprises four potential points of prevention: premarital screening, pre-pregnancy examination (such as PGT), prenatal screening and diagnosis, and newborn disease screening. The prevention of genetic diseases still faces great challenges. As 50% of the population in China lives in rural areas, and the cost of genetic testing is not covered by national insurance, many people do not have access to genetic testing (Chopra & Duan, 2015). The government has strongly promoted health education for birth defects prevention and control, premarital medical examination, pre-pregnancy health examination, folic acid supplementation to prevent neural tube defects, and other primary prevention services to reduce the risk of birth defects. The national premarital medical examination rate reached 68.4%, and the pre-pregnancy health examination rate reached 96.4%, in 2021. Secondly, prenatal screening, prenatal diagnosis and other secondary prevention services have been made available to reduce the birth of children with fatal and severely disabling defects. The national prenatal serological screening rate for Down syndrome increased to 81.1% by 2020. Thirdly, the Chinese government has strengthened newborn disease screening and medical security to prevent and reduce congenital disabilities in children. The national newborn screening rate for genetic metabolic diseases reached 98.6%, and the hearing impairment screening rate reached 86.5% (http://www.nhc.gov.cn/). In China, the infant mortality rate and under-five mortality rate caused by birth defects have been significantly reduced; the incidence of major birth defects such as Down syndrome, neural tube defects, congenital hydrocephalus and limb shortening is declining; and the prevention and treatment of birth defects have made evident progress. Active research and effective clinical translation of new technologies such as NGS and third-generation sequencing will play important roles in further reducing the total burden of rare single-gene disorders in China (Chopra & Duan, 2015; Madrigal et al., 2014; Mitsuhashi & Matsumoto, 2020).
Prospects for PGT-M
Despite considerable improvements in existing techniques, new technologies hold significant promise. For example, the emergence of pathogenic variant detection at the RNA level provides a new strategy for splicing variants. Third-generation sequencing, such as single-molecule real-time (SMRT) sequencing from PacBio, shows benefits for identifying tandem repeat disorders, discriminating pseudogenes and detecting polymorphic regions (Ardui et al., 2018). SMRT sequencing has already been used in the diagnosis of monogenic diseases and could also be applied to PGT in the future. Non-invasive preimplantation genetic testing may be the next revolution in reproductive genetics. In recent years, increasing attention has been paid to blastocoel fluid (BF) and cell-free DNA in spent blastocyst medium (SBM) (Chen et al., 2021; Rubio et al., 2020). Compared with traditional methods, BF and SBM samples provide more DNA template for genetic testing and avoid micromanipulation of the embryo (Capalbo et al., 2018; Leaver & Wells, 2020). Non-invasive PGT-A has already been used in the clinic (Huang et al., 2019). However, BF and SBM are not yet widely used in clinical PGT-M because of confounding factors such as maternal contamination.
Discussion
This narrative review has traced the development of PGT-M in China over the past 24 years (Figure 1). Because of the size of its population, China has the largest burden of genetic diseases worldwide. In combination with government policies promoting health and disease prevention, and the economic development of the country, which has enabled considerable investment, significant advances have been made in PGT-M technology and its clinical application, with considerable benefit to affected patients. While many ethical debates are ongoing, care has been taken to ensure that the field is tightly regulated, for example to prevent societal imbalance from sex selection for social reasons.
The strength of this review is that it provides an overview of an important technology for the prevention of inheritable disease in China and compares PGT-M practice with that of other countries. While this review gives a general overview of the development of PGT-M in China, there is considerable variation in its application across the 78 reproductive medicine centres licensed to perform PGT-M, so it may not represent clinical practice in all centres.
Implementation of this technology needs to be scaled across China, including the large rural areas, in which there is limited access to assisted reproductive technologies.This may require development of new accessible centres, or possibly new patterns of clinical care, such as remote consultations.Critical will be the training of doctors, embryologists, nurses, and most importantly genetic counsellors, in what is a young discipline in China.Ethical considerations remain at the forefront of policy and clinical decision making, and debate must continue to be encouraged.
As well as the ongoing development of new technologies for genetic analysis, much effort is directed towards non-invasively, yet reliably, collecting genetic material from the embryo, which will likely make the technique more affordable and therefore more accessible. New models of clinical practice to reach all segments of the population should be evaluated for safety and efficacy.
Figure 1. Timeline summarizing the development and application of PGT-M in China.
Ketoconazole as second-line treatment for Cushing’s disease after transsphenoidal surgery: systematic review and meta-analysis
Introduction The first-line treatment for Cushing’s disease is transsphenoidal surgery for pituitary tumor resection. Ketoconazole has been used as a second-line drug despite limited data on its safety and efficacy for this purpose. The objective of this meta-analysis was to analyze hypercortisolism control in patients who used ketoconazole as a second-line treatment after transsphenoidal surgery, in addition to other clinical and laboratory criteria that could be related to therapeutic response. Methods We searched for articles that evaluated ketoconazole use in Cushing’s disease after transsphenoidal surgery. The search strategies were applied to MEDLINE, EMBASE, and SciELO. Independent reviewers assessed study eligibility and quality and extracted data on hypercortisolism control and related variables such as therapeutic dose, time, and urinary cortisol levels. Results After applying the exclusion criteria, 10 articles (one prospective and nine retrospective studies, totaling 270 patients) were included for complete data analysis. We found no publication bias regarding reported biochemical control or no biochemical control (p = 0.06 and p = 0.42 respectively). Of 270 patients, biochemical control of hypercortisolism occurred in 151 (63%, 95% CI 50-74%) and no biochemical control occurred in 61 (20%, 95% CI 10-35%). According to the meta-regression, neither the final dose, treatment duration, nor initial serum cortisol levels were associated with biochemical control of hypercortisolism. Conclusion Ketoconazole can be considered a safe and efficacious option for Cushing’s disease treatment after pituitary surgery. Systematic review registration https://www.crd.york.ac.uk/prospero/#searchadvanced, (CRD42022308041).
Introduction
Cushing's disease (CD) results from an adrenocorticotropic hormone (ACTH)-secreting pituitary tumor, which leads to chronic hypercortisolism (1,2). It is a potentially fatal disease, with mortality rates up to 3.7 times higher than those of the general population (3,4). CD is three times more common in women.
According to consensus, the first-line treatment for CD is pituitary tumor resection surgery with the transsphenoidal technique (4, 5), which achieves short-term biochemical control rates of 60 to 80%, depending on the experience of the treatment center. In long-term follow-up, recurrence rates range from 20 to 30% even in cases with complete initial biochemical control (6,7).
Medication is a therapeutic option in patients who do not achieve biochemical control with transsphenoidal surgery (TSS), have recurrent hypercortisolism, or have contraindications or high surgical risk; it can also be used while awaiting the effects of radiation techniques (8). In such cases, adrenal-blocking drugs become important.
Ketoconazole is an antifungal drug, a synthetic imidazole derivative that blocks multiple enzymes involved in adrenal steroidogenesis pathways (CYP11A1, CYP17A1, CYP11B2, and CYP11B1). It was recently approved for use in CD by the European Union (9) and has been recommended for off-label use in the United States (2,10,11). Although recommended by professional guidelines (not regulatory authorities) for hypercortisolism, its use as an antifungal has been more restricted since regulatory agencies in Europe and the United States issued statements regarding its high risk of hepatotoxicity, including reported deaths from liver failure (12,13). Recently, a levorotatory derivative (levoketoconazole) with estimated lower hepatotoxicity was introduced (14).
Clinical studies evaluating the efficacy and adverse effects of ketoconazole in CD are scarce. Their limited and heterogeneous samples mix its use for hypercortisolism control as first-line therapy or after TSS, and they include patients with ACTH-dependent Cushing's syndrome of indeterminate etiology (11-13).
Two recent meta-analyses had divergent results regarding hypercortisolism remission rates with ketoconazole use: 46% vs. 64% (15,16). Adverse effects, treatment interruption, and treatment-associated deaths have also been reported. Thus, studies evaluating the efficacy of ketoconazole for its main indication, i.e., persistent or recurrent hypercortisolism after TSS, are not currently available.
This meta-analysis aimed to analyze the prevalence of biochemical control of hypercortisolism in CD patients who used ketoconazole as a second-line therapy after TSS, in addition to clinical and laboratory parameters that can predict therapeutic response and serious adverse effects due to ketoconazole treatment.
Materials and methods
This systematic review and meta-analysis study was performed according to the PRISMA system (17) and was registered in the International Prospective Register of Systematic Reviews (CRD42022308041).
Identification of studies
A search was performed in three databases: MEDLINE, EMBASE, and SciELO. In MEDLINE, using the Medical Subject Headings "Pituitary ACTH hypersecretion" or "Cushing's disease" and "Ketoconazole" or "Fluconazole", 305 articles were found. In EMBASE, using the Emtree terms "Cushing's disease" and "ketoconazole" or "fluconazole", 544 results were found. In SciELO, using the terms "Cushing's disease" and "Ketoconazole" or "fluconazole", five articles were found.
The complete search strategy can be found in Supplementary Material 1. The searches were performed in June 2021 and updated in May 2022 although no new studies were added to the analysis through this step. A manual search was performed for references to reviews and meta-analyses in the included studies, as well as systematic reviews or articles on related topics. Every potential article was considered eligible for review, with no language limitations. Whenever necessary, authors were contacted to confirm information or supply missing data.
Selection criteria
We selected observational, case-control, or clinical trials that included CD patients diagnosed through clinical manifestations in association with at least two positive screenings for hypercortisolism, baseline ACTH > 20 pg/ml, pituitary adenoma confirmed in surgery, bilateral petrosal sinus catheterization, or pituitary MRI showing a lesion > 6 mm (18). Patients must have undergone transsphenoidal surgery as first-line therapy, either without postoperative remission or with recurrence during clinical follow-up. Consequently, ketoconazole was used as a second-line treatment to control hypercortisolism. Studies of patients who received radiotherapy concomitantly with ketoconazole were not excluded.
Study selection, data extraction, and quality assessment
Two authors (CV and ACVM) performed independent searches in the databases, selecting potential studies based on titles and abstracts for further analysis of the complete articles. Inter-rater agreement was 0.88 according to Cohen's kappa coefficient (95% CI, 0.83-0.93) for the selected studies. Disagreements were resolved by consensus between the investigators (CV and ACVM) or, when necessary, by discussion with a third investigator (MAC). Baseline characteristics and outcomes were extracted from studies that met the inclusion criteria, including baseline and post-drug cortisol measurements, mean and maximum treatment duration, ketoconazole dose, potential adverse effects, and drug intolerance. The considered outcomes were the prevalence of complete, partial (reduction of > 50% in cortisol levels despite incomplete normalization of 24-h urinary free cortisol (UFC)), or no biochemical control of hypercortisolism with ketoconazole use.
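As a worked illustration of these outcome definitions, the minimal sketch below classifies a patient's response from baseline and on-treatment 24-h UFC values. The function name, the expression of UFC as multiples of the upper limit of normal, and the example values are hypothetical and not taken from the included studies.

```python
def classify_response(baseline_ufc: float, final_ufc: float, uln: float) -> str:
    """Classify biochemical response from 24-h urinary free cortisol (UFC).

    complete: final UFC normalized (at or below the upper limit of normal, ULN)
    partial:  > 50% reduction from baseline without normalization
    none:     neither criterion met
    """
    if final_ufc <= uln:
        return "complete"
    if final_ufc < 0.5 * baseline_ufc:
        return "partial"
    return "none"

# Example with UFC expressed as multiples of the ULN: 4.48x ULN at baseline,
# a 60% reduction on treatment -> partial biochemical control
print(classify_response(baseline_ufc=4.48, final_ufc=1.79, uln=1.0))
```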
Data were extracted only when the studies reported ketoconazole use after transsphenoidal surgery (TSS). Studies that did not subdivide ketoconazole data into pre- and post-transsphenoidal surgery were excluded.
Disagreements about data extraction were discussed until a consensus was reached. The original authors were contacted by email to resolve questions or obtain missing data. Study quality was evaluated using a modified Newcastle-Ottawa scale (19).
Data analysis
Rates of complete, partial, and no biochemical control were analyzed across all included studies and the pooled prevalence was calculated. Cochran's χ² (Q) and I² tests were used to assess heterogeneity between studies, with p < 0.05 considered significant. Incidence estimates were obtained from random-effects models. Meta-regression was performed to analyze the relationship between ketoconazole dose, treatment duration, and baseline cortisol level.
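A minimal sketch of this pooling step is given below, assuming logit-transformed prevalences combined under a DerSimonian-Laird random-effects model. The per-study event counts are invented for illustration; the published analysis was run with the R meta package rather than this Python re-implementation.

```python
import numpy as np

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of logit-transformed prevalences."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = events / totals
    yi = np.log(p / (1 - p))                 # logit prevalence per study
    vi = 1 / events + 1 / (totals - events)  # approximate variance of the logit
    w = 1 / vi                               # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - y_fe) ** 2)         # Cochran's Q statistic
    dof = len(yi) - 1
    i2 = 0.0 if q == 0 else max(0.0, (q - dof) / q) * 100   # I² in percent
    tau2 = max(0.0, (q - dof) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (vi + tau2)                   # random-effects weights
    y_re = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda x: 1 / (1 + np.exp(-x))   # back-transform logit -> proportion
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)), i2

# Hypothetical per-study remission counts, for illustration only
prev, ci, i2 = pooled_prevalence(events=[12, 30, 25], totals=[31, 38, 40])
print(f"pooled prevalence {prev:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 = {i2:.0f}%")
```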
Publication bias was assessed with a funnel plot of study incidences against their standard errors and was tested using the Begg and Egger tests. The meta-analysis was performed using R version 4.1.2 and the R meta package version 4.19.2.
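For the asymmetry test, a minimal sketch of Egger's regression (the standardized effect regressed on precision, with the intercept's p-value indicating funnel-plot asymmetry) might look as follows. This is an illustrative re-implementation with made-up inputs, not the exact routine used in the R meta package.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(yi, vi):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    a non-zero intercept suggests small-study effects / publication bias.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    se = np.sqrt(vi)
    fit = sm.OLS(yi / se, sm.add_constant(1 / se)).fit()
    return fit.params[0], fit.pvalues[0]   # intercept estimate and its p-value

# Illustrative logit prevalences and variances (not the study's actual data)
yi = np.array([-0.45, 2.09, 0.53])
vi = np.array([0.14, 0.45, 0.10])
intercept, p = egger_test(yi, vi)
print(f"Egger intercept {intercept:.2f}, p = {p:.2f}")
```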
Results
Electronic and manual database searches resulted in 735 studies, of which 652 were excluded after analyzing the titles and abstracts. We selected 83 studies for full-text review. After applying the exclusion criteria, 10 articles remained (totaling 270 patients) for analysis and complete data extraction (10, 20-28). The flow diagram is shown in Figure 1. No articles using the term fluconazole in the context of CD were found in the searches.
All of the selected studies used normalized 24-h UFC levels as a criterion for biochemical control of hypercortisolism except for one (24), which used serum cortisol level and the suppression test with 2 mg of dexamethasone (Liddle test).
Most patients were women and were treated with ketoconazole for a mean of 31.4 months and a maximum of 45 months. Details of each included study are presented in Table 1. Unpublished data from a conference abstract from a Brazilian cohort were included and were supplemented through direct contact with the authors (27).
The study quality analysis is shown in Table 2. In general, the quality of the articles was adequate. Some data could not be extracted due to uncertainty about when TSS had been performed and when ketoconazole therapy had begun. In such cases, the authors were contacted and, if they did not respond by the time of the analyses, the data were excluded. The study by Huguet et al. (23) was excluded from the analysis of the "no biochemical control" variable for not mentioning non-remission as a possible outcome. Begg and Egger tests were performed to assess publication bias regarding biochemical control of hypercortisolism. Since the results were not significant, there was no need to perform a trim-and-fill analysis. Funnel plots (Figures 2, 3) demonstrate the lack of publication bias regarding biochemical control and no biochemical control (p = 0.06 and p = 0.42, respectively).
Control of hypercortisolism (biochemical control)
Ten studies (270 patients) indicated the prevalence of biochemical control of hypercortisolism in patients who underwent TSS and received ketoconazole as a second-line therapy. A total of 151 patients had complete biochemical control (63%; 95% CI, 50-74%; see Figure 4). We performed a meta-analysis without including Correa Silva's unpublished data, and the prevalence of hypercortisolism remission remained at 63%. These charts can be found in the Supplementary Material.
The high variability between studies is partly explained by clinical differences between cohorts, which account for the 39 to 89% variation in remission rates. The lowest complete remission rate, 39%, was found in Di Somma et al.; however, in addition to being the only prospective study, it reported a high rate of partial biochemical control (61%), and no patient was classified as having no biochemical control. This cohort also had the highest mean baseline cortisol level (1413 nmol/24 h, 9.46 times above the upper reference limit) and the lowest mean final ketoconazole dose (400 mg daily). The highest remission rate, 89%, was found in Sonino et al. Although the concept of partial response was not addressed directly in most studies, some patients experienced a reduction of > 50% in cortisol levels despite incomplete normalization. This condition was described in five cohorts (10, 21, 26, 27, 28), demonstrating partial benefit from ketoconazole in 59 patients (21.7%).
Adverse effects
Although all of the studies described adverse effects from ketoconazole, only two provided information about them after TSS (26, 28). The following stood out among the main adverse effects: elevated transaminase levels, diarrhea, abdominal pain, skin rash, gynecomastia, and adrenal insufficiency. Medication discontinuation due to intolerance was reported in three studies (10,20,28). Due to insufficient data, it was not possible to perform a meta-analysis of the prevalence of adverse effects. No deaths related to ketoconazole were reported in any study.
Meta-regression
In the studies that evaluated hypercortisolism remission, meta-regression was used to analyze which variables influenced the occurrence of biochemical control. Neither the final dose of ketoconazole (six studies, mean dose 628 mg/day; range 400 to 779 mg/day), nor the duration of drug treatment (seven studies, mean duration 31 months), nor the baseline 24-h UFC level (seven studies, mean 4.48 times above the reference value) showed an association with hypercortisolism remission (data not shown).
Discussion
Drug treatment in CD is reserved only for patients with no biochemical control after TSS, in those who are not candidates for surgical treatment, or in those awaiting the effects of radiotherapy (2,4). The available drugs in this context act in several ways: as adrenal blockers (ketoconazole, osilodrostat, metyrapone, mitotane, levoketoconazole, and etomidate), somatostatin receptor ligands (pasireotide), dopamine receptor agonists (cabergoline), or as glucocorticoid receptor blockers (mifepristone) (2,29). These drugs must be prescribed considering aspects such as the potential for remission, potential adverse effects, availability, and cost. Moreover, no single drug has yet been demonstrated as superior to the others (2,30,31).
Comparing our analyses with previous studies, we found that hypercortisolism control in patients who had already undergone TSS was higher than in studies that did not subdivide ketoconazole use into pre- and post-transsphenoidal surgery, or in studies evaluating multiple etiologies of hypercortisolism (15, 16, 32).
Our meta-analysis evaluated 10 studies from different countries and ethnic groups regarding CD treatment with ketoconazole for non-remission or recurrence after TSS. The hypercortisolism biochemical control rate we found after TSS (63%) was greater than in some prospective studies evaluating current drugs such as levoketoconazole, and similar to that found in a systematic review by Pivonello et al. (64%) (14, 32). However, it was higher than that found in the most recent meta-analysis (36 to 46%) (15). These two systematic reviews (14, 15) did not subdivide ketoconazole use into pre- and post-transsphenoidal surgery, which can significantly impact the hypercortisolism control rate. A multicenter study by Castinetti et al. showed greater efficacy in patients who had already undergone TSS (68% control) than with preoperative use (48.7% control) (10). These findings may reflect the fact that assessing patients with different states of hypercortisolism broadens the sample beyond CD patients alone (i.e., probably including patients with ectopic ACTH syndrome and other etiologies), so that the percentage of controlled patients may be lower.
Forest plot of hypercortisolism non-remission with ketoconazole.

According to the literature, even without complete biochemical control, patients who present some reduction in serum cortisol levels, partial biochemical control, or improvement in any associated comorbidities are candidates for continuing ketoconazole alone or in possible association with other medications (2). Our meta-analysis found that such was the case in 59 patients. Although the concept of partial response was not addressed directly in most of the included studies, some individuals experienced a > 50% reduction in cortisol levels without complete normalization. By analyzing the overall rate of non-responders (20%), we can extrapolate that approximately 80% of patients treated with ketoconazole experienced some improvement in cortisol levels, which in itself demonstrates the medication's efficacy.
Although we consider the hypercortisolism biochemical control rate with ketoconazole to be satisfactory, many patients may lose biochemical control over the course of treatment or show long-term oscillations; it has been suggested that this can occur in up to 23% of those who achieved initial control with the drug (2, 32), which illustrates the dynamic nature of treatment and the constant challenge in clinical practice. This could not be evaluated in our meta-analysis due to the lack of reported data (15, 16, 32). Although tumor size is not necessarily related to cortisol levels in CD, patients with macroadenomas have a lower chance of remission after TSS (2, 33). Patients who use ketoconazole preoperatively may already have larger lesions, which make surgery difficult, or active pituitary lesions, which can reduce the ability to achieve control through medication. In our meta-analysis, only two studies described tumor size and correlated it with remission after ketoconazole therapy (10, 24).
The hypothesis that patients with lower pre-treatment serum cortisol levels or higher ketoconazole doses would have higher biochemical control rates was not confirmed; nor did we find a relationship between longer duration of use and higher remission rates. The data included in this review therefore do not provide a profile of the patients most likely to benefit from ketoconazole treatment. Other reviews of ketoconazole therapy in any context of Cushing's syndrome have found that up to 20% of patients experience adverse effects such as elevated transaminase levels, the majority being asymptomatic moderate elevations, i.e., < 5 times the upper limit of normal. These hepatic changes do not appear dose-dependent and are usually reversed within 2 to 12 weeks after ketoconazole discontinuation or dose reduction (34). By comparison, up to 32% of participants experienced mild adverse effects in the levoketoconazole study, with 13% having to discontinue treatment (14). Our analyses have several limitations, since nine of the 10 primary studies included in the meta-analysis were retrospective and uncontrolled in design. We found no randomized clinical trials, and only randomized, controlled trials with intention-to-treat analysis can provide accurate estimates of drug efficacy. New therapeutic options are under investigation in clinical trials and will likely bring more robust data on hypercortisolism control in CD.
Despite the limitations, consensus continues to indicate adrenal blockers, including ketoconazole, for patients with moderate CD and no visible lesions in MRI. The recommendation is that drug therapy should be individualized, based on the patient's clinical picture, hypercortisolism severity, and medication availability and cost, so that treatment is optimized and applied for the necessary period of time (2,33,35,36).
Conclusion
Our meta-analysis showed that ketoconazole effectively controlled hypercortisolism in approximately 63% of CD patients when used according to its principal indication, i.e., in patients without remission after TSS. No association was found between hypercortisolism biochemical control and total medication dose, treatment duration, or initial serum cortisol levels. No serious adverse effects or treatment-related deaths were observed in these patients. These findings indicate that based on the current literature available, ketoconazole is an efficacious and safe drug for treating active CD after pituitary surgery.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Author contributions
CV, SPG and MAC created the research format. CV and ACVM developed the search strategies and independently applied the eligibility criteria, subsequently extracting the data. CV and ACVM performed a peer review of the data and assessed risk of bias. CV and VNH performed the meta-analysis. MAC oversaw all phases of the metaanalysis and arbitrated conflicts of opinion. SPG and TCR participated in the final data review and discussion. All authors contributed to the article and approved the submitted version.
Funding
This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Ministry of Health, Brazil, through a PhD scholarship; the Research Incentive Fund (FIPE) of Hospital de Clínicas de Porto Alegre (HCPA); and the Programa de Excelência Acadêmica (PROEX) from CAPES.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Differences in Dietary Patterns among the Polish Elderly: A Challenge for Public Health
The aim of the study was to assess the diversity of dietary patterns within the elderly in relation to the region of residence, household structure, and socioeconomic status. The questionnaire study was conducted in a group of 437 Polish adults aged 60 and older from June to September 2019. The sample was selected by means of the snowball method in two regions. Principal component analysis (PCA) was used to extract and identify three dietary patterns (factors) from the frequency of eating 32 groups of foods. Logistic regression analysis was used to determine the relationship between the identified dietary patterns (DPs), region, household status, and socioeconomic index (SES). Adherence to the identified DPs, i.e., traditional, prudent, and adverse, was associated with socioeconomic status (SES) and living environment, i.e., living alone, with a partner, or with family, while the region did not differentiate them. Fewer people living with their family were characterized by the frequent consumption of traditional food (the upper tertile of this DP), while more of them often consumed food typical of both the prudent and adverse DPs (the upper tertiles of these DPs). The presence of a partner when living with family did not differentiate adherence to the DPs. A high SES decreased the chances of adhering to the upper tertiles of the "prudent" and "traditional" DPs, while living with family increased the chances of adhering to both the upper and middle tertiles of the "prudent" DP. Identifying the dietary patterns of the elderly contributes to a better understanding of the food intake of senior citizens living in different social situations, in order to support public policies and nutritional counseling among this age group.
Introduction
Ageing is associated with progressive physical and cognitive decline; therefore, adequate nutritional intake is very important in the elderly [1,2]. Inadequate nutrition can lead to various dysfunctions, such as decreased immunity, frailty, and noncommunicable diseases (NCDs), and overweight and obesity significantly increase the risks of these dysfunctions [3]. The most common dietary mistakes in the elderly include the following: poor variety of meals; insufficient intake of vegetables and fruit, dairy products, cereals, fish, and water; and excessive intake of sugar and sweets, meat and meat products, fats, and foods with high energy density and low nutrient density [4].
Recent approaches to studying health-related behaviors have adopted dietary patterns rather than investigating individual exposures, i.e., food intake and/or macronutrient intake [5]. Dietary patterns (DPs) represent the whole diet and are considered a better alternative to single dietary characteristics when studying the associations between diet and other variables, i.e., chronic diseases and food choice motives [6,7]. To identify DPs, factor analysis (FA) and principal component analysis (PCA) can be applied [7]. The aggregation of dietary data, e.g., frequency of food consumption, into factors has, however, mainly been studied in countries other than Poland. Therefore, the aim of the study was to identify the dietary patterns of Polish seniors, and then to assess the diversity of these patterns with regard to the region of residence, household structure, and socioeconomic status.
Study Design and Sample
The research was carried out in two culturally and economically diverse regions of Poland. In 2019, the Świętokrzyskie region had the lowest GDP (71.6% of the average GDP per capita), while the Śląskie/Dolnośląskie region, comprising two voivodeships, i.e., Śląskie and Dolnośląskie, was characterized by high GDP (102.3% and 109.5% of the average GDP per capita, respectively) [37]. The study was conducted from June to September 2019. The sample was selected using the snowball method. A total of 750 questionnaires were distributed in 16 clubs or senior circles in both regions. People who agreed to participate in the study were asked to help with further recruitment by handing a questionnaire to people living in their neighborhood who met the age criterion. As a result, 506 questionnaires were collected. Due to missing data, 69 questionnaires were excluded from the analysis. The inclusion criteria were an age of 60 years or more and that each participant represented one household. The study sample consisted of 437 people, with 251 participants from the Śląskie/Dolnośląskie region and 186 participants from the Świętokrzyskie region.
Questionnaire
A dietary habits and nutrition beliefs questionnaire (KomPAN) [38,39] was used to assess the frequency of consumption of 32 groups of foods. All participants were asked to record their habitual frequency of consumption for each food group within the last year using the following answers: (1)-less than once a month or never; (2)-1-3 times a month; (3)-once a week; (4)-a few times a week; (5)-once a day; (6)-a few times a day.
To assess the household structure, the question "What is your household composition?" was posed with the following answers: (1)-I live alone; (2)-I live with a partner; (3)-I live with family without a partner; (4)-I live with family and a partner. The questions on sociodemographic characteristics of the study group concerned gender, age, education, and place of residence.
To assess the socio-economic status (SES) of the respondent, the following questions were asked:
1. Self-reported financial situation - the following two questions addressed this matter: "How do you assess your financial situation?", with the following answers: below average (1 point); average (2 points); above average (3 points), and "How do you evaluate the situation of your household?", with the following answers: I have to save to meet my basic needs (1 point); it is enough for my needs, but I have to save for larger purchases (2 points); it is enough for me without saving (3 points).
2. Family financial assistance - the following question was asked: "Do you obtain financial assistance from your family, including the family you live with?", with the following answers: no, although I have financial problems (1 point); yes, because I have financial problems (2 points); there is no such need because my financial situation is satisfactory (3 points); yes, although I have no financial problems (4 points).
3. Social financial assistance - addressed by the question "Do you obtain social assistance related to finances?", with the following answers: no, although I have financial problems (1 point); yes, because I have financial problems (2 points); there is no such need because my financial situation is satisfactory (3 points); yes, although I have no financial problems (4 points).
The socioeconomic status (SES) of the elderly was calculated using a procedure similar to a previously developed SES index [40,41]. The SES index was calculated for each participant by summing the points for each variable, i.e., self-reported financial situation, family and social assistance, and education. To assess the reliability of the input data of the SES index, Cronbach's alpha was used [42]; the coefficient for the variables included in the SES index was 0.683. Groups of participants with low, medium, and high SES were distinguished on the basis of the tertile distribution of the SES index.
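A minimal sketch of this scoring procedure is given below, assuming hypothetical respondents and the point codings described above. The column names and the illustrative education coding are assumptions of this sketch, not the study's actual data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical respondents; point codings follow the questions above, and the
# education coding (1 = primary ... 4 = higher) is an assumption of this sketch
items = pd.DataFrame({
    "financial_self":   [1, 2, 3, 2, 3],
    "household_budget": [1, 2, 3, 3, 3],
    "family_help":      [2, 3, 3, 3, 4],
    "social_help":      [1, 3, 3, 3, 3],
    "education":        [1, 2, 3, 2, 4],
})
ses = items.sum(axis=1)                   # SES index = sum of item points
print(round(cronbach_alpha(items), 3))    # reliability of the index items
ses_group = pd.qcut(ses, 3, labels=["low", "medium", "high"])  # tertile split
```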
Statistical Analysis
Qualitative variables are presented as percentages (%). The chi-square test was used to verify the differences between those variables. Principal component analysis (PCA) was used to extract and identify dietary patterns from the frequency of eating 32 groups of foods. As a result, three factors (dietary patterns, DPs) were distinguished. The factors were rotated by Varimax transformation. The number of identified factors was based on the following criteria: components with an eigenvalue above 1, a scree plot test, and the interpretability of the factors. Food items were considered to load on a factor if they had a correlation of at least 0.5 with it. The factorability of the data was confirmed with the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity, both of which achieved statistical significance. The KMO value was 0.798, and Bartlett's test had a significance of p < 0.0001. For each DP, three categories (tertiles) were identified, described as bottom (1st), middle (2nd), and upper (3rd) tertiles.
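A minimal sketch of this extraction step, assuming the Python factor_analyzer package and a hypothetical input file of KomPAN frequency codes, might look as follows. The study itself used STATISTICA, so this is only an illustrative re-implementation.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical input: respondents x 32 food groups, coded 1-6 as in KomPAN
freq = pd.read_csv("kompan_frequencies.csv")

chi2, p = calculate_bartlett_sphericity(freq)   # factorability check 1
_, kmo_total = calculate_kmo(freq)              # factorability check 2
print(f"Bartlett p = {p:.4g}, overall KMO = {kmo_total:.3f}")

# Three principal components with Varimax rotation, as described above
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(freq)
loadings = pd.DataFrame(fa.loadings_, index=freq.columns,
                        columns=["traditional", "prudent", "adverse"])
print(loadings[loadings.abs() >= 0.5].dropna(how="all"))  # items loading >= 0.5

scores = pd.DataFrame(fa.transform(freq), columns=loadings.columns)
tertiles = scores.apply(lambda s: pd.qcut(s, 3, labels=[1, 2, 3]))  # DP tertiles
```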
Logistic regression analysis was used to assess the relationship between the identified dietary patterns (DPs), region, household structure, and socioeconomic index (SES). Odds ratio (OR) values were calculated at the 95% confidence level. The reference group (OR = 1.00) was the bottom tertile of each DP. A p-value lower than 0.05 was considered significant for all tests. Statistical analysis was performed using the STATISTICA statistical software (version 13.3, PL; StatSoft Inc., Tulsa, OK, USA; StatSoft, Kraków, Poland).

Results

Table 1 displays the socio-demographic characteristics of the study sample. In this sample, 66.1% were women and 33.9% were men. People aged 60-74 accounted for three fourths of the sample. The biggest group of respondents lived in the countryside (46.2%). Almost 60% of the respondents lived in the Śląskie/Dolnośląskie region. More than two fifths of the respondents lived with a partner, and over a quarter lived with a partner and family. The proportions of respondents characterized by low, medium, and high SES were similar, at about one third for each SES level.

Table 2 illustrates the correlations between the frequency of eating various food groups and each of the identified DPs. The DPs were categorized into the "traditional" DP (factor 1), the "prudent" DP (factor 2), and the "adverse" DP (factor 3). The "traditional" DP was characterized by a high frequency of eating white bread and bakery products, fried foods, cold meats, smoked sausages, hot dogs, and potatoes. The "prudent" DP was characterized by a high frequency of eating buckwheat, oats, wholegrain pasta or other coarse-ground groats, fermented milk beverages, fresh cheese curd products, fruit, vegetables, vegetable juices or fruit and vegetable juice, and water. The "adverse" DP was characterized by a high frequency of consuming sweetened carbonated or still beverages, energy drinks, instant soups or ready-made soups, tinned (jar) meats, and lard.

The characteristics of the DPs with regard to the region, SES index, and household structure are presented in Table 3. There were no differences in DPs with regard to the region. Fewer respondents who lived without a partner but with their family were characterized by frequent consumption of Polish traditional food (the upper tertile of the "traditional" DP), while frequent consumption of food characteristic of the other dietary patterns, i.e., the "prudent" and "adverse" DPs, was observed in this group. Similar characteristics concerned people living with a partner and family. However, in the "adverse" DP there were no differences in the numbers of people in the bottom and upper tertiles (33.6% and 34.5%, respectively). More people with a low SES were in the upper tertiles of the "traditional" (44.7%) and "prudent" (50.0%) DPs, and also in the bottom tertile of the "adverse" DP (36.8%); however, the differences were not statistically significant. On the other hand, the biggest percentage of people with a high SES was in the bottom tertiles of the "traditional" (41.0%) and "prudent" (48.9%) DPs, and in the upper tertile of the "adverse" (36.7%) DP. The same relationship was shown for only two variables forming the SES. The highest proportions of people describing their financial situation as "above average" were in the bottom tertiles of the "traditional" DP (21.5%) and the "prudent" DP (20.6%), and in the upper tertile of the "adverse" DP (14.4%).
Furthermore, the highest percentage of people with higher education was in the upper tertile of the "traditional" DP (22.8%) and the "prudent" DP (30.1%), and in the upper tertile of the "adverse" DP (21.4%) (Table 3).
Dietary Patterns
The results of the logistic regression demonstrated that the respondents who consumed healthy food most often (the upper tertile of the "prudent" DP) were 1.6 times more likely to live without a partner but with their family. Moreover, they were two times more likely to live with a partner and with their family. The people who represented the upper tertiles of the "traditional" and "prudent" DPs were less likely to have a high SES (Table 4).
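A minimal sketch of how such odds ratios can be estimated is shown below, assuming simulated binary indicators for a single tertile contrast (upper vs. bottom tertile, with middle-tertile respondents excluded). The variable names and data are hypothetical, and the study's actual models were fitted in STATISTICA.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Simulated indicators: 1 = upper tertile of the "prudent" DP (vs. bottom);
# middle-tertile respondents would be dropped for this single contrast
df = pd.DataFrame({
    "upper_prudent":     rng.binomial(1, 0.5, n),
    "lives_with_family": rng.binomial(1, 0.5, n),
    "high_ses":          rng.binomial(1, 0.33, n),
})

fit = smf.logit("upper_prudent ~ lives_with_family + high_ses", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR":    np.exp(fit.params),
    "CI_lo": np.exp(fit.conf_int()[0]),  # 95% confidence bounds on the OR scale
    "CI_hi": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.round(2))
```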
Discussion
Our study identified three dietary patterns, which is within the range of the numbers of factors distinguished in other studies [43]. The identified dietary patterns are as follows: "traditional", "prudent", and "adverse". Similar patterns were also identified in other studies, although they were sometimes named differently, e.g., "health conscious" instead of "prudent" [15]. Traditional patterns most closely reflect the cultural specificity of a country; for example, in a Dutch study, the "traditional" pattern was characterized by a high intake of potatoes, meat, and fat [15], while in French adults it was based on the consumption of vegetables, vegetable fat, meat, and poultry [44]. The Chinese "traditional" pattern comprised pork, poultry, fish and prawns, eggs, fruit, dark-color vegetables, light-color vegetables, rice, water, yogurt, fungi, peanuts, sunflower seeds, and pastries [45]. On the other hand, the Polish "traditional" DP included white bread and bakery products, fried foods, cold meats, smoked sausages, hot dogs, and potatoes. In our study, this factor accounted for 11.4% of the total variance, while in the Dutch study it accounted for 7.2% of the total variance [15], and for as much as 17.5% in the Chinese study [45]. These differences may result from the specificity of a culture, but also from its level of importance in conditioning current eating behaviors [46]. A higher percentage of explained variance could have been expected, as older people are more attached to tradition. On the other hand, experiencing health problems may favor a change in the eating habits of the elderly, and thus reduce the occurrence of the traditional dietary pattern.
The traditional dietary pattern can exhibit both beneficial and negative health characteristics. The "traditional" DP in Poland may be detrimental to health, due to the specificity of Polish cuisine, with its significant presence of fried foods, light flour products, potatoes, and fatty meats. Thus, the relatively low percentage of variance explained by this factor can be considered positive. In the Portuguese study, the "traditional" DP, identified with the "Mediterranean" DP (high consumption of vegetables, fruits, dairy, cereals/tubers, bread, fishery products, and olive oil), explained as much as 59.1% of the variance in the elderly group [47], which confirms the attachment to tradition and, at the same time, informs us about the positive health effects resulting from such a diet. In countries such as Portugal, but also Italy and Greece, the "Mediterranean" DP can be treated as both a traditional and a healthy pattern. In other countries, the pattern favorable to health was referred to as a "health conscious" DP, with a high intake of fruits, vegetables, poultry, fish, and alcohol [15], or as a "prudent" DP (i.e., high consumption of fruits, vegetables, lean meats, nuts, and seeds) [44,48]. The Polish "prudent" DP included fruits and vegetables, but also buckwheat, oats, wholegrain pasta or other coarse-ground groats, fermented milk beverages, fresh cheese curd products, vegetable and fruit juices, and water. This factor accounted for 15.3% of the total variance, and thus explained more variance than the other DPs, and also more than in other studies; for example, this factor accounted for 6.0% in the Dutch group [15], and 5.4% in women and 5.8% in men in the Quebec Longitudinal Study on Nutrition and Successful Aging [49].
Previous studies have shown that differences in dietary patterns and their occurrence may be explained by the region of residence [18,20], both due to its cultural specificity and its economic characteristics. In the study of Czarnocińska et al. [41], carried out among young Polish women, it was shown that women from more affluent regions more often represented the pattern characterized by a high consumption of vegetables and fruit, while a high consumption of fast food and sweets was characteristic of poorer regions. On the other hand, the presence of the "traditional" DP did not show any regional differences among young women. Our study showed no differences in the dietary patterns of elderly people from two regions with different GDP indexes. However, the lack of previous research carried out among older people in Poland makes it difficult to interpret the obtained results unambiguously. Despite the differences in GDP between the two regions, their territorial closeness (the southern part of Poland) could eliminate the effect of cultural differences, and, therefore, no relationship between the region and the DPs was demonstrated. In future research, attention should be paid to the search for other features of a region as potential factors differentiating both dietary patterns and food consumption in elderly people. The lack of regional differences in DPs may also suggest that their diversification in the elderly results from factors that are more characteristic of the individual and their immediate environment, rather than from more global indicators.
Limitations in the functioning of the elderly, resulting from deteriorating health, but also from limited social contacts, are associated with disability in everyday activities [50], which may make it difficult for them to meet their own needs. It has been confirmed that living with other people correlates with the quality of functioning of older people; for example, Ren and Treiman [51] showed that living only with one's spouse was associated with less satisfaction with life and greater depression compared to living with adult children. In addition, previous studies have found an association between living alone and a poorer diet or increased nutritional risk [52]. Among other things, it was shown that people who lived alone consumed more food outside of their home and skipped more meals than people who lived with their spouse [53]. They also ate meat, fish, seafood, raw vegetables, and legumes less frequently [54], and older women living alone tended to simplify the dining situation [55]. Nevertheless, in many studies, such a relationship has not been confirmed [34-36,53-55]. Our study showed no differences in the dietary patterns of people living alone and those living with a partner. However, such a relationship was confirmed for the group of people living with their family. It turned out that older people living with their family had greater chances of adherence to the "prudent" DP, whether or not the respondent had a partner. Thus, the presence of adult children and grandchildren may favor a more adequate diet in the elderly. The study of Liu et al. [24] confirmed that people living with relatives consumed more food, and that their diet was characterized by a higher quality and a more regular frequency of meals. In our study group, the fewest people living with their family were in the upper tertile of the "traditional" DP, while most of these people were in the upper tertiles of the "prudent" and "adverse" DPs, with the latter DP being mostly characteristic of the respondents who did not have a partner. This may mean that family members, rather than the elderly themselves, are more important in the food choices of these households [24], as evidenced by a smaller share of traditional food and a greater share of both recommended and non-recommended foods. Food decision-making processes may be dominated by those members of the household who have greater persuasive power, e.g., greater nutritional awareness that encourages care for the quality of the diet [56], but also special needs and preferences for food, e.g., children [23,57].
It is known that socioeconomic status (SES) is one of the factors that shows a strong relationship with health and diet in high-, middle-, and low-income countries [58-61]. Some studies have confirmed the relationship of socioeconomic status with identified dietary patterns [62,63], but also with higher scores on the healthy eating index and the Mediterranean diet score [63]. In middle- and low-income countries, a higher SES is associated with a more adequate diet, manifested by a higher consumption of fruit, vegetables, dairy products, and unprocessed meat [19,20,64-67]. Despite the fact that Poland is a middle-high-income country [25], the obtained results did not confirm the relationship between high socioeconomic status and a healthy diet. It turned out that fewer respondents with a high SES represented the upper tertile of the "prudent" DP, but also the upper tertile of the "traditional" DP, compared to those with a low SES. In addition, their chances of being in the upper tertiles of these DPs were lower than those with a low SES (by 54% and 32%, respectively). On the other hand, more people with a high SES often ate food that was unfavorable to their health (the upper tertile of the "adverse" DP). These findings are confirmed by the results of some studies showing that in high-income countries, a higher SES was associated with better nutrition, while in middle-income countries, a higher SES was associated with both healthy and unhealthy dietary patterns [27,64,67]. The socio-economic development of these countries was conducive to changes in diet, leading to the transition from traditional diets to diets rich in fats and sugar [68]. This may explain the lower adherence to the "traditional" DP and the greater adherence to the unhealthy pattern (the "adverse" DP) in the study group. Thus, the changes towards a healthy diet, mainly characteristic of highly developed countries, but also of middle-income countries, were not confirmed in our sample [69]. Nevertheless, previous studies conducted in the Polish population showed that the elderly with a higher SES were characterized by a greater variety of food consumption and a vaster range of healthy products, including fruit and vegetables, dairy and cereal products, fish, and fruit juices [70-73]. Thus, future research should investigate the potential pathways through which SES influences both food intake and adherence to dietary patterns in the elderly.
The identified differences in the dietary patterns of the elderly contribute to a better understanding of food intake, especially in relation to people's social situation. The study did not show differences in DPs between the two regions; however, living with others and the SES showed relationships with the DPs. A smaller share of traditional food, and a greater share of both recommended and non-recommended foods, were observed among those living with a family. Therefore, improving the diets of the elderly requires, inter alia, the involvement of younger people in learning about nutrition for the elderly. The study results can be used to support public policies and nutritional counseling among the elderly, but also among younger people living with the elderly. Interventions aimed at promoting healthier diets amongst the elderly should take account of social factors related to dietary patterns, which may mediate the effects of age-related factors that lead to health deterioration. However, future research is needed to more deeply recognize the factors and mechanisms that determine the diets of older people.
Limitations of the Study
The data used for the analysis were collected from the Polish population, and so they cannot be generalized to other populations, especially those of different cultural backgrounds. Moreover, the KomPAN questionnaire used in the study was validated in a group of individuals who were up to 65 years of age. The cross-sectional design and the collection of data at a single point in time did not permit conclusions to be drawn about causality. The use of PCA to determine dietary patterns can be considered a methodological limitation of the study. The "a posteriori" nature of the identified patterns provides a realistic reflection of dietary patterns in our study population; however, the decisions on the extraction of the patterns from the PCA are, to some extent, subjective, and may affect the final dietary patterns that are analyzed. The three patterns identified in this study explain 32.9% of the overall variance, which is not a high rate; however, it is higher than in some other studies [15,74,75]. Another limitation of the study is that the sample was predominantly composed of women; thus, further exploration of the DPs in the male population would provide additional insights. The literature suggests that older men, especially those living alone, tend to have poorer cooking skills, associated with a lower quality of diet [76], and may be more affected by changes in their living situation.
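To make the PCA-related extraction decisions mentioned above concrete, the following is a minimal sketch (not the study's actual code) of "a posteriori" pattern extraction, assuming a hypothetical food-frequency matrix; the real analysis used the KomPAN item set in STATISTICA and would typically also involve factor rotation before labeling patterns.

```python
# Minimal sketch of "a posteriori" dietary pattern extraction via PCA,
# mirroring the approach discussed above. The input file and column names
# are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

freq = pd.read_csv("food_frequencies.csv")  # rows: respondents; cols: food items
X = StandardScaler().fit_transform(freq)    # standardize item frequencies

pca = PCA(n_components=3)        # number of retained patterns is a judgment call
scores = pca.fit_transform(X)    # per-respondent pattern scores (used for tertiles)

# Share of total variance explained by each retained pattern, and in total;
# in the study, the three patterns together explained 32.9%.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())

# Component loadings (item-pattern associations, up to scaling) of the kind
# used to label patterns such as "traditional", "prudent", and "adverse".
loadings = pd.DataFrame(pca.components_.T, index=freq.columns,
                        columns=["DP1", "DP2", "DP3"])
print(loadings.round(2))
```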
Conclusions
Three dietary patterns were identified in the Polish elderly, i.e., the "traditional" DP, the "prudent" DP, and the "adverse" DP. The adherence to these dietary patterns was associated with socioeconomic status and living arrangement, i.e., living alone, with a partner, or with family, while the region did not differentiate them. Fewer people living with their family were characterized by frequent consumption of traditional food, while more of them often consumed food typical of both the "prudent" and "adverse" DPs. The presence of a respondent's partner in the case of living with family did not differentiate the adherence to the DPs. Logistic regression models showed that a high SES decreased the chances of adhering to the upper tertiles of the "prudent" and "traditional" DPs, while living with family increased the chances of adhering to both the upper and middle tertiles of the "prudent" DP. Identifying the dietary patterns of the elderly population contributes to a better understanding of the food intake of senior citizens living in different social situations, and can be used to support public policies and nutritional counseling among this age group. Interventions aimed at promoting healthier diets amongst the elderly should take account of the underlying social factors that influence dietary patterns, which may mediate the effects of age-related factors that lead to health deterioration.
Data Availability Statement: Data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Exploring Social Media Contexts for Cultivating Connected Learning with Black Youth in Urban Communities: The Case of Dreamer Studio
Using the Connected Learning framework as a conceptual lens, this study utilizes digital ethnographic methods to explore the outcomes of a Hip-Hop Based Education program developed to provide music-related career pathways for Chicago youth. Using the narratives of the participants within the program, I draw on online participant observation and in-depth interviews to explore the link between the tenets of Connected Learning and digital participation in this artistic community of practice. I explore participants' work within social media platforms toward building their creative skill, cultivating a public voice, connecting to mentors, and communicating in ways that strengthen the social bonds within their peer community. This study's findings affirm prior studies suggesting that late adolescence is an important time frame in which young people develop social identities online in affinity spaces, in ways that are tied to civic engagement, self-empowerment, and critical skill development for their future pathways. To conclude, I suggest that investigating participant activity on social media platforms as a part of field work can help ethnographers to better connect a program's impact to the agency and life trajectories of its youth participants.
Introduction and Background
When one arrives at Dreamer Studio, Marvel (affectionately known as Vel to his students) is almost always deep in the midst of mixing audio, sitting in a large black leather chair in front of a large Apple computer. He looms in the center of the room, often directing commentary to his students without turning from the screen in front of him. He's lanky and thin, wearing a blue Adidas tracksuit and gold rosary beads. Formerly a touring artist who was signed to a major record label in the early 2000s, Vel now spends basically every day at Dreamer, helping youths who seek to follow in his steps. Opened in July 2014, the community recording studio is a collaboration with a national nonprofit where he serves as a mentor and a social service agency that owns the loft space that the studio occupies.
The first day I visited the studio, there were more than 30 young artists, aged 14 to 22, bounding about the space. Vel told me proudly that they all come from different pockets of Chicago, though they're largely clustered in neighborhoods around the south and west sides that are considered among the most dangerous in the city. As I talked with the youth, they told me that the studio offers them a safe place to meet and collaborate with like-minded musicians from other areas. While many Dreamer artists are just finding their footing as musicians, they are a testament to Vel's hypothesis: that there is a demand for organizations to provide Black youth with spaces where they can learn and hone their creative skills. Even with all of his seeming success, it is still an uphill climb. Every year, Vel must pound the pavement and fundraise in order to keep his space up and running. Many times, this has required tapping into his personal finances. Vel knows that without Dreamer Studio, it is more than likely that many of the student participants would defer their creative ambitions for the allure of illicit activities or succumb to the violence in and around their communities.
Unfortunately, Vel's struggles to keep his creative space afloat are not atypical. Access to spaces and places that provide pathways to lucrative creative careers in the U.S. has historically been stubbornly tied to race and socioeconomic background (Florida 2019; Watkins and Cho 2018). However, research indicates that through Connected Learning (Ito et al. 2013), modern technological tools are increasingly allowing adolescents the ability to individually navigate skill development for their personal interests, develop connections with peers, and improve their networking skills (Callahan et al. 2019; Watkins 2019). Professional development has also been shown to manifest through online affinity spaces, social media platforms, and their affiliated creator communities, where these youth can build kinship bonds with others who hold similar aspirations and social identities, unbounded from their physical location (Gee 2017). These kinship (and often neighborhood) driven communities are what Duffy and colleagues (2021) call creator pods, or social media relationships that focus on audience quality and knowledge generation and not just collecting a large quantity of random followers. Though digital affinity spaces can provide true expertise, they are often exclusionary to those with dissimilar socioeconomic backgrounds and racial identities (Jenkins 2007). In particular, Black youth in low-income communities often lack access to the digital tools required for participation or do not know about opportunities to connect with relevant resources, and thus are not given proper pathways to use social media in ways that further hone their creative skills or shape a career based on their interests.
That said, these exclusionary processes can still be disrupted through mentorship in programs where marginalized youth form relationships with peers and mentors who both share their background and are part of a high-value field or career path (Ben-Eliyahu et al. 2014; Raposa et al. 2019). For Vel, reorienting Dreamer Studio to also serve as a digital affinity space is one strategy for allowing his participants to break through exclusionary structures and facilitate music career pathways whether or not the physical studio space is accessible. Through social media, one can now access the world through one's own perspective and interests and eschew the uniform vision constructed by the media (Jenkins 2007). Integrating the social media relationships and Do-It-Yourself (DIY) skills that successful creatives form on their own, Vel's students talked to me about how they utilized Twitter, Instagram, ClubHouse, and Twitch to brand themselves and digitally present themselves to a public to forge both personal and professional relationships. Despite this anecdotal evidence, further research is still needed to elucidate youth perspectives on how programs like Dreamer Studio are looking to social media to fortify the collectives they cultivate, and how their communities of practice, though tied to physical spaces, leverage digital media to generate, gain, and share knowledge among one another.
This study explores how Dreamer Studio has produced significant and lasting impacts on its participants' transitions into creative labor beyond the physical studio setting. This paper utilizes methods of digital urban ethnography (Lane 2016) to ask how active participants, mentors, and alumni of the program organized as a creator pod to network and strengthen their artistic community of practice through the utilization of social media platforms. During in-depth interviews, participants suggested that the self-empowerment taught in the Dreamer program was only the first step towards furthering their own self-started transitions into creative work. Vel, the studio's staff, participants, and alumni explicated and demonstrated this in three key practices: (1) corralling as a pod, (2) collaborative problem solving, and (3) DIY circulation. Ultimately, I argue that participants of Dreamer Studio are a case exemplar of how social media provides a vital avenue for Black youth and their mentors to interact and that, in defining part of the success of Dreamer Studio as a matter of access and opportunity to engage with their passions in expert ways, communication with peers through social media platforms grounded the knowledge and practical know-how acquired through in-person participation at the studio.
21st Century Skills, DIY Careers, and Youth Participation on Social Media Platforms
Social media, video sharing, and music streaming platforms have drastically changed the digital participation strategies of aspiring musicians, as they now can move between and within platforms to promote themselves while seeking to professionalize their careers (Haynes and Marshall 2018; Hesmondhalgh 2020; Powers 2015). In particular, social media platforms have been shown to greatly support the work of musicians by providing launching pads for these burgeoning artists to speak directly to fan communities, build personal relationships with them, and let them share in their creative process (Baym 2012). These relationships often now go beyond simple "friending" to direct messaging, portal shows, and live streams of mundane activities that lead to other kinds of intimate interpersonal contact (Rendell 2021). A rapidly professionalizing and monetizing wave of diverse, multicultural, previously amateur creatives from around the world have harnessed these platforms to incubate their own media brands, engage in content innovation, and cultivate often massive, transnational, and cross-cultural fan communities (Baym 2018).
For youth, this platformed presentation of self is largely important to occupational identity development (Watkins and Cho 2018), becoming an entertainer (Cunningham and Craig 2021), and navigating today's creative workforce (Duffy 2018). All of these factors have provided a unique and precarious opportunity to Black youth in urban America, who have been shown to have a strong aspiration for careers involving digital media, often while nestled in communities of digital disadvantage due to their race, class, and geography (Perry and Raeburn 2017; Stuart 2020; Watkins 2019). This has led to philanthropic efforts to seriously invest in supporting curricula to empower marginalized students to gain civic media literacy, which broadly means thinking critically about reimagining the social function of media, leveling digital inequities, and the implications of technological advancements on their everyday lives (Mihailidis et al. 2021). In considering the digital participation of urban youth of color, interest-driven practices on social media platforms are online activities these young people find appealing but also central as they build their creative networks and peer relations and seek cultural capital (Watkins and Cho 2018).
Recent scholarship has argued that formal efforts engineered towards closing the learning pathways gap for Black youth should be more focused on leveraging their creative and cultural capital in the realm of Hip-Hop music (Emdin 2021; Evans 2021; Kramer et al. 2021). For example, African American youth's involvement in Hip-Hop cultural practices has previously been shown to play a significant role in teaching them the importance of collaboration and innovation with technology, as well as in their identity development (Love 2015). Hip-Hop communities of practice also provide both a sense of belonging and acceptance (e.g., the development and/or strengthening of relationships) to youth within peer cultures (Dimitriadis 2009; Helmer 2015; Seidel 2011). Despite a proliferation of social and behavioral research pointing to the positive impact of Hip-Hop music production within programmatic interventions (Petchauer 2015), very few studies have thoroughly attempted to track the impact of Hip-Hop Based Education programs on pathways development beyond the school buildings or community centers and into the everyday lives of participants beyond the program.
Thus, there is a need for research that examines how social media platforms are redefining community youth media programs, career training opportunities, and the spaces where career preparation takes place. Since youth are continually re-negotiating their identities and actively reinventing themselves on the platforms they participate in, there is a need to understand how programs can better accommodate (and evaluate) communities of practice as they traverse digital and physical spaces. Additionally, there is a need to understand what the hybridity of these communities means for career advancement. Many scholars have claimed that Hip-Hop (as a pedagogy of practice) offers an education where learners can work towards their desired aspirations via mediums or learning experiences that are familiar to them and build upon their already acquired knowledge (Emdin 2020; Hill and Petchauer 2013). It is also apparent that Hip-Hop provides new literacies, acquired in social media platforms, that are useful to African American youth for cultivating skills and competencies with digital tools and technologies (Evans 2020, 2021). The findings of that work suggest that measuring the success of a youth media program is not only tied to how its participants discriminate and evaluate media content, but also to how they use media independently, communicate with peers, and gain skills to construct (and distribute) their own alternative media.
Theoretical Framework
The value of cultural production is generally organized to ensure that certain individuals are automatically advantaged, or disadvantaged, based on their cultural capital (the knowledge, behaviors, and skills that a person can tap into to demonstrate cultural competence and social status), or lack thereof (Bourdieu 1986). As an art form deeply rooted in disadvantaged urban communities, Hip-Hop has been depicted as a social identity that has historically been viewed from a deficit standpoint (Rose 1994). However, through the accumulation of digital clout (Hip-Hop-inflected cultural capital), modern youth gain social status and economic mobility through social media platforms and participatory culture (Baym and Evans 2022). Though sometimes associated with deviance and crime, digital tools and technology have emerged as a main source of social capital in low-income urban communities of color (Lane 2016). This form of social capital within Black communities is often context-specific and acquired depending on the situation (Hall 1992). To that point, previous work has described how Hip-Hop music has served as a site of professional skill development and of uncommon pathways of economic mobility for the more creative marginalized Black youth in urban America (Forman 2002; Harkness 2013; Lee 2016; Quinn 2004), often subverting (and finding their clout outside of) the typical musician pathway of getting signed to a major record label (e.g., Arditi 2020). For example, Stuart (2020) detailed the ways in which young Black male rappers in Chicago used Instagram, Facebook, and Twitter to bypass corporate-controlled record labels, subvert structures of gatekeeping, and cultivate a Hip-Hop community of practice that harnessed substantial digital clout. Not surprisingly, social media platforms continue to be central to Hip-Hop's community of practice and have, in many ways, developed into a formidable source of self-empowerment for its most visible cultural producers.
The theoretical framework most closely tied to the concept of Hip-Hop Based Education (HHBE) providing Black youth with learning pathways for creative careers is Connected Learning. In a Connected Learning context, young people have increased access to a wider ecology of information, technology, and interest-driven learning communities (Ito et al. 2013). Within this framework, peer cultures and online communities provide ways for young people to learn important skills, cultivate relationships, and develop their own identities (Barron et al. 2014; Century et al. 2018). Theoretically, these capabilities should provide more pathways for young people to develop deeper identification with a personal interest; develop creativity, expertise, and skill; and connect to professional aspirations (Ito et al. 2013).
In those instances, the framework suggests that knowledge and knowing are associated not only with the teacher, the curriculum, or outside experts but with every peer culture that the youth participate in. That is, learners are seen by themselves and by others as knowledgeable, committed, and accountable participants whose identities are variable, multi-vocal, and interactive (Wenger et al. 2002). Learners are held accountable for contributing to authentic problem solving, knowledge co-creation, and learning. In connected learning, learners are also provided with opportunities to develop interpersonal relationships and to learn with and from others. Thus, these learning environments broaden traditional forms of learner agency and accountability by expanding possibilities for engagement and bringing in new audiences with whom students collaborate and create new knowledge and understanding.
Little has been written to evaluate the role of social media platforms in bolstering a program's community of practice, or to assess how those peer cultures might contribute to how participants make their transitions into creative work. Additionally, there is a call from researchers to better understand how these Connected Learning pathways develop for Black youth and how they make connections between their personal interests, learning opportunities, racial identity, and real-world contexts (Emdin 2021; Garcia 2013; Watkins and Cho 2018). For instance, Garcia (2013) has argued that participatory media practices are a form of civic engagement that can connect disadvantaged youth of color to an understanding of their place as citizens in larger communities. Taken a step further, one could argue that young Hip-Hop artists are creating their own social movements. As such, the current study suggests that sites of Hip-Hop artistic practice should be recognized and encouraged by youth advocates and researchers seeking to explore this argument and how social media can play a role in strengthening artistic communities and civic engagement, as well as kinship bonds between Black youth and potential mentors.
Aim of the Study
The broader aim of this study is to explore how Dreamer's program staff and youth participants came together on social media platforms to meet Vel's goals of building community, circulating information, and furthering the skill development of his participants. Furthermore, this study investigates how youth from Dreamer Studio were using social media platforms to engage in Connected Learning, and begins to examine how the platforms themselves shaped different aspects of this process. As a part of an ongoing larger project, this study extends the author's previous work on Connected Learning and Hip-Hop Based Education in the formal classroom (Evans 2021) to understand how Hip-Hop Based Education programs meet the needs of students in out-of-school time, and how they allow them to learn in contexts beyond the formal confines of their program facilities.
Research Questions
Given that this study seeks to examine specifically how Dreamer's community of practice aids Black youths' transitions into creative work, the analysis was guided by the following conceptual research questions (RQs):
RQ1 What are the critical elements of Dreamer Studio's social media ecologies, and how does participation create Connected Learning for participants?
RQ2 Does the engagement of these youth in Dreamer Studio's social media ecologies present them with opportunities to gain skills that they would not otherwise have access to?
Methodology, Participants, and Profile of the Sample
As a researcher, youth advocate, and Hip-Hop musician, I have spent upwards of twenty years as a key participant in Chicago's local Hip-Hop scene. As such, I have been afforded the opportunity to serve as a teaching artist in many settings. It was through a teaching experience at Dreamer Studio that I was compelled to do research on best practices in the community studio setting. My previous relationships with Vel and many of the youth in the program afforded me an access point to the field site in ways that allowed me to build rapport with the student participants who appear in this study.
The participants interviewed for this study included 10 (eight male and two female) Black students aged 14 to 24 who utilized Dreamer Studio during the 2019-20 school year, and Vel, the executive director and owner of the studio. There was no incentive for interviews or participation. To protect the anonymity of the participants, pseudonyms were assigned to the studio, all students, and the teacher in this study.
Data for this project were collected over a 16-month period and initially relied on in-person fieldwork, including attendance at recording studio sessions, program workshops, live podcasts, and local open mic events. However, I found my biggest resource was observing respondents on social media, particularly Twitter, YouTube, Snapchat, and Instagram, in that chronological order. I was guided to these mediums by students and was allowed to friend them on each platform. This was also where I received updates on meetups that were occurring on social audio and live-streaming sites like ClubHouse and Twitch. I immersed myself in conversations that youths were having on these platforms and then had offline conversations (via FaceTime) with students to ask participants questions like: What does this mean? Why did you say it this way? This is in line with Lane's (2016) take on digital urban ethnography, Patton and colleagues' (2020) contextual analysis of social media, and Brock's (2018) critical techno-cultural discourse analysis (CTDA), which draws from technology studies, communication studies, and critical race theory in requiring researchers to include the perspectives of cultural producers and to seek to understand how their culture or lived experiences shape the technologies they use. Ultimately, I focused my work on virtual participant observation, seeking to understand the creative lives of these students outside of the program.
I initially asked the participants to sit for interviews in auxiliary spaces where they participated in Hip-Hop culture and created music. However, due to the COVID-19 pandemic, I conducted several interviews for this study via Zoom or Apple's FaceTime, recording and transcribing them before distilling recurrent themes. The analysis of this data was guided by the constant comparative method (Charmaz 2014), in which bits of data were continuously contrasted with one another to develop categories and distill recurrent themes.
Finally, I used evaluation methods associated with the Connected Learning (CL) framework to inform my coding strategies. In this case study, I did not focus on CL design principles. Instead, I looked at whether the digital practices in which the students were engaged enabled the CL experiences of civic engagement and self-expression, increased accessibility to knowledge and learning experiences, and/or expanded social support for interests and empowerment for the student (Ito et al. 2013, 12). Employing MHA (Measures of Human Achievement) Labs' 21st Century Skill Building Blocks for participatory media projects (MHA Labs 2012), the initial codes selected to analyze field notes and online student discourse were: personal mindset, planning for success, problem solving, and social awareness. After first reading the data openly as an entire data set and taking initial field notes, I selected these specific themes for more focused and integrative coding because they aligned with what I assessed to be the goals of the program as described to me by the executive director.
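As an illustration only (not the author's actual workflow), the sketch below shows how coded excerpts could be tallied against the four initial codes named above; the excerpt texts and data structure are hypothetical placeholders.

```python
# Illustrative sketch: tallying how often the four initial codes drawn from
# MHA Labs' skill building blocks appear across coded excerpts of field notes
# and interview transcripts. Excerpts below are hypothetical placeholders.
from collections import Counter

CODES = {"personal_mindset", "planning_for_success",
         "problem_solving", "social_awareness"}

# Each excerpt is stored with the codes assigned during focused coding.
coded_excerpts = [
    {"text": "Ka$h explains posting collectively to work the algorithm.",
     "codes": {"problem_solving", "social_awareness"}},
    {"text": "Gemini announces her new rental studio on IG Live.",
     "codes": {"planning_for_success", "personal_mindset"}},
]

tally = Counter()
for excerpt in coded_excerpts:
    unknown = excerpt["codes"] - CODES
    if unknown:  # guard against typos in the coding scheme
        raise ValueError(f"unrecognized code(s): {unknown}")
    tally.update(excerpt["codes"])

for code, n in tally.most_common():
    print(f"{code}: {n}")
```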
Critical Elements to Dreamer Studio as an Artistic Community
Participatory culture is a term Jenkins (2007) claims is "emerging as the culture absorbs and responds to the explosion of new media technologies which make it possible for average consumers to archive, annotate, appropriate, and recirculate media content in powerful new ways" (25). Outside of mobile communication, Black youth in Chicago generally have less expansive access to the Internet and its related digital tools that have been shown to be valuable in everyday life and the workplace (Barron et al. 2014; Century et al. 2018). This issue not only limits their future employment options and income potential, but also hinders their academic success and overall participation with digital media (Robinson et al. 2015).
In this study, participation refers to the educational practices and creative processes facilitated by Dreamer's social media ecologies. Overall, I found that Dreamer Studio encouraged its youth to develop the skills, knowledge, and kinship ties needed to be full participants in contemporary culture. I found that the studio's participants were actively given pathways for career development through three critical elements:
• Corralling as a pod - developing friendships and group memberships, formal and informal, in online communities centered around Hip-Hop, such as Twitter, Tik Tok, and Instagram.
• Collaborative problem-solving - working together in teams, formal and informal, to complete tasks and to develop and exchange new knowledge (such as through conversations on ClubHouse).
• DIY circulation - self-distribution and promotion of media among peers and mentors (such as livestreaming on Twitch, or posting music to SoundCloud and YouTube and/or social videos to Tik Tok).
Unintentionally, it was through the logic of these elements that youth themselves described how Dreamer's online ecologies reframed the concept of artistic community and achieved program success.
A Team Effort: Corralling and Collective Affiliation on Instagram (IG)
In Hip-Hop, the process of career development relies heavily on developing convenient connections and networking with peers in order to form a creative collective to release and circulate one's creative work (Condry 2006, 88). Baym and Evans (2022) describe this using the term "corralling," which refers to the necessity of the posse in being a rapper, the number of hours required to build and maintain a sufficient collective, the blurred lines of fandom, collaboration, and friendship within platforms, and the amount of sheer effort involved in digitally promoting a product or person as a unit. In contrast to the pre-digital era, which treated music fans solely as spectators, when they use corralling, musicians engage fans as equals, often mobilizing the most engaged to serve in more official roles within their professional support system. Participants in the Dreamer program continually expressed that they collectively strived to make the studio space into a platform to promote their creative work online and to construct and showcase their digital selves. Besides uploading music videos to YouTube and Soundcloud, Instagram was described as the primary platform that allowed them to communicate their artistic lives. The program's Instagram account was started by Ka$h, a songwriter/producer who framed himself as the studio's official social media manager. When asked about the value of having this collective account, Ka$h explained: I realized that if we all posted collectively, then the algorithm would push our posts to the top of our friends' timelines back to back. Guerilla marketing just went from overpopulating the street to overpopulating Instagram.
What Ka$h is saying here is that Hip-Hop's typical modes of word-of-mouth marketing have shifted from passing out flyers in the street to utilizing platforms for metrics, engagement, and presentation. He speaks very astutely about how participants would gossip online about the algorithms that drive social media platforms and how to better gain visibility. In this quote he speaks of "overpopulating" Instagram as a strategy to combat a lack of representation in search engine optimization and recommender systems in order to achieve visibility on the Instagram timeline. When asked to elaborate on the significance of this work, Ka$h proclaimed: Yeah, so like, Instagram doesn't really support rap artists very well. So, one thing I was doin' recently was tryin' to find everybody's second stream. Like, "cuz, hip hop, rappin", whatever that is, producin' might have been they first one. But I was like, what's your second -your second avenue of talent you have, or interest? How can we use that to promote our collective?
In this moment, Ka$h was explaining that he and the other participants of the program held many conversations about Instagram that went beyond the promotion of their music, addressing how to build a support system to develop themselves as artists. He emphasized that forming a creative pod was the optimal way to approximate the resources of, say, a corporate-tier record label's artist development department.
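To make the logic of "overpopulating" a feed concrete, the toy model below is an assumption for illustration, not Instagram's actual ranking algorithm: it scores posts by recency and engagement, and shows how pod members who post in the same window and boost one another's engagement can occupy the top of a follower's feed back to back, as Ka$h describes.

```python
# Toy illustration of corralling (not Instagram's real ranking): a follower's
# feed is modeled as posts scored by recency times engagement. Coordinated
# pod posting and mutual engagement pushes pod posts to the top together.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    minutes_ago: float
    engagements: int  # likes/comments, inflated here by mutual pod engagement

def feed_score(post: Post) -> float:
    # Hypothetical scoring: newer and more engaged-with posts rank higher.
    recency = 1.0 / (1.0 + post.minutes_ago)
    return recency * (1 + post.engagements)

posts = [
    Post("random_acct_1", minutes_ago=5, engagements=3),
    Post("pod_member_a", minutes_ago=10, engagements=25),  # boosted by the pod
    Post("pod_member_b", minutes_ago=12, engagements=22),
    Post("pod_member_c", minutes_ago=15, engagements=20),
    Post("random_acct_2", minutes_ago=8, engagements=2),
]

# The three pod posts end up ranked "back to back" at the top of the feed.
for p in sorted(posts, key=feed_score, reverse=True):
    print(f"{p.author:15s} score={feed_score(p):.2f}")
```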
Lamar, a rapper who was also a journalism student at a local university, reiterated this teamwork sentiment: I been goin' to Dreamer for like about 5 years, I think since like the first year they opened, that's when I started goin'. I'd say like, out of a five-day week, I'd try to go through the studio like, if not every day, about three or four days a week. And even if it's not to record, even if it's not you know, to lay somethin' myself, I just go online and check in with everybody. See what everybody else has been workin' on. If anybody else needs help with anything, as far as mixin' and masterin', critiquing, marketing, anything like that… Recent research has shown that videoconferencing and social video platforms provide learning opportunities just as powerful as in-person experiential learning (Hassinger-Das et al. 2020). As both Lamar's and Ka$h's comments indicated, talking online with one another was not always tied to casual conversation but rather to exchanging knowledge in ways that might develop their creative skill, spawn constructive dialogue on a creative project, or improve their knowledge of local resources. DJ Gemini, 21, explained: A person needs to have a vision for their brand more so than having the talent to do the job. You can make all the songs you want but people have to be in your corner for you to win. We don't just build friendships, we find collaborations (through Dreamer).
One of the primary ways artists who participated in Dreamer's community of practice collectively used social media to choreograph their professionalism was by holding digital networking sessions on Instagram Live and having listening sessions on the studio's Instagram (IG) TV channel. During these events, artists would either solicit critiques for their new music, invite studio professionals to share resources about their facility, or hold conversations about different topics regarding professional advancement.
For example, Gemini used one of these sessions as a way to announce that she was opening a studio of her own that she would rent out for deejays (DJs) to rehearse, for novices to take lessons (in-person or virtually) as well as for private events or podcast tapings. During this session she streamed live from her new rental loft in Logan Square, giving a virtual tour and performing a live DJ set. She concluded her set by announcing that those who are affiliated with Dreamer could rent her space at an exclusive discount by following her on Instagram and commenting "Dreamer 4eva" on her most recent post. When asked about her motivations to announce her studio in this way, she stated: I could have promoted my studio by myself over the next six months and I wouldn't have been able to get the same amount of attention I got for myself online (through Dreamer) in less than an hour. Dreamer is my homebase and if it wasn't for our community, I wouldn't have been inspired to enterprise and build my own space. We need places we can graduate to when we just want to focus on our work beyond the program. The more places, the more opportunities. We are building something where we provide and support each other as resources. We aren't in competition, we are in collaboration.
Lamar elaborated on how the artists in the studio found the most visibility for their work by promoting the Dreamer name as a social movement that anyone could join and feel important within: We all just keep tryin' to strike oil, really. But really just a legacy thing. We want everything to have some type of longevity to it. So, we ride for each other. Most of the people come here, like they still young mind. But the fact that we're able to put all of that to the side and still make music, still do shows together, still find different ways to embrace each other. You know, what I'm sayin'? All of that makes a difference. And all of that helps -and that's what makes it bigger than the music, because… We got a collective story to tell.
When Lamar says "tryin' to strike oil," he seemed to be speaking directly about seeking to gain financially from their creative labor. It was apparent that he and others at Dreamer were pursuing creative work that would qualify as aspirational labor. That is, participants believed their social media posting activities had the potential to pay off in terms of future economic reward. Additionally, his mention of having "a collective story" indicates that he viewed success for one of the artists at Dreamer as visibility that would raise awareness for their collective identity.
Friendship-driven practices on social media are ones that young people of Hip-Hop find appealing but also necessary as they build social communities and peer relations and seek cultural capital (Watkins and Cho 2018). What participants like Lamar illustrated is that one of the strong points of Dreamer's community of practice was that participants from a variety of age ranges (and knowledge bases) came together from different parts of Chicago. Due to this level of diversity of thought and lived experiences, the participants felt a sense of belonging to a movement larger than themselves, and felt safer being expressive, visible, and knowledge-seeking in online spaces.
Tik Tok and YouTube as Sites of Virtual Learning and Product Circulation
Collectively, participation was a process of trial and error for Dreamer participants seeking to engage themselves online. Even still, social video content produced by participants on both YouTube and Tik Tok served to produce an extended network of intimate strangers whose members legitimately expanded each other's knowledge base. Ron talked about how watching Tik Tok videos shared by other participants using the Dreamer Studio hashtag taught him the most about how to use certain audio equipment needed to be an effective music producer: So, originally, I was just like, okay, I'm just gonna work, get money to buy equipment and just, you know, self-teach everything. 'Cuz, it's not like the talent wasn't there with the writing, or like I'm sayin', with the production per se, but I was missin' a lot of technical skills, and watching (my friend Re@L's posts) have given me so much knowledge on the way that things go. How to build a home studio and make quality stuff at home, you know?
Beyond technical skills, Vel talked about how participants self-organized the studio's social media initiatives and how having their creative community online was likely the most critical step for his students to professionalize their creative work: I noticed that they were interested in how they could become more visible on social media. It used to just be about having dope music but that's not enough anymore. In this era, having a funny personality on social media can take you much further than any song. That's why Dreamer and the social media community we have developed is so vital to their artistic growth.
To his point, 16-year-old rap artist Meechy was a primary example. A sophomore in high school, Meechy already boasted over 9,000 followers on Instagram when we sat down and talked. When he put out a song on Dreamer's SoundCloud called "Winners Never Lose," he expressed excitement over how the collective rallied around him using platforms like Instagram, Twitch, Tik-Tok, YouTube and Twitter to share the song. The song garnered over 10,000 plays and over 1,000 downloads in 24 hours on the Dreamer SoundCloud page. This culminated in him gaining an opportunity to perform at a local skate competition sponsored by the shoe company Vans. Meechy shared his thoughts on how this transpired: It was major. Just having that social media presence, From startin' a YouTube channel to whatever it is. They learned how to brand me. You know what I'm sayin'? So, post -like pickin' the right hashtag, figuring out the algorithm for who they are supposed to be on Instagram and Facebook and like makin' a website. I saw real tangible results from the collective work of the studio. I feel like everyone treated the success of my song like it was their own.
In another example, when Lamar released his new single on his Spotify, he talked about now being able to secure shows in Los Angeles, Houston, and Atlanta based on a booking agent finding a snippet of his music video on the Dreamer Instagram page: Since I put the single out on Spotify, stuff has just gone to a whole new level. I think we were able to figure out how playlists and algorithms can work for the collective and we've just been reaping the benefits of that. My music is dope but I understand that without an audience, I really can't have a career. Social media is essential to that, so everybody just shares each other's work and gives tips on how to work the system so everyone wins.
Overall, interviewees made it clear that social video platforms allowed them to address questions of skill development that existed within their larger collective while also supporting the circulation of their creative works in the wider marketplace of attention. As is clear from the above examples, although not all of Dreamer's young people were able to articulate the importance of providing certain kinds of information to the larger collective, they understood the collective was key to their development as emerging music professionals and entrepreneurs.
Critical Dialogue and Knowledge Exchange on Clubhouse
Though Vel was clearly the adult supervisor and primary teaching artist of the studio, Dreamer's overall community was far more participatory than it was "top down." The physical studio space was able to thrive as a place where these youth relied both on finding their truth and on getting honest critique. Their conversations on the social audio platform ClubHouse, similarly, were brutally honest, and there was a dialogic process between the artists and their studio community members in choosing how to pursue production and promotion for their work. For instance, the following quote was given by Lamar during a conversation with staff, alumni, and students of Dreamer Studio on ClubHouse: Like, music doesn't have a user's manual. There is no way to figure out everything you need to do with some type of text. You gotta study; you gotta talk; you gotta collaborate. That's the only way you're gonna figure out what's gonna work, what's not gonna work…We see rappers doin' shows. We see rappers on TV. We see blah, blah, blah. How do they set those up? How do they get those opportunities? Where were they at when it happened?
In this particular quote, Lamar points out that there is an extreme level of mystery to the process of transitioning from aspirational creative laborer to being a paid professional artist. He speaks about parasocial relationships as not providing enough depth for an emerging artist to study and emulate. In that regard, Dreamer's Clubhouse conversations provided him with advice from peers that collectively were going through trials and tribulations of pursuing career pathways in Hip-Hop music. MJ, an aspiring singer/songwriter, agreed with that point: Honestly, I look at it as a family brand, like everybody, everybody in there from different hoods, everybody in there from like come from different backgrounds. This group is like Reddit. Like, any time I need to do something, it's my Reddit or YouTube. It's like kind of a live blog or somethin' for me, so I can see what these people's experiences are, and see if I can like, do somethin' like it, or if I should try to replicate it, or just filter it out.
MJ's comment about having people from "different hoods and backgrounds" providing a "Reddit-like" resource shows that this spatial unbounding allowed participants to more readily come together to make music and beats and to select opportunities, not only for themselves but with each other. As MJ's comments indicate, Dreamer students often referenced the family atmosphere as being paramount to their learning experiences in the program. Given that today's media tools and technologies offer infinite amounts of connectivity and information to draw from, students relished the fact that they could all build off each other at any point of the day, in any location that had wi-fi.
In the various testimonials above, participants repeatedly used ClubHouse conversations to engage in Connected Learning. They were involved in a process that asked them to consider what issues were of importance, beyond themselves, to establish a meaningful ecosystem for young Chicago creatives from low-income communities of color. As Black (2006) and Jenkins (2007) have argued, digital cultures like these provide support systems that help youth improve their core competencies as readers and writers of new literacies. For example, through video blogs or live streaming, young people receive feedback on their music and gain experience in communicating with a larger public, experiences that might once have been restricted to those with access to live concert venues or high-level commercial recording studios.
To that point, DJ Gemini further detailed the importance of these virtual meetups and how they allowed for low barriers to entry into networking opportunities: Our Clubhouse channel is where iron sharpens iron. I had like 2,000 followers at one point and I think many booking agents perceived me to be local and not as skilled as I actually am as a DJ. Members of our Clubhouse looked at my IG and suggested I start from scratch and rebrand online. I deleted all of my posts that night and bought like 5,000 followers. From that point on I started putting my logo as a watermark on all of my pictures, I streamed me at home spinning different mixes and posted edited recap videos of all the events I did. My skill level as a DJ is the same but the perception of those skills is now different.
As Hip-Hop's origins are in America's low-income urban communities of color, the work of young women like Gemini is often pursued as aspirational labor (Duffy 2018), with the hopes of creative acclaim, recognition, and financial rewards. Given this context, the youth of Dreamer used their artistic practices to harness the power of clout to articulate a sense of self and establish a public reputation on their own terms. In Gemini's case, that meant learning to financially invest in the logics of social media platforms (purchasing followers, creating a logo, and professionally editing content) in order to project a level of established presence to new audience members. Through peer dialogue, she came to understand that her impression management demanded treating her social media presence as something as serious as her investment in studio time, and she received specific directives to make personal improvements. In sum, these conversations on ClubHouse allowed her to compete, collaborate, and connect within the larger Hip-Hop community of cultural producers and to build a creative economy for her potential career pathway.
During one Clubhouse conversation held during the COVID-19 social lockdown, Antoine elaborated on why having creative conversations online were often more helpful than the conversations he was able to have in the studio: It's like, alright, you know, we're gonna talk about like, career-wise, what moves you can make. How to improve your music, what you maybe can help in these portions of it. Like, if I be like, "Man, you could really put some live instrumentation here." Or, "I like the way you mixed this song. Maybe lay off on the lows a little bit. Maybe to help this stand out some more..."'Just knowing someone has your back 24/7. It gives you the freedom to experiment and still have honest and safe dialogue of what another person might think, good or bad.
As Gemini, Antoine, and others depicted in this section, digital spaces greatly empowered those in the Dreamer community by offering a sounding board unbounded from geography. This is not to say that everyone in the community was thriving because of these conversations; very few students actually had the tangible successes of someone like Meechie or Gemini. However, through their participation in the Dreamer social media ecology, interviewees felt they had a trusted resource through which they could express their concerns and draw inspiration. In the end, these emerging artists expressed that while the Internet has opened opportunities for their work, it has also created an overcrowded attention economy that is nearly impossible to break through by oneself. As such, interviewees pointed to Dreamer's social media ecosystem as a vital source of professional information, creative community, and social support.
Discussion and Implications for Digital Ethnography
Similar to cafes and barbershops, the creativity within Hip-Hop recording studios builds connectivity within the communities in which they are nestled. The findings of this study suggest Dreamer Studio helped its participants pursue their career aspirations through direct experience with online publishing, social networking, and collective action. Their community of practice did this by employing three critical elements: (1) enhancing social affiliations through corralling as a pod, (2) collaborative problem solving for creative works, and (3) DIY circulation of media for external audiences. By facilitating students' creative work that they truly care about, the Dreamer Studio collective simultaneously exposes learning equity gaps and gives voice and agency to Black youth who live within those gaps. Participants didn't necessarily come to the studio in hopes of a record deal; they reported that they came to discover how to shape their relationships and skill sets. The participants also expressed that the studio's community of practice showed them how to use social media platforms to deepen their passions, supportive relationships, and access to opportunities. Though it is uncertain what the professional future will hold for these emerging musicians (signing record deals and becoming famous were the aspirations of many), Dreamer Studio was certainly helping prepare them for their futures by cultivating a community that can carry each other through the challenges and stresses of connecting their creative works to a larger audience in the crowded and technology-driven marketplace for attention.
Evidence from this study suggests that access to social media ecologies affiliated with creativity is just as useful to youth as their in-person creative experiences at a studio. These findings suggest that young people can use social media to join active social communities, hone skills related to their personal interests, and develop their creative career aspirations, all hallmarks of Connected Learning. Although social media technologies and practices of media production are moving at a rate that outpaces empirical understanding, this study carries major implications for understanding how young people of color in low-income communities utilize participatory media. Programs like Dreamer Studio suggest that the future of the music industry will be influenced by a more diverse range of voices from young people who will demand professional inclusion via DIY tactics. Because entry into virtual communities is often cheaper and less burdensome than gaining access to opportunities in the physical world, Black youth can use Connected Learning to cultivate a community around creative work that might otherwise be marginalized or ignored. Further understanding the transitions of these youth into creative work is of the utmost importance, but it appears that Connected Learning is impacting Black creative youth of Hip-Hop in ways that are unique to their shared racial identity.
Finally, the findings of this study also have great implications for the methodology of ethnography. For some time, digital ethnography and urban ethnography have been seen as separate but important ways of investigating the social lives of young people. The work I have conducted at Dreamer Studio reveals that, now more than ever, the investigation of urban life is made more holistic by also considering life in the digital realm, and vice versa. Digital media is ubiquitous in the lives of young people, and social media platforms are a vital part of how they communicate with each other, express their identities, and develop meaningful connections in the physical world. The participants at Dreamer Studio used the studio as a social site to develop know-how and experience in a professional working environment with like-minded individuals. Even so, the creative pursuits of these young artists were not limited to their time spent within the physical space of the studio. By attending the various virtual hangout sessions on Clubhouse and tuning in to the official livestreams on both Twitch and Instagram, another dimension of this artistic community of practice was revealed to me, deepening my understanding of their world. This study illustrates that in conducting ethnographies of culture, particularly those that center on the worlds of artistic Black youth, it is imperative that we look into their social media engagement not only as a function of labor or resistance but also as an extension of their personal identity, civic engagement, and socioemotional support, which is directly related to re-defining an understanding of Connected Learning in career pathway development. By paying attention to race, class, and geography, among other things, the students of Dreamer exemplify how digital ethnographic methods provide contextual value and nuance to contemporary life.
Development and validation of a high‐sensitivity assay for measuring p217+tau in plasma
Abstract Introduction Diagnosis of Alzheimer's disease (AD) based on amyloid beta (A), pathologic tau (T), and neurodegeneration (N) biomarkers in peripheral fluids promises to accelerate clinical trials and intercept disease earlier. Methods Qualification of a Simoa plasma p217+tau assay was performed, followed by clinical utility evaluation in a cohort of 227 subjects with a broad A and T spectrum. Results The p217+tau plasma assay was accurate, precise, dilution linear, and highly sensitive. All measured samples were within the linear range of the assay, presented higher concentration in AD versus healthy controls (P < .0001), and plasma and cerebrospinal fluid levels correlated (r2 = 0.35). The plasma p217+tau results were predictive of central T and A status (area under the curve = 0.90 and 0.90, respectively) with low false positive/negative rates. Discussion The assay described here exhibits good technical performance and shows potential as a highly accurate peripheral biomarker for A or T status in AD and cognitively normal subjects.
Manufacturing of the radiotracers is challenging. Cerebrospinal fluid (CSF) phosphorylated tau (p-tau) and neurofilament light chain (NfL) are commonly used biomarkers for T and N, respectively, and they may complement tau PET and magnetic resonance imaging (MRI). 8,9 However, these tests require lumbar punctures, which complicates testing.
Less invasive and expensive measures for ATN promise to dramatically increase screening and staging of AD, and as such many blood-based amyloid, tau, and NfL assays have been reported, with varying levels of clinical relevance. It is often unclear which particular form of amyloid, tau, or NfL is the best analyte in blood; a logical approach is to apply lessons learned from CSF and then explore plasma or serum assays targeting that form of the analyte.
Tau has > 30 potential phosphorylation sites, and it is not clear which epitope might be the most reflective of disease stage. 10 The most commonly used form is p181tau, perhaps due to its relatively high abundance, yet current assays show only a modest dynamic range from healthy controls (HC) to AD (3-5x) and do not show p181tau elevation until some years after amyloid PET levels begin to increase. [11][12][13] Mass spectrometry (MS)-based methods to scan tau phosphorylation intensity across the AD spectrum have suggested other sites may be further upregulated in disease, including phosphorylation of threonines 212 and 217. [14][15][16][17] Enzyme-linked immunosorbent assay (ELISA) methods to measure p217tau in CSF have been recently reported, confirming that this tau species has a larger percent increase in AD than p181tau, may be perturbed earlier in the disease process, and is particularly specific to AD versus other neurodegenerative conditions. [18][19][20] In particular the CSF p217+tau Simoa assay, which uses the p217+tau-targeted monoclonal antibody (mAb) PT3, paired with HT43 mAb (anti-N-terminal tau), is highly sensitive, precise, accurate, dilution linear, and revealed robust increases in AD subjects. 19 Here we show that our CSF p217+tau assay in its original form is not suitable for use in plasma, due to substantial matrix interference.
A new assay was thus developed to increase sensitivity and stringency, followed by technical validation. The new plasma p217+tau assay measured all samples, HC to AD, in the linear range and with good precision and dilution linearity, indicating it is fit for purpose. Interestingly, the signal is higher in plasma than serum and reveals comparable tau fragmentation to that in CSF. The plasma p217+tau signal is higher in AD than HC, correlates moderately with CSF p217+tau, and predicts CSF A and T status with high accuracy. The work presented here suggests that the plasma p217+tau assay is useful for prescreening subjects for AD clinical trials and may even serve as a peripheral pharmacodynamic marker assay.
Human biosamples
Matching plasma, serum, and CSF samples for the pilot studies, and matching plasma and CSF samples from 36 subjects for the plasma extraction and denaturing study (Figure 10), were obtained from PrecisionMed (demographics shown in Table S3 in supporting information) with informed consent for use in assay development.
CSF samples for identifying a CSF p217+tau cutoff for central T positivity (Figure S1 in supporting information) were obtained from Janssen clinical trial ELN11572301/302 (demographics shown in Table S4 in supporting information) with informed consent for use in study and development of biomarker assays for AD.

FIGURE 1. Cerebrospinal fluid (CSF) p217+tau assay fails technical validation in plasma, due to interference and insufficient sensitivity. A panel of five healthy control (HC) and five Alzheimer's disease (AD) plasma samples was tested at 1:4 (A) and 1:16 (B) dilution in the CSF p217+tau "short" assay, revealing most samples were < lower limit of quantitation (LLOQ; average enzyme per bead [AEB] = signal = 0.04). Outlier high-signal samples at 1:4 dilution were reduced to < LLOQ at 1:16 dilution, indicating lack of dilution linearity. Denaturing plasma via the acid/heat technique also reduced all outlier high signal (C). Immunoprecipitation of plasma with anti-p217+tau monoclonal antibodies (mAb) prior to measurement with the CSF p217+tau assay reveals higher signal in all AD samples (D).

FIGURE 2. Three-step p217+tau assay abolishes artifact signal in blood. A, A panel of 96 cerebrospinal fluid (CSF) samples was measured with the two-step and three-step p217+tau assays, revealing tight correlation (r2 = 0.939). B, A panel of 10 serum samples was measured with the two-step and three-step p217+tau assays, revealing outlier high signal in at least four samples in the two-step assay but not in the three-step assay.
All CSF samples were from lumbar collections and stored in 0.5 or 2 mL polypropylene tubes at <−70 °C until use. All plasma was from K2EDTA collection tubes and was stored in 0.5 or 2 mL polypropylene tubes at <−70 °C until use. All serum was stored in 0.5 or 2 mL polypropylene tubes at <−70 °C until use.
Plasma tau extraction method
Plasma was mixed with 2.1% perchloric acid (1:4 dilution; hence 1 mL plasma + 3 mL perchloric acid) and incubated for 30 minutes with rocking at room temperature. Samples were centrifuged for 10 minutes at 30,000 x g, and supernatants were run through a C18 SepPak. The filters were washed with 1% TFA in water, and tau was then eluted with 1% TFA in 70% ethanol. The eluates were made to 30 mM NaCl and taken to dryness in a vacuum concentrator (Explorer Savant, Thermo Fisher). The dried fractions were re-suspended in plasma p217+tau assay buffer and measured with the p217+tau "short" assay.
Plasma heat denaturing method
Plasma was mixed with 0.2 M NaOAc (1:3 dilution; hence 150 µL plasma + 300 µL NaOAc), then heated at 95 °C with 400 rpm mixing for 5 minutes in a Thermomixer (Eppendorf), followed by chilling on wet ice for 10 minutes. Samples were then centrifuged for 10 minutes at 14,000 x g. Supernatant was collected and neutralized with Tris base (7% by volume) before measurement at 1:2 dilution in the plasma p217+tau assay.
CSF p217+tau assay fails technical validation in plasma, due to matrix interference and insufficient sensitivity
The assay developed for measuring p217+tau in CSF 21 was used to measure signal in a panel of five AD and five HC plasma samples. Samples were measured at 1:4 and 1:16 dilution in crude plasma. At 1:4 dilution, 6/10 samples measured at or below the lower limit of quantitation (LLOQ; signal/noise > 2 and coefficient of variation [CV] < 20%); the other four samples presented markedly higher signal (11-745x above LLOQ). At 1:16 dilution, all samples measured below LLOQ, indicating that the high signal at 1:4 dilution in the 4/10 subjects noted above was substantially non-linear and thus can be considered artifact from the plasma matrix (Figure 1A,B). In comparison, dilution of CSF from 1:4 to 1:128 yields dilution-linear measurements. 19 Plasma was also measured after the acid/heat denaturing technique 23 and compared to the 1:4 crude plasma signal. Tau in general, and CSF p217+tau in particular, are known to be heat stable, 19,24 yet most interfering substances are not. As seen for total tau in d'Abramo et al., 23 the high p217+tau signal in the 4/10 plasma samples was eliminated after denaturing, again indicating this signal is not true tau signal (Figure 1C).
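The dilution-linearity logic above can be captured in a few lines of code. The following is a minimal illustrative sketch only (Python), not the authors' pipeline: the sample values and the 20% agreement tolerance are invented; the paper itself reports the qualitative collapse of artifact signal rather than a specific numeric rule.

```python
# Illustrative sketch (not the authors' pipeline): flag plasma reads whose
# dilution-corrected concentrations disagree between 1:4 and 1:16 dilution.
# Sample values and the 20% tolerance are hypothetical.

def dilution_corrected(measured_pg_ml: float, dilution_factor: int) -> float:
    """Back-calculate the neat-sample concentration from a diluted read."""
    return measured_pg_ml * dilution_factor

def is_dilution_linear(read_1to4: float, read_1to16: float,
                       tol: float = 0.20) -> bool:
    """True if the two dilution-corrected values agree within `tol`."""
    c4 = dilution_corrected(read_1to4, 4)
    c16 = dilution_corrected(read_1to16, 16)
    return abs(c4 - c16) / c4 <= tol

# Artifact-like sample: strong signal at 1:4 that collapses at 1:16.
print(is_dilution_linear(5.0, 0.05))   # 20.0 vs 0.8 pg/mL -> False
# Well-behaved sample: corrected values agree across dilutions.
print(is_dilution_linear(1.0, 0.25))   # 4.0 vs 4.0 pg/mL -> True
```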
Intriguingly, both the 1:16 crude plasma data and the acid/heat-denatured plasma data indicated higher signal in AD versus HC, suggesting that steps to eliminate matrix interference can reveal biomarker-relevant p217+tau signal in plasma. However, poor sensitivity precludes using these techniques.
Immunoprecipitation (IP) of p217+tau signal on beads, elution via acid/heat, and then measurement in the Simoa CSF p217+tau assay revealed signal above the LLOQ in nearly all samples, and again higher signal in AD versus HC (Figure 1D). These results indicated that purification and concentration of the plasma p217+tau signal could yield a useful plasma p217+tau assay.
Three-step p217+tau assay abolishes artifact signal in blood
To eliminate the laborious and imprecise manual IP step, the three-step protocol in Simoa was used. The CSF p217+tau assay uses a two-step protocol (Step 1 = capture mAb reagent, sample, and detection mAb reagent are mixed then washed; Step 2 = mAb-sample-mAb complex and streptavidin-beta-galactosidase are mixed). The three-step protocol is similar but introduces a wash before incubating with the detection reagent; thus, in essence, the first step is an IP. The incubation time and sample volume input for this first step were also increased (original method = 50 µL plasma, incubated with capture beads for 15 minutes; final method = 172 µL plasma, incubated for 35 minutes) to allow maximal capture of signal. A panel of 96 CSF samples was first tested with both the two-step and the three-step protocol, revealing high correlation (r2 = 0.94, slope = 0.8) between the two measurements (Figure 2A). A panel of 10 serum samples was measured with both the two-step and three-step protocols, revealing that the three-step method dramatically reduced or even eliminated the high p217+tau signal seen in 6/10 subjects with the two-step method (Figure 2B). This again indicates that there is negligible matrix interference in CSF, but substantial positive interference in blood products. The three-step method, with the time and volume improvements, was thus selected as the blood-based p217+tau assay.
Custom sample diluent further reduces matrix interference
The composition of the sample diluent was evaluated to add further stringency to the measurement. The impact of buffer type (PBS vs. Tris), NaCl concentration, and presence of heterophilic blocker (e.g., blocking human anti-mouse interactions) was measured in terms of effect on artifact serum p217+tau signal and on the number of beads detected at the end of the ELISA method. A low number of beads is often indicative of bead clumping and was confirmed here via white light imaging (data not shown). A panel of three serum samples was measured, alongside calibrant peptide/diluent as negative control, with eight custom diluent recipes and the Simoa homebrew diluent.
Use of the Simoa homebrew diluent yielded lower bead counts in serum versus calibrant/diluent (1688-2921 vs. 5378). All eight custom diluents improved the bead counts in serum and substantially reduced the artifact signal seen in two serum samples with the Simoa homebrew diluent (Figure 3). Particular improvement was seen when using Tris buffer, lower NaCl concentration, and heterophilic blocker; thus these were selected for the final sample diluent composition.
p217+tau fragmentation in blood may be of similar extent to that in CSF
CSF tau is known to be highly fragmented, and the relative biomarker significance of these unique fragments is not known. 25,26 As such, the CSF p217+tau assay was developed in two forms, based on the choice of detection mAb, PT82 (targets mid-region) or HT43 (anti-N-terminus), to study fragmentation patterns. Use of the former reagent measures any p217+tau containing at least the mid-region (aa 119-220) and is thus termed p217+tau "short." Use of the latter reagent measures p217+tau species extending further toward the N-terminus (aa 7-220), and is thus termed p217+tau "long" (Figure 4A). The p217+tau "short" assay should detect all the same tau molecules as p217+tau "long" and additional shorter forms as well.
Because blood contains many more proteases than CSF, we wondered if plasma p217+tau might be more fragmented in the periphery.
To characterize the nature of the blood p217+tau signal, we measured a panel of serum, plasma, or CSF samples from 18 AD subjects using the two forms of the p217+tau assay. The CSF revealed good correlation between the two assays (r2 = 0.8247) and ~2x higher concentrations with the p217+tau "short" assay (slope = 2.83). The serum and plasma samples revealed a similar pattern (r2 = 0.9541 and 0.8479; slope = 2.41 and 2.05 for serum and plasma, respectively), indicating there is not greater fragmentation between the HT43 and PT82 epitopes (aa 20-119) in blood products (Figure 4B,D). Despite the p217+tau "short" assay reporting higher concentrations, the p217+tau "long" assay was selected as the final plasma p217+tau assay due to its superior sensitivity (owing to lower background signal).
p217+tau is found at ~2x higher levels in plasma versus serum
To evaluate if the choice of blood product might impact the p217+tau measurement, a panel of 26 matching serum and plasma samples was measured. Both assays revealed higher signal in subjects with mild-to-moderate dementia (n = 16) versus healthy controls (n = 10). However, all the HCs measured < LLOQ in serum, while nearly all (9/10) measured in the linear range with plasma (Figure 5A,B). The plasma and serum p217+tau concentration measures correlated well (r2 = 0.82); however, the plasma measurements were on average ~1.9x higher than those in serum (Figure 5C). Therefore, to further enhance the sensitivity of the blood p217+tau assay, the matrix of choice was selected as plasma.

FIGURE 3. The diluent impact on (A) bead count and (B) signal was measured, revealing that all custom buffers rescued the low bead count and outlier high signal seen with the Simoa homebrew buffer in serum. The ideal diluents were Tris based with HBR9 included. AEB = signal.

FIGURE 4. p217+tau fragmentation in blood may be of similar extent to that in cerebrospinal fluid (CSF). A, Schematic showing epitopes of the Abs used in the p217+tau "short" (PT3xPT82) and p217+tau "long" (PT3xHT43) assays. A set of matching (B) CSF, (C) serum, and (D) plasma from 18 subjects with mild-to-moderate dementia was measured with the p217+tau "short" and p217+tau "long" assays. In each sample type the p217+tau "short" assay reported ≈2x the concentration of the p217+tau "long" assay, indicating the fragmentation level between these epitopes may be similar in these three sample types.

FIGURE 5. p217+tau found at ≈2x higher levels in plasma versus serum. A set of matching (A) serum and (B) plasma from 10 healthy controls and 16 mild-to-moderate dementia subjects was measured with the p217+tau assay. In each sample type, significantly higher average concentration was seen in mild-to-moderate dementia subjects versus healthy controls. However, the plasma samples reported ~2x the concentration of the serum samples (C).
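For readers who want to reproduce this kind of matched-sample comparison, a minimal sketch follows (Python/SciPy). The paired values are invented stand-ins, since the underlying data are not reproduced here; ordinary least-squares regression recovers a slope and r2 analogous to those reported above.

```python
# Minimal sketch, not the authors' analysis: OLS slope and r^2 between
# matched serum and plasma p217+tau values. The numbers below are invented;
# the paper reports r^2 = 0.82 and a plasma:serum ratio of ~1.9.
from scipy import stats

serum  = [0.10, 0.22, 0.35, 0.48, 0.60, 0.81]   # pg/mL, hypothetical
plasma = [0.21, 0.40, 0.69, 0.90, 1.15, 1.58]   # pg/mL, hypothetical

fit = stats.linregress(serum, plasma)
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.2f}")
```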
Plasma p217+tau assay has good dynamic range, sensitivity, dilution linearity, and inter-/intra-run precision

To assess linear range, a calibrant curve spanning 0.005 to 30 pg/mL was measured over five runs. The curves aligned well and reported increasing signal across the entire range, but occasionally saturated at the top point, suggesting 0.005 to 10 pg/mL as the dynamic range for the method (Figure 6A). To evaluate dilution linearity, a panel of three AD serum samples and one AD plasma sample was tested at 1:2, 1:3, 1:4, 1:5, and 1:6 dilution. Dilution-corrected concentrations were all within 20% of the 1:3 dilution. As such, 1:2 dilution was selected as ideal, to aid in sensitivity (Figure 6B). To evaluate inter-run precision, a panel of five quality control (QC) samples was made and aliquoted: an AD plasma pool, an AD serum pool, and three concentrations of calibrant peptide. The panel was measured over five runs, revealing good inter-run precision (5-15% CV; Figure 7A). To evaluate sensitivity and precision in a typical sample analysis, a large cohort of plasma samples (n = 227 subjects, the validation cohort; 69% mild-to-moderate dementia subjects, 31% cognitively normal subjects) was measured in quadruplicate. All samples were detected (presented signal > limit of detection) and with acceptable precision (< 25% CV, mean CV = 7.1%). In fact, 223/227 samples (98.2%) presented with < 20% CV. To better establish an LLOQ, a cutoff of 37 fg/mL was set based on the point below which the plasma measurements were more likely to present > 20% CV; 94.7% of all samples measured above this LLOQ (Figure 7B).
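As a concrete illustration of the replicate bookkeeping described above, the short Python sketch below computes percent CV over quadruplicate wells and applies the 37 fg/mL LLOQ cutoff; the replicate values are invented for illustration, not taken from the cohort.

```python
# Hypothetical sketch of the precision bookkeeping described in the text:
# percent CV across quadruplicate wells, plus the 37 fg/mL LLOQ flag.
import statistics

LLOQ_FG_ML = 37.0  # cutoff below which >20% CV became likely (see text)

def percent_cv(replicates: list[float]) -> float:
    """CV (%) = 100 * sample SD / mean over replicate measurements."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def classify(replicates: list[float]) -> str:
    mean = statistics.mean(replicates)
    cv = percent_cv(replicates)
    status = "below LLOQ" if mean < LLOQ_FG_ML else "quantifiable"
    return f"{status} (mean = {mean:.1f} fg/mL, CV = {cv:.1f}%)"

print(classify([120.0, 131.0, 118.0, 125.0]))  # invented AD-range sample
print(classify([30.0, 41.0, 25.0, 36.0]))      # invented near-LLOQ sample
```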
Plasma p217+tau concentration correlates moderately with CSF p217+tau in amyloid-positive subjects

To compare plasma and CSF levels, matching CSF and plasma from a pilot cohort of 16 amyloid-positive subjects with dementia were measured with the respective p217+tau assays, revealing a moderate positive correlation (r2 = 0.43; Figure 8A). Next, CSF p217+tau from the validation cohort described in Figure 7B was compared with the plasma measurements, confirming the moderate positive correlation (r2 = 0.35; Figure 8B). Notably, this correlation was only seen in amyloid-positive subjects (r2 = 0.27 vs. 0.01 for A+ vs. A− subjects; Figure 8C,D).
Plasma p217+tau assay predicts CSF tau status with high accuracy
To evaluate the utility of the plasma p217+tau assay to predict central tau status, we used CSF p181tau as the standard of truth. In the ATN framework, Innotest p181tau can be used as the T assay, with a p181tau concentration of ≥52 pg/mL as the T-positive cutoff. 27,28 To establish a CSF p217+tau cutoff, CSF from a cohort of 286 samples (89% A+/mild-to-moderate dementia patients) was measured with the p181tau and p217+tau assays, determining that Innotest p181tau of 52 pg/mL corresponds to Janssen p217+tau "long" of 11.4 pg/mL (Figure S1).
FIGURE 8. Plasma p217+tau correlates moderately with cerebrospinal fluid (CSF) p217+tau. A, Matching CSF and plasma from a pilot cohort of 16 amyloid-positive subjects with dementia were measured with the respective p217+tau assays, revealing a moderate positive correlation (r2 = 0.43). B, Matching CSF and plasma from the validation cohort (n = 227 subjects; 69% mild-to-moderate dementia, 31% cognitively normal) were measured with the respective p217+tau assays, confirming the moderate positive correlation (r2 = 0.35). The validation cohort data were separated into subjects that were amyloid (C) positive (n = 160) or (D) negative (n = 50), revealing that the CSF:plasma correlation was only seen in amyloid-positive subjects (r2 = 0.27 vs. 0.01 for A+ vs. A− subjects). CSF amyloid data were not available for the remaining 17 subjects in the validation cohort, so they are not present in panels C or D.

Using CSF p217+tau of 11.4 pg/mL as the standard-of-truth cutoff for tau positivity in the validation cohort (Figure 8B), receiver operating characteristic (ROC) analysis showed that the plasma p217+tau data predicted central tau status with high accuracy (AUC = 0.9419), and Youden index analysis identified an optimal plasma cutoff of 124.6 fg/mL, achieving low false positive (10%) and false negative (2%) rates (Figure 9).
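A minimal sketch of this ROC/Youden workflow, assuming scikit-learn and entirely synthetic labels and concentrations (the real inputs would be the CSF-defined T status and the measured plasma values), is shown below.

```python
# Minimal sketch of the ROC/Youden workflow described above, using
# scikit-learn. The arrays are stand-ins: y_true would be CSF p217+tau
# status at the 11.4 pg/mL cutoff, scores the plasma p217+tau values.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 80 + [1] * 120)                  # hypothetical T-/T+ labels
scores = np.concatenate([rng.normal(90, 25, 80),         # fg/mL, T-negative
                         rng.normal(220, 60, 120)])      # fg/mL, T-positive

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_j = tpr - fpr                                     # Youden index per threshold
best = np.argmax(youden_j)
print(f"AUC = {auc:.3f}, optimal cutoff = {thresholds[best]:.1f} fg/mL")
print(f"false positive rate = {fpr[best]:.2%}, false negative rate = {1 - tpr[best]:.2%}")
```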
Plasma:CSF p217+tau correlation is not improved by biochemical purification
To better understand why the CSF:plasma p217+tau correlation is only moderate (r2 = 0.35, Figure 8B), we again measured a cohort of matching CSF and plasma samples (n = 36), but included a set of the plasmas that had tau chemically extracted and a set of the plasmas that had most non-tau analytes denatured by heat. If the correlation from crude plasma were based on matrix interference, it would be likely that at least one of these biochemical purification techniques would improve the linear correlation coefficient. Instead, the plasma p217+tau to CSF p217+tau correlations were very similar among these three different plasma measurements (r2 = 0.6418, 0.6748, and 0.5484 for measurements from crude plasma, extracted plasma, and denatured plasma, respectively; Figure S2 in supporting information).

FIGURE 9. Plasma p217+tau predicts central tau positivity with high accuracy. A, Using a cerebrospinal fluid (CSF) p217+tau cutoff of 11.4 pg/mL for central tau positivity (based on equivalency to the Innotest p181tau cutoff of 52 pg/mL; Figure S1), the data in Figure 5B were analyzed by receiver operating characteristic (ROC) curve, revealing that the plasma p217+tau data predicted central tau status with high accuracy (area under the curve [AUC] = 0.9419). B, Youden index analysis of the ROC curve resulted in an optimal plasma p217+tau cutoff point of 124.6 fg/mL. This cutoff achieved low false positive (10%) and false negative (2%) rates for predicting CSF p217+tau status. The AUC and false positive/negative rates were similar when focusing on subjects with dementia (n = 157) (E,F) versus cognitively normal subjects (n = 70) (C,D).
DISCUSSION
AD is a devastating condition that has eluded therapeutic intervention attempts for many decades and is predicted to grow in prevalence. One widely held explanation for the numerous clinical trial failures is that earlier intervention in the disease, ideally well before clinical symptoms present, is needed. However, objectively identifying these subjects is quite difficult. PET imaging and CSF measures of amyloid and tau have recently yielded assays and cutpoints for this identification, but the process is expensive and invasive, and thereby not efficient for routine trial enrollment. Peripheral measures of amyloid and tau are thus needed to enable early interception of this disease.
CSF p-tau is believed to be one of the best fluid-based staging markers of AD, [8][9][10][11][12][13] and in particular p217tau has emerged as one of the most AD-specific and -enriched species. 14-17 Indeed, measurement of CSF p217tau by MSD or Simoa methods is sufficiently sensitive to detect the analyte even in healthy controls, and revealed good correlation with tau PET, amyloid PET, and brain volume. [18][19][20] Translation of these assays to blood has not been straightforward, however, due to even lower levels of p217tau in the periphery and added matrix interference in blood products. Indeed, early efforts at measuring tau in blood indicated substantial artifact signal and lack of clinical utility. 23 Here we describe a highly sensitive, quantitative, robust, and scalable method for measuring p217+tau in human plasma. The method has a dynamic range of 2 to 10,000 fg/mL and reports all plasma samples (including healthy controls) in this linear range. The method has good inter- and intra-test precision (< 20%) and dilution linearity in the range of quantifiable signal (2-6x). The assay uses the Simoa HD-1/X platform, which has become widely adopted at academic labs and contract research organizations globally, and is thus easily transferred to other sites and fully automated to minimize labor and imprecision.
The assay relies on the high affinity and specificity of the PT3 mAb, which recognizes tau phosphorylated at threonine residues 212 and 217. 22 Phosphorylation of amino acid 217 is the minimally required epitope, yet phosphorylation of amino acid 212 enhances binding, and thus the recognized species is termed "p217+tau." As described for the CSF p217+tau assay, measurement of species containing both p212 and p217 may yield additional diagnostic relevance beyond measurement of just one of these residues. 19 Indeed, while liquid chromatography/mass spectrometry (LC/MS) studies of paired helical filament (PHF) tau revealed monophosphorylated peptides containing pT217 alone, pT212 was only found in peptides that also contained phosphorylation of S214 or T217. 14 Multiphosphorylated tau has been hypothesized to be more pathological, and indeed pT212/pS214 dual-phosphorylated tau is the epitope of the "AD-specific" AT100 mAb. 33 As such, it is possible that recognition of the dual epitope pT212/pT217 may afford even greater AD specificity than pT217 alone.
Two assays for measuring p217+tau were developed, differing only in their detection antibody, to study potential differences in the diagnostic potential of p217+tau fragments of varying lengths. Because both assays (p217+tau "short" and p217+tau "long") reveal nearly identical performance and diagnostic potential, the core reagent PT3 appears to be the key element of the assay's power. While the p217+tau "short" (PT3xPT82) assay recognizes more tau fragments than the p217+tau "long" (PT3xHT43) assay, the latter is more sensitive and so is the recommended format for the plasma p217+tau assay.
Several reports have shown that CSF tau is highly fragmented, mostly via loss of the C-terminus. Predominantly observed species are comprised of N-terminal, mid-region, and select microtubule binding region (MTBR) epitopes, and additional major cleavage sites are likely in the region from aa 70-120 and at ~aa 224. 15,26,34,35,36 Fractionation of CSF prior to assay has shown that CSF p217+tau is also entirely of fragmented nature, lacking the C-terminus and potentially also cleaved between aa 7-119. 19 Here we show, by comparing the p217+tau "short" and p217+tau "long" assay results, that plasma p217+tau may present a similar level of fragmentation in the aa 7-119 region as in CSF. This finding matches more detailed evaluation of plasma tau using mass spectrometry. 37 This is relevant because there is more protease activity in blood than in CSF, and thus understanding the nature of the signal in blood may be relevant for optimal accuracy and diagnostic potential. Unexpectedly, it was identified that the p217+tau assay reports 2x higher concentration in ethylenediaminetetraacetic acid (EDTA) plasma versus serum. This finding may reflect proteolysis occurring during the clotting process but should be studied in future experiments.
Comparison to lithium heparin and anticoagulant citrate dextrose plasma should be evaluated as well. Therefore, the currently suggested sample type for the peripheral p217+tau assay is K2EDTA plasma.
Clinical relevance of the plasma p217+tau assay for determining central tau positivity was shown via comparison to CSF p217+tau and p181tau. The latter assay is widely used as a marker of tau pathology in the ATN framework, and comparison to a cutoff of 52 pg/mL p181tau (Innotest) revealed that the plasma p217+tau data could predict central T status with very high accuracy (AUC = 0.9419), in line with a recent report on an MSD-based p217tau assay. 38,39 The predictive power was similar in subjects with normal cognition or mild-to-moderate dementia. In addition, the plasma p217+tau assay predicted central amyloid positivity, as determined by CSF Aβ42/40 ratio, with very high accuracy (AUC = 0.8964), in line with prior reports with MSD or LC/MS methods. 37,38 As such, the Simoa plasma p217+tau assay described here could be used as a prescreening tool for AD trial enrollment requiring amyloid or tau positivity; indeed, the plasma p217+tau assay had very low false positive and false negative rates (10% and 2%, respectively, for T positivity; 1% and 18%, respectively, for A positivity).
Additional study is recommended to confirm these findings and determine ideal cutoffs in various patient populations (e.g., asymptomatic vs. prodromal), particularly via comparison to PET imaging. However, at this point the data presented here and with the MSD p217tau assay indicate that measurement with either of these assays can serve as an amyloid and tau screening tool, to be confirmed with CSF or PET measures.
In addition to predicting CSF T status in a binary fashion, plasma p217+tau shows moderate linear correlation with CSF p217+tau (r2 = 0.35). Interestingly, this correlation is only seen in amyloid-positive subjects, in line with the highly AD-specific nature of the p217 epitope in CSF. 15,16,19 While CSF p217+tau has been shown to correlate well with amyloid and tau status, it is unknown whether its concentrations are driven by amyloid or tau pathology. Lack of concordance between CSF and plasma p217+tau in amyloid-negative subjects might reflect a lack of pathological regulation of its expression in these subjects, or could simply be due to limited assay performance in these lowest-concentration plasma samples. Further study is needed to delineate which pathology is responsible for the plasma p217+tau levels.
If indeed peripheral p217+tau is of central nervous system (CNS) origin, it would be important to understand whether its transport is passive (e.g., diffusion) or active (e.g., via molecular transporters), and whether this transport is impacted across the AD spectrum, as the blood-brain barrier (BBB) is thought to be compromised in some AD cases. 40 Notably, biochemical purification of plasma did not tighten the plasma:CSF relationship (Figure S2), suggesting that the lack of tighter correlation between the CSF and plasma p217+tau measurements may not be due to matrix interference in the latter. It is possible that peripheral p-tau is not entirely of CNS origin, or is further modified in the periphery. Further study on the nature of peripheral p-tau is warranted.
To our knowledge, the plasma p217+tau assay on the Simoa HD-X platform presented here is one of only two reported immunoassays targeting this epitope in plasma, the other being the MSD p217tau assay first reported in 2020. 38,39 A direct comparison of these two assays has not been performed on the same sample set, yet some predictions can be made based on the data in these two papers. (1) The MSD assay targets p217 selectively, while the Simoa assay presented here targets p217 and surrounding phosphorylation events, which could yield a more pathologically relevant readout (see above). Notably, plasma p217tau is present at ~1% to 2% of the levels of CSF p217tau. Finally, plasma p217tau is significantly elevated in AD versus HC, suggesting the ability to predict central amyloid status as well. The concordance of our findings with those of the alternate p217tau assay confirms the relevance of this blood-based biomarker for AD. Additional study of both assays is recommended, ideally in large sample cohorts tested with both assays.
The full diagnostic and staging potential of the plasma p217+tau assay described here is still unknown. Whether p217+tau is AD-specific or also found in other tauopathies should be evaluated further; however, data from the MSD p217tau assay suggest the signal is especially enriched in amyloid-positive subjects, even when looking among subjects with dementia. Indeed, in CSF other tau assays (e.g., t-tau, p181tau) do not show elevated signal in several non-AD tauopathies. 44 We show here that p217+tau is substantially higher in AD versus HC, and thus might be useful as a diagnostic tool. Studies of plasma p217tau with the MSD assay, or of CSF p217+tau by LC/MS or immunoassay, suggest that this marker may be useful for staging or predicting cognitive decline as well. Evaluation of plasma p217+tau concentration dynamics across the full spectrum from healthy controls to demented AD subjects (or with long-term longitudinal sampling) is needed. Comparison of plasma p217+tau concentrations to brain tau and amyloid burden as measured by PET, and to cognitive scores, would allow correlation of this peripheral measurement to central tau levels and cognitive function. Indeed, plasma p217tau, as measured by the MSD assay or LC/MS, has been shown recently to correlate well with tau PET and amyloid PET, and to predict A and T status with superior accuracy to other blood-based measurements. 37-39 In summary, the measurement of plasma p217+tau appears to be a highly AD-specific, accurate, sensitive, and precise method for determining central amyloid and tau status in a non-invasive manner. Additional characterization and validation of the method is underway, leading toward its inclusion in the standard AD clinical trial toolkit in the near future.
Echinochrome Prevents Sulfide Catabolism-Associated Chronic Heart Failure after Myocardial Infarction in Mice
Abnormal sulfide catabolism, especially the accumulation of hydrogen sulfide (H2S) during hypoxic or inflammatory stresses, is a major cause of redox imbalance-associated cardiac dysfunction. Polyhydroxynaphthoquinone echinochrome A (Ech-A), a natural pigment of marine origin found in the shells and needles of many species of sea urchins, is a potent antioxidant and inhibits acute myocardial ferroptosis after ischemia/reperfusion, but the chronic effect of Ech-A on heart failure is unknown. Reactive sulfur species (RSS), which include catenated sulfur atoms, have been revealed as true biomolecules with high redox reactivity required for intracellular energy metabolism and signal transduction. Here, we report that continuous intraperitoneal administration of Ech-A (2.0 mg/kg/day) prevents RSS catabolism-associated chronic heart failure after myocardial infarction (MI) in mice. Ech-A prevented left ventricular (LV) systolic dysfunction and structural remodeling after MI. Fluorescence imaging revealed that the intracellular RSS level was reduced after MI, while the H2S/HS− level was increased in the LV myocardium, and both changes were attenuated by Ech-A. This result indicates that Ech-A suppresses RSS catabolism to H2S/HS− in the LV myocardium after MI. In addition, Ech-A reduced MI-induced oxidative stress. Ech-A suppressed RSS catabolism caused by hypoxia in neonatal rat cardiomyocytes and human iPS cell-derived cardiomyocytes. Ech-A also suppressed RSS catabolism caused by lipopolysaccharide stimulation in macrophages. Thus, Ech-A has the potential to improve chronic heart failure after MI, in part by preventing sulfide catabolism.
Introduction
Ischemic heart disease remains the world's leading cause of death, and its burden is expected to increase further in the near future [1]. Myocardial infarction (MI) is the primary clinical manifestation of ischemic heart disease. More than 80% of acute MIs are elicited by disruption of an atherosclerotic plaque and subsequent thrombosis, which occludes the coronary artery [2]. Irreversible myocardial ischemia triggers sarcolemmal rupture and causes myocardial cell death [3]. Prolonged loss of myocardium alters loading conditions [4] and increases ventricular wall stress. Consequently, ventricular dilation and myocyte hypertrophy induced by the renin-angiotensin system occur to compensate for the progressive decrease in contractility [5]. Eventually, MI-induced cardiac remodeling becomes a predominant cause of arrhythmia, heart failure, and death. Although the survival rate after a first infarction has increased to 90%, the long-term prognosis after MI and the problem of re-infarction remain arduous issues.
Reactive sulfur species (RSS) are defined as redox-active molecules that have catenated sulfur structures, including persulfides (RSSH), polysulfides (RSS(n)SH/RSS(n)SR), and sulfane sulfur. RSS, such as cysteine hydropersulfide (CysSSH) and glutathione persulfide (GSSH), are universally present in living organisms at micromolar levels [6]. Due to the α-effect, the nucleophilicity of the adjacent sulfur atom(s) is enhanced [7,8], which renders RSS more active in redox reactions than their thiol forms. Therefore, under physiological conditions, RSS can easily reduce or oxidize other molecules [9,10], and thus regulate biological function by Cys modification. RSS such as CysSSH can either enzymatically bond to cysteinyl-tRNA and co-translationally induce protein polysulfidation by incorporation into polypeptides, or post-translationally modify proteins by transpersulfidation [6,11]. Over the last decade, hydrogen sulfide (H2S) was considered a gaseous transmitter like NO [12,13]. However, a recent quantitative study [14] revealed that cystathionine β-synthase (CBS) and cystathionine γ-lyase (CSE), which were previously regarded as the major enzymes for synthesizing H2S [14,15], are mainly responsible for producing CysSSH. Accumulation of H2S anion (H2S/HS−) inhibits mitochondrial cytochrome c oxidase (COX), which leads to severe respiratory dysfunction with insufficient H2S metabolism and excretion [16].
Echinochrome A (Ech-A), or 7-ethyl-2,3,5,6,8-pentahydroxy-1,4-naphthoquinone, is a natural pigment mainly extracted from sea urchins. The naphthoquinone pigments include more than 40 compounds with different pharmacological [17] and antiradical [18] activities. Ech-A is so far the only such compound that is commercially available for medical use [19]; it is the active ingredient of Histochrome® (Reg. No. in the Russian Federation P N002362/01) [20]. The therapeutic effect of Ech-A has been proven in multiple clinical trials, especially in ophthalmic and cardiac ischemia diseases [20]. Ech-A treatment inhibits the elevation of serum malonic dialdehyde after MI [21]. In clinical studies, Zakirova et al. analyzed the effect of a single intravenous infusion of 100 mg histochrome on MI size and the activity of creatine phosphokinase in 45 patients with acute MI during thrombolytic therapy [22]. Afanas'ev et al. investigated the effect of histochrome on ischemia-triggered ATP depletion in the myocardium of patients with coronary heart disease (CHD). Eight CHD patients received two intravenous injections of 3% histochrome at a dose of 1 mg/kg 24 h prior to operation, and histochrome administration preserved intracellular ATP contents in the myocardium compared with no treatment [23]. In animal models, treatment with Ech-A shows positive outcomes in attenuating cerebral ischemic injury, as well as cardiac ischemia-reperfusion injury [24,25]. The pharmacological potential of Ech-A has been explained by its effective antioxidant [26], anti-inflammatory [27], and anti-fibrotic [28] properties. Ech-A also reportedly protects cardiomyocytes from cardiotoxic agents by alleviating activation of the MAPK signaling pathway and mitochondrial dysfunction [29]. However, the molecular detail of how Ech-A preserves redox homeostasis in the ischemic heart is still unclear.
In this study, we examined whether Ech-A improves chronic heart failure after MI by preventing RSS catabolism to H2S. We also examined whether Ech-A inhibits hypoxia-induced RSS catabolism in human iPS cell-derived cardiomyocytes (hiPSC-CMs) and lipopolysaccharide (LPS)-induced RSS catabolism in bone marrow-derived macrophages (BMDMs), suggesting that the mechanism of cardioprotection by Ech-A involves maintenance of RSS metabolism.
Ech-A Prevents Chronic Heart Failure after MI in Mice
We first examined the chronic effect of Ech-A at three doses (0.2, 0.6, and 2.0 mg/kg/day) on left ventricular (LV) dysfunction after MI in mice. We performed echocardiography before and every week after the operation to track cardiac contractile function and morphological changes over time (Figure 1B and Table 1). The alleviating effect of Ech-A on the decreases in LV ejection fraction (EF) (Figure 1C) and fractional shortening (FS) (Figure 1D) became more pronounced with increasing dose and was substantial in the MI-Ech-A (2.0 mg/kg/day) group compared to the MI-vehicle group at the end point, indicating that the deterioration of cardiac systolic capacity is dose-dependently suppressed by Ech-A treatment. In addition, LVEF and LVFS did not differ between the MI groups before Ech-A infusion (Figure S1F,G). The LV anterior wall thickness at end diastole (LVAWd) gradually thinned after MI (Figure S1C), while it was largely preserved in Ech-A (2.0 mg/kg/day)-treated hearts at 5 weeks after MI (Figure 1E). Furthermore, the elevation of both the LV internal diameter at end diastole and at end systole (LVIDd and LVIDs), a consequence of LV remodeling, was ameliorated by Ech-A (2.0 mg/kg/day) treatment (Figure 1F,G), and Ech-A treatment tended to attenuate the increase in the ratio of heart weight to tibia length (HW/TL), supporting the prevention of LV remodeling by Ech-A at 2.0 mg/kg/day (Figure 1H and Table 2). The change in each parameter over time can be found in Figure S1A-E. These results suggest that Ech-A administration at 2.0 mg/kg/day (i.p.) improves chronic heart failure after MI.
Ech-A Inhibits LV Remodeling and Oxidative Stress after MI
Since Ech-A at 2.0 mg/kg/day ameliorates chronic heart failure, we next examined the effects of Ech-A on LV remodeling after MI; the Ech-A-treated MI heart samples used in all following experiments were from the 2.0 mg/kg/day group. Picrosirius red staining of histological sections revealed remarkable fibrosis in vehicle-treated MI hearts compared to sham-operated hearts. In contrast, treatment with Ech-A significantly reduced the fibrotic area in the non-infarcted (remote) region of the LV myocardium (Figure 2A,B). Furthermore, significant increases in the myocardial cross-sectional area in the remote region were observed in MI hearts, but they were greatly suppressed by Ech-A treatment (Figure 2C,D). This result correlated well with the changes in HW (Figure 1H and Table 2) and indicates that Ech-A attenuates LV hypertrophy after MI. Fibrosis occurred in the infarct area, mainly due to replacement of necrotic cardiomyocytes with extracellular matrix to form the collagen-based scar. Collagen formation is increased in order to distribute elevated LV wall stress more evenly, stabilizing the distending forces and preventing further deformation of the heart [30]. We confirmed that the magnitude of the ischemic scar was equivalent in vehicle-treated and Ech-A-treated MI hearts. In addition, 4-hydroxy-2-nonenal (4-HNE) is a highly cytotoxic and reactive α,β-unsaturated aldehyde produced during lipid peroxidation. 4-HNE is a stable marker of oxidative stress and increases in various pathological models, including coronary artery disease [31]. Ech-A treatment inhibited the production of 4-HNE in MI hearts (Figure 2E,F). Ech-A also enhanced the oxidative stress resistance of MI hearts, as indicated by increased mRNA expression levels of superoxide dismutase 1 and catalase (Figure S2A,B). These results suggest that Ech-A treatment improves chronic heart failure after MI by inhibiting oxidative stress and cardiac remodeling.
Ech-A Attenuates RSS Catabolism of the Heart after MI
RSS, such as CysSSH [32] and GSSSG [33], act as electron acceptors instead of oxygen, leading to reduced by-production of reactive oxygen species (ROS) while resulting in the production of reduced-form sulfides, including H2S/HS−. We next examined whether Ech-A maintains redox homeostasis by preventing RSS catabolism in MI hearts. The SSip-1 DA and SF7-AM probes were applied as R-S(n)SH/R-S(n)S-R and H2S/HS− indicators, respectively [34,35]. Although we could not detect any positive signals of SSip-1 DA or SF7-AM from the scar area of MI hearts, we could semi-quantitatively measure the magnitude of the RSS and H2S/HS− fluorescence intensities in the non-infarct area. SSip-1 DA imaging revealed that the reduction in RSS in the remote region of MI hearts was restored by Ech-A treatment (Figure 3A,C). Correspondingly, SF7-AM imaging showed that Ech-A greatly inhibited the increase in H2S/HS− in the non-infarcted myocardium (Figure 3B,D). These results suggest that RSS is catabolized to H2S in the LV myocardium after MI, and that Ech-A prevents this RSS catabolism.
Ech-A Concentration-Dependently Prevents RSS Catabolism Caused by Hypoxic Stress in Neonatal Rat Cardiomyocytes (NRCMs)
We next investigated the relative concentrations of RSS and H2S/HS− in NRCMs under hypoxic or normoxic conditions, using the fluorescence resonance energy transfer (FRET)-based semi-quantitative RSS detection probe QS10 [36]. In parallel with the in vivo results, Ech-A treatment significantly suppressed the hypoxic stress-induced decline in intracellular RSS levels in NRCMs (Figure 4A,C). Comparably, hypoxic stress caused an accumulation of H2S/HS−, which was also suppressed by Ech-A treatment (Figure 4B,D). Furthermore, Ech-A inhibited the RSS decrease and the H2S/HS− increase with half-maximal inhibitory concentrations (IC50) of 4.3 µM (Figure 4E) and 2.0 µM (Figure 4F), respectively. These results support the in vivo evidence that Ech-A dose-dependently prevents RSS catabolism and H2S/HS− accumulation caused by hypoxic stress in cardiomyocytes.
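As an illustration of how such IC50 values are typically derived, the sketch below fits a four-parameter logistic curve to normalized probe intensities (Python/SciPy). The concentrations and responses are invented; only the reported IC50 values (4.3 and 2.0 µM) come from the paper, and the exact fitting model used by the authors is not stated.

```python
# Hypothetical sketch: estimating an IC50 from normalized dose-response data
# with a four-parameter logistic (4PL) fit. All data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (decreasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM Ech-A, hypothetical
resp = np.array([97.0, 92.0, 75.0, 55.0, 22.0, 8.0])   # % hypoxia-induced signal

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 3.0, 1.0])
print(f"estimated IC50 = {params[2]:.1f} uM")
```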
Ech-A Universally Prevents Cellular RSS Catabolism during Hypoxia and Pseudohypoxia
We further investigated whether hypoxia-induced RSS catabolism and H2S/HS− accumulation occur in hiPSC-CMs. Exposure of hiPSC-CMs to hypoxic stress dramatically reduced the intracellular RSS level, and this reduction was attenuated by Ech-A treatment (Figure 5A,B).
The heart contains not only cardiomyocytes but also various other cells, including fibroblasts, macrophages, and endothelial cells. In particular, macrophage-derived inflammation is closely related to the progression of cardiac remodeling and heart failure [37,38]. LPS is a major component of Gram-negative bacterial cell walls and induces an inflammatory response in macrophages. Indeed, LPS stimulation decreased intracellular RSS levels in bone marrow-derived macrophages (BMDMs), and Ech-A treatment also suppressed this LPS-induced decline (Figure 5C,D).
Discussion
Chronic heart failure due to ischemic heart disease is the world's leading cause of death. We first demonstrated that treatment with Ech-A from one week after MI prevented chronic heart failure in mice, without any apparent side effects. Although several beneficial effects of Ech-A have been reported previously, the underlying molecular mechanism(s) remains obscure. Focusing on the antioxidant properties of Ech-A, we found that Ech-A has the potential to maintain intracellular redox homeostasis under ischemic or inflammatory stress by preserving sulfide metabolism, as assessed using three fluorescent probes (Figures 3-5).
During cardiac ischemia or hypoxia, ROS are vastly produced by enzymatic catalysis [39,40] or by reverse electron transport in the mitochondrial electron transport chain (ETC) [41]. Pathological situations further exacerbate the imbalance between antioxidant systems and ROS; increased oxidative stress results in membrane protein degradation, DNA damage, and lipid peroxidation. This leads to autophagosome accumulation and respiratory chain dysfunction, and eventually causes myocardial cell death [42,43]. Antioxidant systems are mainly composed of antioxidant compounds that directly scavenge free radicals and antioxidant enzymes, including superoxide dismutases (SOD) and glutathione peroxidases (GPX), that catabolize ROS. The radical-scavenging property of naphthoquinone pigments relies on their phenolic hydroxyl groups. Since the hydroxyl substituents of naphthoquinone at the C-5 and C-8 positions (Figure 1A) are prone to form hydrogen bonds with the quinone carbonyls, the hydroxyl groups at C-2, C-3, and C-7 (Figure 1A) are critical for its radical-scavenging capacity [17]. Ech-A is hence considered a superb antioxidant. Ech-A has been shown to increase SOD activity [44] and to elevate glutathione content and GPX activity [45]. The global antioxidant effect of Ech-A is also reflected in reduced inflammation and tissue damage in the liver [45,46], uvea [47], colon [48], and arteries [49]. These results suggest that not only sulfide metabolism but also the antioxidative effect of Ech-A would contribute to its cardioprotective effects.
Hypoxic and inflammatory stress universally reduced intracellular RSS levels in human and rat cardiomyocytes and mouse BMDMs. Consistent with this, brain H2S/HS− levels are increased in mice under hypoxic conditions [16]. It has been previously reported that electrons from the ETC in mitochondria convert Cys-SSH to H2S/HS− [6]. Both oxygen and Cys-SSH are cooperatively used as electron acceptors in the ETC. Under hypoxic conditions, Cys-SSH would be preferentially used instead of oxygen and converted to H2S/HS−. Interestingly, Ech-A treatment inhibited the decline in RSS levels after hypoxia. At present, we have no direct evidence for how Ech-A inhibits hypoxic stress-induced RSS catabolism. However, sulfide anabolism and catabolism are precisely controlled enzymatically. For example, mitochondria-localized cysteinyl-tRNA synthetase mediates Cys-SSH formation using L-cysteine as a substrate [6]. CBS and CSE also form Cys-SSH using cystine [14]. Sulfide quinone oxidoreductase (SQOR) catalyzes the oxidation of H2S, using glutathione (GSH) as an acceptor to form GSSH. Sulfur dioxygenases such as ETHE1 and persulfide dioxygenase convert GSSH to sulfite, releasing GSH [50]. Chronic Ech-A treatment might change the activity or expression level of these sulfide-metabolizing enzymes. Another possibility is direct inhibition of the conversion of Cys-SSH to H2S by scavenging electrons. Future studies will be required to identify how Ech-A maintains cellular sulfide metabolism under hypoxic or inflammatory stress.
The relationship between disturbed sulfide metabolism and the ischemic vulnerability of the heart is an important open question. Although various physiological roles of exogenously applied H2S/HS− have been reported, H2S is generally considered a highly toxic gas to most animals. H2S induces ROS production and disrupts mitochondrial respiration by inhibiting cytochrome oxidase [51,52]. In mouse and rat brains, the expression level of the sulfide-catabolizing enzyme SQOR is significantly lower than in other tissues, which promotes H2S/HS− accumulation [53]. The low expression of SQOR in the brain explains its vulnerability to ischemic (hypoxic) stress through increased H2S production [16]. Although a protective effect of low-dose H2S/HS− against chronic heart failure after MI has been reported [54], recent studies have shown that RSS, which are relatively reactive metabolites compared to H2S, are attracting attention as key regulators in multiple biological processes [55,56]. Indeed, the endogenous H2S/HS− level is apparently lower than the RSS level in NRCMs (Figure 3), suggesting that H2S/HS− is rapidly converted to RSS in normal cardiomyocytes. Persulfides are chemically highly nucleophilic and redox sensitive, and are therefore considerably more competent at scavenging toxic electrophiles and oxidants than thiol groups [14]. Ech-A treatment preserved RSS levels and inhibited oxidative stress in the heart after MI (Figures 2 and 3). Moreover, cysteine persulfide may contribute to energy biogenesis in the mitochondrial ETC as an electron acceptor during hypoxia. In addition to low-molecular-weight persulfides, protein persulfidation may also be involved in the ischemic tolerance of the heart. We previously identified that aberrant mitochondrial fission induced by hyperactivation of dynamin-related protein 1 (Drp1), a mitochondrial fission factor, mediates myocardial senescence and cardiac dysfunction after MI [57]. Drp1 activity is negatively regulated by polysulfidation [6]. Electrophile-mediated depolysulfidation of Drp1 increased cardiac fragility, and NaHS treatment improved mitochondrial and cardiac function by facilitating Drp1 polysulfidation [11]. These results support a beneficial effect of Ech-A on cardiac remodeling and function after MI through protection of RSS levels.
Furthermore, LPS depleted RSS in BMDMs, and Ech-A treatment restored it (Figure 5C,D). LPS promotes hypoxia-inducible factor 1 (HIF-1) activation in THP-1 human myeloid cells [58], reflecting the induction of pseudohypoxic stress. Consistent with this, LPS induced hif-1 expression, together with that of its downstream target heme oxygenase-1, in BMDMs (Figure S2C,D). These results suggest that HIF-1-related signaling might be involved in the regulation of sulfide catabolism under hypoxia and pseudohypoxia. Treatment with RSS donors, such as Na2S4 and NAC polysulfides, reportedly inhibits hypoxia-induced HIF-1 stabilization and upregulation, increases intracellular RSS levels, and inhibits LPS-induced production of pro-inflammatory cytokines such as tumor necrosis factor-α and interferon-β [55,59,60]. Ech-A has beneficial effects on various inflammation-related diseases [27]. Therefore, prevention of sulfide catabolism may underlie the anti-inflammatory effect of Ech-A.
Materials and Methods

Animals
All protocols using mice and rats were reviewed and approved by the ethics committees of the National Institute for Physiological Sciences and were conducted according to the institutional guidelines concerning the care and handling of experimental animals. Nine- to twelve-week-old male C57BL/6J mice and Sprague-Dawley (SD) rats were purchased from Japan SLC, Inc. (Shizuoka, Japan). All mice were kept in plastic cages in a climate-controlled animal room with a 12 h light/dark cycle. Mice were treated with vehicle (0.9% sodium chloride) or Ech-A (0.84, 2.5, or 8.4 mg/mL) via constant intraperitoneal infusion with a mini-osmotic pump (Cat No. 2004, Alzet, DURECT Corporation, Cupertino, CA, USA), which delivers 0.22 µL of solution per hour. The resulting doses of Ech-A in mice were 0.2, 0.6, and 2.0 mg/kg/day, respectively.
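As a quick consistency check of these doses, the following is a minimal sketch in Python; the body weight of roughly 22 g is an assumption for illustration and is not stated in the text:

```python
# Back-of-the-envelope check of the pump dosing (hypothetical 22 g mouse).
PUMP_RATE_UL_PER_H = 0.22   # delivery rate of the Alzet 2004 mini-osmotic pump
BODY_WEIGHT_KG = 0.022      # assumed mouse body weight (not given in the text)

for conc_mg_per_ml in (0.84, 2.5, 8.4):
    ug_per_day = conc_mg_per_ml * PUMP_RATE_UL_PER_H * 24  # mg/mL equals ug/uL
    dose_mg_per_kg_day = ug_per_day / 1000 / BODY_WEIGHT_KG
    print(f"{conc_mg_per_ml} mg/mL -> {dose_mg_per_kg_day:.2f} mg/kg/day")

# Prints roughly 0.20, 0.60, and 2.02 mg/kg/day, matching the reported
# 0.2, 0.6, and 2.0 mg/kg/day under the assumed body weight.
```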
MI Surgery and Transthoracic Echocardiography
Myocardial infarction (MI) was induced by permanent ligation of the left anterior descending (LAD) coronary artery. The LAD artery was ligated 2 to 3 mm distal to the left atrial appendage with an 8-0 silk suture. The intercostal space, pectoralis major, and skin were successively closed with a 5-0 silk suture. A ligation evoking acute myocardial ischemia was regarded as successful when blanching of the anterior left ventricle distal to the ligature and a notable ST-segment elevation on the electrocardiogram were observed. Endotracheal intubation was performed prior to thoracotomy to support mechanical ventilation of the mice. Sham-group mice underwent the same procedure except for the LAD ligation. MI surgery was performed on 9- to 12-week-old male C57BL/6J mice. All surgical procedures were performed in mice anesthetized with 3.0% isoflurane (Pfizer, New York, NY, USA) mixed with air for 30 s and then maintained at 2.0% for MI surgery or 1.0% for echocardiography. A mini-osmotic pump filled with vehicle or Ech-A was implanted intraperitoneally 7 days after MI, and sustained dosing was maintained for 4 weeks. Transthoracic echocardiography was performed using the Vevo3100 imaging system (FUJIFILM VisualSonics, Toronto, Canada) before and every week after the operation. An electrocardiogram was recorded simultaneously with the Vevo3100. Left ventricular fractional shortening (LVFS) was acquired from a parasternal short-axis view in motion-mode echocardiography. The 4D-mode left ventricular ejection fraction (LVEF) was calculated from the 3D geometry and dynamic motion of the left ventricular myocardium.
Tissue Analysis
At 5 weeks after MI, mouse hearts were removed, washed in PBS to drain out blood, and fixed with 4% PFA in PBS for 12 h at 4 °C. Tissues were then successively immersed in 10, 20, and 30% sucrose in PBS for 12 h each at 4 °C, embedded in optimal cutting temperature (O.C.T.) compound (Sakura Finetek, Torrance, CA, USA), and snap-frozen in a bath of 2-methylbutane with crushed dry ice. Heart slices (12 µm thickness) were cut on a cryostat and used for immunofluorescence and polysulfide/hydrogen sulfide staining.
For immunofluorescence staining, heart slices were rinsed with PBS, then permeabilized and blocked with 1% bovine serum albumin and 0.3% Triton X-100 in PBS at room temperature for 1 h. The slices were then incubated with primary antibodies against 4-HNE (1:200) overnight at 4 °C, followed by incubation with a CF594 secondary antibody (1:2000) for 1 h at room temperature. Primary and secondary antibodies were diluted in PBS with 0.05% Triton X-100. Samples were washed three times with PBS and then mounted with ProLong Diamond Antifade Mountant (Invitrogen, Eugene, OR, USA). Slides were observed under a BZ-X700 microscope (Keyence, Osaka, Japan).
For polysulfide and sulfide staining, heart slices were stained with or without 10 µM SSip-1 (for detecting polysulfide) in HBSS containing 0.1% BSA and 0.01% Cremophor EL, or with 5 µM SF7-AM (for detecting hydrogen sulfide) in HBSS with 0.04% Pluronic F-127, for 45 min at room temperature. Samples were washed three times with PBS and then mounted with ProLong Diamond Antifade Mountant (Invitrogen). Samples were observed using a BZ-X700 microscope (Keyence, Osaka, Japan). For MI samples, only the fluorescence intensity in the non-infarct region of the myocardium was quantified. For quantification, the background fluorescence intensity without probe was subtracted from the fluorescence intensity with probe.
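A minimal sketch of this background-subtraction step in Python with NumPy; the array names, the random stand-in data, and the ROI mask are hypothetical, since the actual image-analysis pipeline is not described:

```python
import numpy as np

def mean_probe_signal(img_with_probe, img_without_probe, roi_mask):
    """Background-subtracted mean fluorescence in a region of interest.

    img_with_probe / img_without_probe: 2D intensity arrays from matched slices.
    roi_mask: boolean array marking the (non-infarct) myocardium to quantify.
    """
    signal = img_with_probe[roi_mask].mean()
    background = img_without_probe[roi_mask].mean()
    return signal - background

# Hypothetical example with random data standing in for microscope images.
rng = np.random.default_rng(0)
img = rng.uniform(100, 200, size=(512, 512))   # slice stained with SSip-1
bg = rng.uniform(90, 110, size=(512, 512))     # matched slice without probe
mask = np.ones((512, 512), dtype=bool)         # whole field as ROI
print(mean_probe_signal(img, bg, mask))
```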
Conclusions
Chronic administration of Ech-A prevented cardiac dysfunction after MI in a mouse model. Fluorescence imaging showed that hypoxic stress promotes the conversion of intracellular RSS into H2S/HS−, and that this sulfide catabolism is suppressed by Ech-A in rat cardiomyocytes and human iPS cell-derived cardiomyocytes. In this study, we revealed the therapeutic potential of Ech-A for chronic heart failure after MI in mice. Prevention of RSS catabolism and H2S/HS− accumulation in cells under hypoxic and inflammatory stress by Ech-A may provide a new strategy for the treatment of chronic heart failure. Future studies focusing on how hypoxic stress unbalances sulfide catabolism and how Ech-A corrects it are necessary to understand the importance of sulfur metabolism in cardiovascular homeostasis and disease.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/xxx/s1, Figure S1: Temporal changes in echocardiographic parameters; Figure S2: Gene expression levels of antioxidant enzymes in mouse heart and hypoxic markers in BMDM; Table S1: Primer sequences for qPCR.
Healthcare vulnerability disparities in pancreatic cancer treatment and mortality using the Korean National Sample Cohort: a retrospective cohort study
The gap in treatment and health outcomes after diagnosis of pancreatic cancer is a major public health concern. We aimed to investigate the differences in the health outcomes and treatment of pancreatic cancer patients in healthcare vulnerable and non-vulnerable areas. This retrospective cohort study evaluated data from the Korea National Health Insurance Corporation-National Sample Cohort from 2002 to 2019. The position value for relative comparison index was used to define healthcare vulnerable areas. Cox proportional hazards regression was used to estimate the risk of mortality in pancreatic cancer patients according to healthcare vulnerable areas, and multiple logistic regression was used to estimate the difference in treatment. Among 1,975 patients, 279 (14.1%) and 1,696 (85.9%) lived in the healthcare vulnerable and non-vulnerable areas, respectively. Compared with the non-vulnerable area, pancreatic cancer patients in the vulnerable area had a higher risk of death at 3 months (hazard ratio [HR]: 1.33, 95% confidence interval [CI] = 1.06–1.67) and 6 months (HR: 1.23, 95% CI = 1.03–1.48). In addition, patients with pancreatic cancer in the vulnerable area were less likely to receive treatment than patients in the non-vulnerable area (odds ratio [OR]: 0.70, 95% CI = 0.52–0.94). This trend was further emphasized for chemotherapy (OR: 0.68, 95% CI = 0.48–0.95). Patients with pancreatic cancer living in medically disadvantaged areas receive less treatment and have a higher risk of death. This may be a result of the late diagnosis of pancreatic cancer among these patients.
Introduction
Worldwide, pancreatic cancer is the 12th most common malignancy and the 7th leading cause of cancer mortality [1]. The prognosis for pancreatic cancer is poor, with long-term survival rates of 9% despite various advances in combination therapy [2]. Surgical resection remains the only potential cure for pancreatic cancer. However, since most tumors are locally advanced or metastatic at the time of diagnosis, only 15% to 20% of patients are eligible for resection [3]. Several studies have reported that the 5-year survival rate of surgical patients is as high as 30% [4][5][6]. However, surgical approaches such as removal of most pancreatic infiltrates are feasible only when pancreatic cancer is diagnosed at an early stage; chemotherapy is preferred when the diagnosis occurs at later stages [7,8]. Surgery and even chemotherapy may not be feasible if pancreatic cancer is discovered at an advanced stage [7,8]. Therefore, the stage of pancreatic cancer at diagnosis is particularly important, as it has a significant impact on treatment options and survival [9,10]. A major concern for geographically disadvantaged populations, which have poor survival rates, is therefore the disparity in the stage at which cancer is diagnosed [11].
Research on disparities in pancreatic cancer outcomes comes predominantly from Western countries, such as the United States, and has focused on disparities by race and ethnicity or by type of health insurance [12,13]. Attention to racial and ethnic imbalances is key to improving outcomes, but patients living in healthcare-vulnerable areas, who face socioeconomic problems and travel costs, are often overlooked, although these factors may affect both treatment options and pancreatic cancer outcomes [14]. Residence in a healthcare-vulnerable area can lead to differences in the stage at which cancer is diagnosed and in treatment availability [15]. Apart from racial disparity, economic status, and insurance coverage [16][17][18], the role of regional disparity in pancreatic cancer outcomes remains unclear. Thus, there is limited evidence of regional disparity in early diagnosis, post-diagnosis treatment, and health outcomes among patients diagnosed with pancreatic cancer.
In addition, previous studies have tended to dichotomize regional disparities into rural and urban areas [14]. However, patients living in remote areas may have greater difficulty accessing timely care. Therefore, it is necessary to comprehensively consider the level of healthcare between regions and to specifically investigate pancreatic cancer treatment after diagnosis and the associated health outcomes.
To define the level of healthcare between regions, we used the position value for relative comparison (PARC) index, a measure that allows relative evaluation of the level of healthcare between regions. The PARC index has been widely used in previous studies to diagnose the level of healthcare by region [15][16][17]. Therefore, this study aimed to classify healthcare-vulnerable and non-vulnerable areas using the PARC index with Korean nationwide claims data and to investigate differences in treatment and health outcomes of pancreatic cancer patients between these areas. We investigated the following two hypotheses: 1) all-cause mortality will be higher in vulnerable areas than in non-vulnerable areas, and patients who have not undergone surgery or chemotherapy will have higher all-cause mortality than those who have; 2) compared with non-vulnerable areas, fewer surgeries and less chemotherapy, which can serve as indirect indicators of early diagnosis, will be performed in vulnerable areas.
Materials and methods
All data are available in the database of the Korean National Health Insurance Sharing Service (https://nhiss.nhis.or.kr) and can be accessed upon reasonable request. This study was reviewed and approved by the Institutional Review Board of the Yonsei University Health System (IRB number: Y-2020-0031) and adheres to the tenets of the Declaration of Helsinki. The Korea National Health Insurance Service-National Sample Cohort (NHIS-NSC) data do not contain any identifying information. Due to the retrospective nature of the study, the requirement to obtain informed consent was waived.
Study population and data
The data analyzed in this study were acquired from the Korean National Health Insurance Service-National Sample Cohort (NHIS-NSC) covering 2002 to 2019, provided by the National Health Insurance Service (NHIS). The Korean NHIS provides researchers with all claims data collected by the corporation for academic research and policymaking. The data for this study were collected from insurance claims and included demographic information, diagnoses, medications, costs, dates of visits, and dates of death, if applicable. As of 2002, 46,605,433 of the 47,851,928 people in Korea, excluding foreigners, were eligible for the sample cohort. From the full NHIS database, a representative sample cohort of 1,025,340 people was randomly stratified, corresponding to 2.2% of the total population of Korea [18]. Follow-up data were available through 2019 and included information on medical claims filed between 2002 and 2019.
During the study period, a total of 3,454 patients were newly diagnosed with pancreatic cancer according to the International Classification of Diseases (ICD-10 code: C25). First, patients diagnosed with pancreatic cancer between 2002 and 2003 were excluded to ensure a pancreatic cancer-free period of at least 2 years (n = 227). This eliminated the effects of pancreatic cancer that might have occurred prior to the cohort observation period. Second, to increase the accuracy of new cancer diagnoses, patients without the V027, V193, or V194 codes, which are domestic cancer-specific co-payment codes, were excluded (n = 1,146) [19]. Finally, participants with missing information on covariates such as age, sex, social security status, disability, and household income level were excluded, as were those under 19 years of age at the time of pancreatic cancer diagnosis (n = 106). After these exclusions, 1,975 patients with a first diagnosis of pancreatic cancer were included in the study.
The last date of follow-up was defined as the date of death or December 31, 2019, whichever occurred first. Index date (time 0) was defined as the date of the first pancreatic cancer diagnosis that met the eligibility criteria for a patient with pancreatic cancer (either outpatient care or inpatient care; ICD-10: C25).
Study variables and covariates
In this study, the variable of interest was residence in a healthcare-vulnerable region. The PARC index was used to diagnose the level of healthcare by region in Korea [15][16][17]. PARC is an objective indicator that can identify a region's relative position compared with other regions with respect to medical demand, supply, access, use, and health conditions. The PARC value ranges from -1 to 1 relative to the average of all regions: 1 is best, 0 is average, and -1 is worst [15]. Thus, a PARC value closer to -1 indicates a lower-than-average level of healthcare in the area, whereas a value closer to 1 indicates a higher level [15]. In this study, a region was classified as healthcare vulnerable when its PARC value was less than -0.33.
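As a minimal sketch of this classification rule in Python (the region names and PARC values below are hypothetical):

```python
# Classify regions as healthcare vulnerable using the PARC < -0.33 cutoff.
parc_by_region = {        # hypothetical PARC values for illustration
    "Region A": 0.41,
    "Region B": -0.12,
    "Region C": -0.48,
}

VULNERABLE_CUTOFF = -0.33

for region, parc in parc_by_region.items():
    status = "vulnerable" if parc < VULNERABLE_CUTOFF else "non-vulnerable"
    print(f"{region}: PARC={parc:+.2f} -> {status}")
```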
The primary outcome variable in this study was all-cause mortality. Although most previous studies on pancreatic cancer report 5-year survival rates [20,21], our study measured mortality at 3 months, 6 months, and 1 year, as pancreatic cancer is characterized by an average survival of 3 to 6 months and a poor prognosis [22,23]. In the NHIS data, each patient's unique de-identified number was linked to the mortality information of the National Statistical Office [18]. The time from index date to date of death was used to define survival time. The secondary outcome variable was treatment according to healthcare vulnerability. As treatment choice would indirectly indicate the time taken for initial diagnosis, treatment was divided into surgery and chemotherapy. Based on the available literature and expert opinion, eight surgical procedures for pancreatic cancer were included, as were the drugs gemcitabine and 5-fluorouracil (Supplementary Table 1) [24,25].
Possible confounding factors in this study were variables that could affect mortality and treatment availability in patients with pancreatic cancer: age, sex, social security status, disability, household income level, Charlson Comorbidity Index (CCI), and year of pancreatic cancer diagnosis. The CCI is an index for evaluating a participant's comorbidities that may alter the risk of death, for use in longitudinal studies. The score was calculated by weighting 19 comorbidities with 1 to 6 points each. The categories included in the CCI are myocardial, vascular, lung, endocrine, kidney, gastrointestinal, cancer/immune, and neurological comorbidities [26]. Participants' CCI scores were calculated using the ICD-10 codes for each comorbidity [27]. Participants were divided into three groups according to their CCI scores: 0, 1-2, and ≥ 3. Age was classified considering the high incidence of pancreatic cancer after the age of 50 years (< 50, 50-60, 60-70, 70-80, or > 80 years) [28]. Social security status was classified according to health insurance premiums into the employee-insured or self-employed-insured categories, following the standards of Korea's NHIS. Medical aid beneficiaries were persons with disabilities with incomes below government-set poverty standards, or people eligible for free inpatient and outpatient care from the government. Household income level was classified into three categories according to household-level insurance premiums: low, mid, and high. Disability was classified into two categories (yes and no) depending on whether a disability rating had been determined.
Statistical analyses
At baseline (time 0), the frequency and percentage of each categorical variable were calculated, and chi-squared tests were performed to investigate the distribution of mortality according to each variable. We also investigated the distribution of mortality from pancreatic cancer (ICD-10 code: C25). A Cox proportional hazards model was used to investigate the association between healthcare vulnerability, treatment status, and all-cause mortality in patients with a first diagnosis of pancreatic cancer. All Cox proportional hazards models were fully adjusted for the covariates presented in Table 1. The results are presented as hazard ratios (HRs) with 95% confidence intervals (CIs). We also performed analyses stratified by pancreatic cancer treatment, sex, and household income to investigate the association between healthcare vulnerability and mortality. Further, to examine whether pancreatic cancer treatment differed according to healthcare vulnerability, a multiple logistic regression analysis was performed after adjusting for all the covariates presented in Table 1. Furthermore, a multinomial logistic regression analysis was performed to examine treatment according to healthcare vulnerability, with treatment divided into surgery and chemotherapy. The results are reported as odds ratios (ORs) with 95% CIs. All statistical analyses were performed using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA). Statistical significance was set at p < 0.05.

Results

The general characteristics of the study population are summarized in Supplementary Table 2. All-cause mortality after the initial diagnosis of pancreatic cancer was noted in 477 (24.2%), 784 (39.7%), and 1,139 (57.7%) patients at 3 months, 6 months, and 1 year, respectively. Healthcare vulnerability and treatment status in first-diagnosed pancreatic cancer cases showed significant differences in mortality at 3 months, 6 months, and 1 year. Furthermore, pancreatic cancer-specific mortality among patients with a first diagnosis of pancreatic cancer was 382 (19.3%), 625 (31.6%), and 895 (45.3%) at 3 months, 6 months, and 1 year, respectively. However, there was a statistically significant difference between healthcare vulnerability and pancreatic cancer-specific mortality only at 3 and 6 months (Supplementary Table 3). Table 2 shows the results of the survival analysis using the Cox proportional hazards model, which investigated the association of healthcare-vulnerable areas and treatment status with all-cause mortality in pancreatic cancer patients. Pancreatic cancer patients in vulnerable areas had higher mortality at 3 months (HR = 1.33, 95% CI = 1.06-1.67), 6 months (HR = 1.23, 95% CI = 1.03-1.48), and 1 year (HR = 1.13, 95% CI = 0.96-1.33) compared with patients in non-vulnerable areas, although the 1-year all-cause mortality difference was not statistically significant. Moreover, the group that did not receive treatment for pancreatic cancer had higher mortality at 3 months (HR = 4.85, 95% CI = 3.69-6.37), 6 months (HR = 2.91, 95% CI = 2.44-3.46), and 1 year (HR = 1.78, 95% CI = 1.56-2.03) compared with the group that received treatment, and these differences were statistically significant. Table 3 shows the results of subgroup analyses stratified by pancreatic cancer treatment, sex, and household income.
Using the healthcare non-vulnerable group as the reference, we found that within the healthcare-vulnerable group, mortality was higher among those not treated for pancreatic cancer (3 months, HR = 1.36, 95% CI = 1.07-1.72; 6 months, HR = 1.23, 95% CI = 1.00-1.51) and among males (3 months, HR = 1.62, 95% CI = 1.22-2.15; 6 months, HR = 1.54, 95% CI = 1.22-1.94; 1 year, HR = 1.30, 95% CI = 1.06-1.60). Regarding household income, the highest mortality was observed in the low-income group (3 months, HR = 1.64, 95% CI = 1.00-2.70; 1 year, HR = 1.45, 95% CI = 1.04-2.04). Table 4 shows the results of the multiple logistic regression and multinomial logistic regression analyses, which were performed to identify the association between treatment and healthcare vulnerability after adjusting for all the variables in Table 1. Pancreatic cancer patients in vulnerable areas were less likely to receive treatment than patients in non-vulnerable areas (OR = 0.70; 95% CI = 0.52-0.94). Furthermore, pancreatic cancer patients in vulnerable areas were less likely to undergo surgery (OR = 0.74; 95% CI = 0.50-1.11) and chemotherapy (OR = 0.68; 95% CI = 0.48-0.95) than patients in non-vulnerable areas; however, this association was statistically significant only for chemotherapy.
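To make the survival modeling concrete, the following is a minimal sketch in Python of the kind of Cox regression reported above, using the lifelines package; the data frame and column names are hypothetical stand-ins, and the actual models additionally adjusted for age, sex, income, CCI, and diagnosis year:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per patient, with survival time,
# a death indicator, and the two main exposures from the Methods.
df = pd.DataFrame({
    "survival_days": [45, 200, 365, 90, 310, 150, 400, 280],
    "death":         [1,  1,   0,   1,  0,   1,   0,   0],
    "vulnerable":    [1,  0,   0,   1,  0,   1,   1,   0],   # PARC < -0.33
    "treated":       [0,  1,   1,   0,  1,   1,   0,   1],   # surgery or chemo
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="death")
# Hazard ratios with 95% confidence intervals, as reported in Table 2.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```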
Discussion
Pancreatic cancer remains a leading cause of cancer-related death, and its relative burden continues to increase, with limited progress in prevention and treatment. Although pancreatic cancer can affect any patient population regardless of demographics, certain patient groups may have higher mortality rates than others owing to a disproportionate burden of delayed cancer treatment [29].
Our findings highlight the differences between vulnerable and non-vulnerable areas in pancreatic cancer treatment and health outcomes such as mortality. First, we found that mortality among pancreatic cancer patients in vulnerable areas was higher up to 6 months than among patients in non-vulnerable areas. Furthermore, mortality was high in the group that did not receive treatment after diagnosis, regardless of region. Second, compared with patients in non-vulnerable areas, pancreatic cancer patients in vulnerable areas were less likely to receive treatment, especially chemotherapy. This indirectly indicates that diagnosis is delayed for pancreatic cancer patients in vulnerable areas compared with those in non-vulnerable areas. Therefore, our results show that pancreatic cancer patients in healthcare-vulnerable areas have a higher mortality rate and are less likely to be diagnosed at an early stage than patients in non-vulnerable areas.
Our finding of regional differences in mortality and treatment among pancreatic cancer patients is consistent with previous studies [14,[29][30][31]. According to previous studies, pancreatic cancer patients in rural areas had a higher mortality rate than those in urban areas. Furthermore, income level, race, ethnicity, lifestyle, and insurance status have been identified as factors related to death in pancreatic cancer patients [29,30,[32][33][34]. However, recent studies have focused on regional disparities [14,29,30]. In the case of pancreatic cancer, the survival rate after treatment (surgery or chemotherapy) is high, yet surgical treatment is performed comparatively less often in healthcare-vulnerable areas than in non-vulnerable areas [4,5]. Our study confirms this as well.
The regional disparity in mortality among pancreatic cancer patients is complex but can be explained by a few mechanisms. A lack of medical resources and low accessibility of medical care in healthcare-vulnerable areas may be the key reasons for delays in diagnosis that render surgical treatment or chemotherapy unfeasible [14,30]. In healthcare-vulnerable areas, the availability of surgical specialists and centers is low, and pancreatic surgery is technically difficult; moreover, a surgeon who has not received specialized training in pancreatic cancer surgery may be less likely to suggest resection [30,35]. In Korea, most tertiary hospitals are concentrated in the metropolitan area and are equipped with diverse professional manpower, high-quality radiation treatment facilities, and high-quality medical services [36]. The lack of medical resources in healthcare-vulnerable areas lowers the early diagnosis rate of pancreatic cancer among patients living there, reducing their likelihood of receiving treatments such as surgery or chemotherapy [14]. Indeed, in pancreatic cancer, interventions such as resection are common after early diagnosis, whereas only chemotherapy is deemed suitable after delayed diagnosis [37]. Thus, delays in surgical treatment due to late screening affect mortality [38]. As reported in several previous studies, pancreatic cancer in rural patients is usually diagnosed at an advanced stage [14,39]. This lack of medical resources may explain why pancreatic cancer progresses to later stages in patients living in healthcare-vulnerable areas. Over the last 20 years, Korea has strengthened medical services and accessibility in vulnerable areas through policies such as the Basic Health and Welfare Plan for Rural Areas and the establishment of Regional Local Accountable Care hospitals, in order to narrow health imbalances between regions in access to medical care and the distribution of medical resources [40,41]. Despite these improvements, our findings suggest that regional health disparities remain a potential obstacle. In particular, challenges such as limited access to medical care and lack of resources can delay diagnosis and increase the risk of death [38]. Thus, policymakers should ensure that healthcare resources are distributed more evenly across regions, considering the accessibility needs of patients living in particularly vulnerable areas. Moreover, health disparities between regions should be monitored through routine assessment of regional health outcomes. In our study, mortality was higher among pancreatic cancer patients in vulnerable areas than in non-vulnerable areas, especially in the untreated group, males, and low-income groups. Even within the untreated group, the difference in mortality between healthcare-vulnerable and non-vulnerable areas may result from differences in health behavior as well as differences in medical resources [32]. In patients with pancreatic cancer, unhealthy behaviors such as smoking, alcohol consumption, and obesity are strongly associated with poor health outcomes [33,42].
In Korea, rural residents were more likely to smoke or be physically inactive than urban residents, and our results can be interpreted in light of these preceding studies [43]. Furthermore, among Korean male pancreatic cancer patients, the difference in mortality between healthcare-vulnerable and non-vulnerable areas can be interpreted as a result of differences in health behaviors such as smoking and drinking [44]. Finally, within the low-income group of pancreatic cancer patients, the difference in mortality between healthcare-vulnerable and non-vulnerable areas is likely the result of differences in healthcare resources. In a previous study of patients with similar low-income backgrounds, those receiving chemotherapy or radiation treatment without surgery resided predominantly in the Seoul metropolitan area, where palliative care centers are concentrated [45]. This indicates that, even within the low-income group, residence in a healthcare-vulnerable area affects mortality [46].
Our study has several strengths over previous studies. First, we did not simply categorize areas as urban or rural but rather applied the PARC index, which incorporates 137 indicators across five domains (medical demand, supply, access, use, and health outcomes) [47]. Thus, our classification of healthcare-vulnerable areas is more accurate than in previous studies. Second, to the best of our knowledge, our study is the first in Korea to examine the mortality rate and treatment availability of pancreatic cancer patients according to regional differences using national data. Previous studies have examined the relationship between the interval from cancer diagnosis to treatment and mortality in lung or gastric cancer patients, but these studies did not consider regional disparity [38,48]. Although the use of claims data has limitations, we used nationwide cohort data representing the general population of Korea. Therefore, the results of this study can be generalized to the Korean population, and to other countries with similar demographic characteristics, and can provide a basis for alleviating regional disparities among pancreatic cancer patients.
Nevertheless, this study has certain limitations. First, cancer staging could not be adjusted for owing to data limitations. To overcome this obstacle and enhance the homogeneity of the study population, we included only patients with an initial diagnosis of pancreatic cancer who had not undergone previous pancreatic cancer-related surgeries or procedures. Treatment types that could indirectly reflect staging were also included separately in the analysis. However, the effect of other factors affecting treatment cannot be eliminated; hence, additional robust studies are needed to elucidate these associations. Second, because this study used billing data, we could not incorporate several potential covariates into the analysis, including education level, household size, health literacy, smoking, and alcohol consumption, which could affect mortality in patients with pancreatic cancer. Therefore, the potential presence of residual confounding cannot be completely excluded. Furthermore, regardless of pancreatic cancer, living in a healthcare-vulnerable area may itself be associated with lower survival; care should therefore be taken in interpreting the results. Nevertheless, we incorporated relevant demographic and health-related factors, including disability status, comorbidities, and treatment types, to mitigate these limitations. Finally, our study could not elucidate the precise mechanisms underlying regional differences in treatment or mortality among pancreatic cancer patients. Future work is needed to establish robust associations for the factors driving these regional disparities.
Conclusions
In conclusion, this study identified regional disparities in treatment and mortality after cancer diagnosis by dividing pancreatic cancer patients according to residence in healthcare-vulnerable and non-vulnerable areas, using the PARC index in a large, nationally representative sample. Patients with pancreatic cancer in healthcare-vulnerable areas were less likely to receive treatment (especially chemotherapy) than patients in non-vulnerable areas, and their mortality rate was also higher. This may be a result of delayed diagnosis. The results of this study therefore highlight the need for further research to identify inter-regional factors related to the treatment and survival of pancreatic cancer patients.
Semi-Supervised Learning with Heterophily
We propose a novel linear semi-supervised learning formulation that is derived from a solid probabilistic framework: belief propagation. We show that our formulation generalizes a number of label propagation algorithms described in the literature by allowing them to propagate generalized assumptions about influences between classes of neighboring nodes. We call this formulation Semi-Supervised Learning with Heterophily (SSL-H). We also show how the affinity matrix can be learned from observed data with a simple convex optimization framework that is inspired by locally linear embedding. We call this approach Linear Heterophily Estimation (LHE). Experiments on synthetic data show that both approaches combined can learn the heterophily of a graph with 1M nodes, 10M edges, and few labels in under 1 min, and give better labeling accuracies than a baseline method when only a small fraction of nodes is explicitly labeled.
INTRODUCTION
Graph-based Semi-Supervised Learning (SSL) methods define a graph where the nodes are labeled and unlabeled examples in the dataset, and where (potentially weighted) edges reflect the similarity of examples [26]. Given this input, the goal of SSL is to infer the labels of the unlabeled data. Existing methods commonly assume "smoothness" of labels over the graph, i.e. a certain homophily or assortative mixing property in the network by which "birds of a feather flock together." However, the reverse is often true in actual data; this is called heterophily ("opposites attract"). Previous work has looked into ways to adapt existing SSL frameworks to handle both similarity and dissimilarity (e.g., [7] and [21]). These methods are limited to expressing binary dependencies between nodes (e.g., more similar or more dissimilar). We are interested in more general constraints among classes and also in how to learn them from data. For example, assume a social dating network with three different classes of users. Class 1 prefers to date users of class 2 (and vice versa), whereas users of class 3 prefer to date among themselves (see Fig. 1a). We call these relations simply heterophily relations, as they naturally generalize notions of similarity/dissimilarity and assortative or disassortative mixing.

[Figure 1: The problem we are studying in this paper: We are given a partially labeled graph (i.e. the adjacency matrix W and a few classes of nodes), but we do not know the relative affinities between classes. How can we efficiently and simultaneously learn (i) the relative affinities and (ii) the labels for the unlabeled nodes?]

In this paper, we show how existing SSL methods (e.g., those based on Local and Global Consistency (LGC) [25] or Harmonic Function methods (HF) [27]) can be generalized in a natural way so as to propagate heterophily from labeled to unlabeled data. This allows us to propagate label information through a graph in the presence of heterophily, thus generalizing the commonly implied smoothness assumption between nodes of similar labels. In this regard, this work draws heavily upon and provides a generalization of our recent work on linearizing the update equations of belief propagation [6].
Our key idea relies on well-known and intuitive facts about Markov chains and the properties of symmetric doubly stochastic matrices. We illustrate with the help of Fig. 2c: when a stochastic row vector $x$ (which, e.g., represents the inferred label distribution of a certain node) is transformed (or "modulated") with a doubly stochastic matrix $H$, then the resulting vector $x' = xH$ is still stochastic. (Recall that a Markov process is described by a state transition matrix $T$, where $T_{i,j} = P[X^{(t+1)} = i \mid X^{(t)} = j]$ is the probability of ending up in state $i$ when starting at $j$ in the previous step; in that formalism, $T$ is column stochastic and the column-wise sums are equal to 1: $\sum_i T_{i,j} = 1$.) This simple fact allows us to express a node's label distribution based on the label distributions of its neighbors and, furthermore, to express arbitrary similarity or dissimilarity relationships between neighboring nodes. We call such a modulating matrix $H$ interchangeably the affinity, coupling, modulation, or heterophily matrix, as it captures the pair-wise preference or coupling strengths between nodes and their classes in a network. This, in turn, allows us to derive a generalized formulation of standard semi-supervised learning approaches, and to recover them by using the identity matrix $I_k$ as affinity matrix. The original Linear Belief Propagation (LinBP) formulation is one of several possible linear label propagation approaches. We thus call our first contribution Semi-Supervised Learning with Heterophily (SSL-H).

[Figure 2: (a) In standard Semi-Supervised Learning (SSL), we learn a labeling function f for unlabeled data (shown as red part of f) based on partially labeled data (shown as hatched part of f) by using the known graph structure W and various assumptions of smoothness (= homophily). (b) In our generalization, we allow arbitrary assumptions of coupling strengths between classes of neighboring nodes (e.g., "opposites attract"). (c) We express such arbitrary couplings with the help of a simple linear transformation by a symmetric and doubly stochastic affinity matrix H: x' = x H. In this paper, we show how to simultaneously learn both the heterophily matrix H and the missing labels from existing data.]
Our second key contribution is a simple way to learn the heterophily matrix $H$ from the data. Here, we draw an interesting connection to the Locally Linear Embedding (LLE) [15] framework. While the problems in LLE and our setup are different (LLE tries to find an optimal linear embedding across neighbors to reduce the dimensionality of the data; we find an optimal "heterophily explanation" between the classes of neighbors in a network), the mathematical formalism is similar. We thus call our second contribution Linear Heterophily Estimation (LHE). Together, SSL-H and LHE allow us both (i) to learn heterophily and (ii) to label unlabeled data (see Fig. 2b).
Contributions. The two main contributions are thus:
1. Semi-Supervised Learning with Heterophily (SSL-H): We generalize semi-supervised learning to general heterophily assumptions. The commonly used smoothness assumption is the important special case with the identity matrix as affinity matrix.
2. Linear Heterophily Estimation (LHE): We show how heterophily can be learned from existing partially labeled data, even in the presence of few labels. This resolves the issue that the propagation matrix would otherwise have to be supplied by domain experts.

Outline. Section 2 reviews existing work that our approach builds upon. Section 3 gives our first contribution and shows how to generalize semi-supervised learning to heterophily. Section 4 gives our second contribution and shows how heterophily can be learned from partially labeled data. Section 5 gives experiments, Section 6 reviews related work, and Section 7 concludes.
BACKGROUND
Here we describe three highly related bodies of work that the methods described in this paper build upon.
Semi-Supervised Learning (SSL)
Semi-supervised learning (SSL) methods usually derive their formalism by motivating a loss function consisting of (i) a fit term to existing labels, e.g. $(f_i - x_i)^2$, where $x_i$ denotes the given label and $f_i$ the learned label, and (ii) a smoothness term or regularizer, e.g. $(f_i - f_j)^2$. We focus our discussion here only on the most important aspects of SSL and refer to [26] and [4] for two excellent surveys on SSL. We follow here the exposition of [1] with two notable restrictions: (i) we assume a symmetric graph structure $W = W^\top$; (ii) we allow relabeling of already labeled nodes (i.e., $(f_i - x_i)^2 \neq 0$ is possible). We focus on three methods in particular:
1. The harmonic function method (HF) [27] minimizes the loss function $E(f) = \sum_{i \leq \ell} (f_i - x_i)^2 + \frac{\mu}{2} \sum_{i,j} W_{ij} (f_i - f_j)^2$. Figure 3 shows $E(f)$ written using the Laplacian matrix $L = D - W$.
2. Linear neighborhood propagation (LNP) [22] is a variation in which the weights of the neighbors of a node $i$ need to sum up to 1: $\sum_j W_{ij} = 1$. The update equation can thus be written slightly more simply by using $\alpha = \frac{\mu}{1+\mu}$.
3. The local and global consistency method (LGC) [25] is similar to HF except that it normalizes the labeling function by the square root of the degree of each node. The energy function can thus be written using the normalized Laplacian matrix $L_n = D^{-1/2} L D^{-1/2}$, and the update equations by using $L^* = I - L_n$.

Multi-class classification with homophily. It is easy to extend existing label propagation algorithms to multi-class classification problems [22] by assigning a vector to each node, where each entry represents the belief of a node in a particular labeling class. Suppose there are $k$ classes, so that the label set becomes $L = \{1, 2, \ldots, k\}$. Let $\mathcal{M}$ be the set of $n \times k$ matrices with nonnegative real-valued entries. Any matrix $F \in \mathcal{M}$, with rows $F_{1:}, F_{2:}, \ldots, F_{n:}$, corresponds to a specific classification on $X$ that labels node $i$ as $y_i = \arg\max_{j \leq k} F_{ij}$. Thus, $F$ can be viewed as an $n \times k$-dimensional label matrix, or as a function that assigns labels to each data point. Initially, we set $F_0 = T$, where $T_{ij} = 1$ if $x_i$ is labeled as $j$ and $T_{ij} = 0$ otherwise; for unlabeled points $u$, $T_{uj} = 0$ for $1 \leq j \leq k$.
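As a minimal sketch of this multi-class propagation with homophily in Python/NumPy, the following runs an LGC-style iteration $F \leftarrow \alpha L^* F + (1-\alpha) T$; the toy graph and the choice $\alpha = 0.5$ are hypothetical:

```python
import numpy as np

# Toy symmetric graph with 5 nodes (hypothetical example).
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Initial labels T: node 0 is class 0, node 4 is class 1; rest unlabeled.
T = np.zeros((5, 2))
T[0, 0] = 1.0
T[4, 1] = 1.0

# Normalized propagation matrix L* = D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_star = D_inv_sqrt @ W @ D_inv_sqrt

# Iterate F <- alpha * L* F + (1 - alpha) * T until (approximate) convergence.
alpha, F = 0.5, T.copy()
for _ in range(100):
    F = alpha * L_star @ F + (1 - alpha) * T

print(F.argmax(axis=1))  # predicted class per node
```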
[Figure 3: Table of loss functions and closed-form solutions for previous methods for the binary case (HF [27], LGC [25]) and the multi-class case, together with the formulations shown in this paper; FaBP and LinBP are discussed separately.]

Importantly, notice that labels for different classes do not interact with each other. In other words, they have no influence on each other.
Locally Linear Embedding (LLE)
Locally linear embedding (LLE) [15] is a method to derive compact representations of high-dimensional data by building a linear relationship among neighboring points. It was originally proposed as an unsupervised learning algorithm. We will later use its formulation for our SSL scenario, namely estimating the heterophily matrix H instead of the embedding W from data. We follow here exactly the exposition in the original paper [11] while using our notation.
The LLE algorithm assumes the data consist of $n$ real-valued vectors $x_i$, each of dimensionality $k$. LLE then reconstructs each data point from its neighbors by constructing a locally linear combination. Reconstruction errors are measured by the loss function
$$\varepsilon(W) = \sum_i \Big\| x_i - \sum_j W_{ij} x_j \Big\|^2 , \qquad (1)$$
which adds up the squared distances between all the data points and their reconstructions. The weights $W_{ij}$ summarize the contribution of the $j$-th data point to the $i$-th reconstruction. To compute the weights $W_{ij}$, one minimizes this cost function subject to the constraint that all rows of the weight matrix sum to one: $\sum_j W_{ij} = 1$. The optimal weights $W_{ij}$ are found by solving a least-squares problem and can be computed in closed form.
Consider a particular data point $x$ with neighbors $\eta_j$ and sum-to-one reconstruction weights $w$. The reconstruction error $\| x - \sum_{j=1}^{K} w_j \eta_j \|^2$ is minimized in three steps: First, evaluate inner products between neighbors to compute the neighborhood correlation matrix $C_{jk} = \eta_j^\top \eta_k$ and its matrix inverse $C^{-1}$. Second, compute the Lagrange multiplier $\lambda = \alpha/\beta$ that enforces the sum-to-one constraint, where $\alpha = 1 - \sum_{jk} C^{-1}_{jk} (x^\top \eta_k)$ and $\beta = \sum_{jk} C^{-1}_{jk}$. Third, compute the reconstruction weights: $w_j = \sum_k C^{-1}_{jk} (x^\top \eta_k + \lambda)$. In the final step of LLE, each high-dimensional observation $x_i$ is mapped to a low-dimensional vector $f_i$ representing global internal coordinates on the manifold. This is done by choosing $d < k$-dimensional coordinates $f_i$ to minimize the embedding cost function
$$\Phi(f) = \sum_i \Big\| f_i - \sum_j W_{ij} f_j \Big\|^2 .$$
This cost function, like the previous one, is based on locally linear reconstruction errors, but here we fix the weights $W_{ij}$ while optimizing the coordinates $f_i$.
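A minimal NumPy sketch of this three-step weight computation, assuming a single point x and its K neighbors stacked as rows of eta; the small ridge on C is a common practical addition, not part of the derivation above:

```python
import numpy as np

def lle_weights(x, eta, reg=1e-3):
    """Sum-to-one reconstruction weights for point x from neighbors eta (K x k)."""
    K = eta.shape[0]
    C = eta @ eta.T                      # neighborhood correlation matrix C_jk
    C += reg * np.trace(C) * np.eye(K)   # small ridge for numerical stability
    C_inv = np.linalg.inv(C)
    alpha = 1.0 - np.sum(C_inv @ (eta @ x))   # 1 - sum_jk C^-1_jk (x . eta_k)
    beta = C_inv.sum()                        # sum_jk C^-1_jk
    lam = alpha / beta                        # Lagrange multiplier (sum-to-one)
    w = C_inv @ (eta @ x + lam)               # w_j = sum_k C^-1_jk (x.eta_k + lam)
    return w                                  # w.sum() == 1 up to rounding

# Hypothetical data: one 3-dimensional point with 2 neighbors.
x = np.array([1.0, 0.0, 0.5])
eta = np.array([[0.9, 0.1, 0.4],
                [1.2, -0.1, 0.7]])
w = lle_weights(x, eta)
print(w, w.sum())
```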
Linearized Belief Propagation (LinBP)
Another widely used method for semi-supervised reasoning in networked data is Belief Propagation (BP) [18]. Similar to the other SSL methods, BP propagates information from a few labeled nodes throughout the network by iteratively passing messages between neighboring nodes. In addition, it can handle the case of multiple classes influencing each other. For graphs with loops, however, BP has well-known convergence problems (see [18] for a detailed discussion from a practitioner's point of view). While there is a lot of work on the convergence of BP [5,12], exact criteria for convergence are not known [13, Sec. 22], and practical use of BP is still non-trivial [18].
Motivated by this, Koutra et al. [10] linearized belief propagation for the case of two classes and proposed fast belief propagation (FaBP) as a method to propagate existing knowledge of homophily or heterophily to unlabeled data. This framework allows one to specify a homophily factor $h$ ($h > 0$ for homophily, $h < 0$ for heterophily) and to then use the algorithm, with exact convergence criteria, for binary classification (e.g., yes/no or male/female).
In [6], we recently solved the problem for the multi-class case and proposed Linearized Belief Propagation (LinBP) as an efficient linearization of belief propagation on pairwise Markov random fields. Our observations stem from the insight that the original update equations of belief propagation (written compactly in matrix notation using the symbol $\circ$ for the Hadamard product) can be approximated by linearized equations obtained by "centering" all variables around appropriate values. We call a vector or matrix $x$ "centered around $c$" if all its entries are close to $c$ and their average is exactly $c$. If a vector $x$ is centered around $c$, then the residual vector around $c$ is defined as $\hat{x} = [x_1 - c, x_2 - c, \ldots]$. Accordingly, we denote a matrix $\hat{X}$ as a residual matrix if each of its column and row vectors corresponds to a residual vector. Concretely, we center the $k$-dimensional message vectors $m$ around 1, and all the other $k$-dimensional vectors/matrices around $1/k$; the latter thus become probability vectors whose entries sum up to 1. As a consequence, $\hat{H} \in \mathbb{R}^{k \times k}$ is the residual coupling matrix that makes explicit the relative attraction and repulsion: the sign of $\hat{H}_{ji}$ tells us whether class $j$ attracts or repels class $i$ in a neighbor, and the magnitude of $\hat{H}_{ji}$ indicates the strength. This centering allows us to rewrite belief propagation in terms of the residuals. Importantly, the derived messages remain centered across iterations, and thus no normalization is necessary.
We have also given an iterative calculation of the final beliefs. Starting with an arbitrary initialization of $\hat{F}$ (e.g., all values zero), we repeatedly compute the right-hand side of the equation and update the values of $\hat{F}$ until the process converges. We have shown that the solution for LinBP can be calculated by applying the following iterative update equation:
$$\hat{F} \leftarrow \hat{X} + W \hat{F} \hat{H} \qquad \text{(LinBP)} \qquad (5)$$
Thus, the final beliefs of each node can be computed via elegant matrix operations and optimized solvers. In addition, this formalism allows a closed-form solution:

Proposition 1 (Closed-form [6]). The closed-form solution for Eq. 5 is given by:
$$\operatorname{vec}(\hat{F}) = \big(I - \hat{H} \otimes W\big)^{-1} \operatorname{vec}(\hat{X}) \qquad (6)$$

Furthermore, by analyzing Eq. 6, we derived exact (sufficient and necessary) conditions for the convergence of our update equations, based on the spectral radii ($\rho$) of the adjacency matrix $W$ and the affinity matrix $\hat{H}$:

Lemma 2 (Convergence [6]). A necessary and sufficient criterion for the convergence of LinBP is $\rho(\hat{H}) \cdot \rho(W) < 1$.

In practice, we use the matrix $\hat{H}$ as a scaled version of an original unscaled but centered matrix: $\hat{H} = h \cdot \hat{H}_0$.
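A minimal NumPy sketch of the LinBP update in Eq. 5, including the spectral-radius check from Lemma 2; the toy graph and residual matrices are hypothetical:

```python
import numpy as np

# Toy graph (symmetric adjacency, a path of 3 nodes) and k = 2 classes.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Residual explicit beliefs X_hat (centered around 0): node 0 leans class 0,
# node 2 leans class 1, node 1 is unlabeled.
X_hat = np.array([[ 0.1, -0.1],
                  [ 0.0,  0.0],
                  [-0.1,  0.1]])

# Residual coupling matrix H_hat = h * H0_hat (rows sum to 0):
# negative diagonal, positive off-diagonal means "opposites attract".
h = 0.2
H_hat = h * np.array([[-0.5,  0.5],
                      [ 0.5, -0.5]])

# Lemma 2: the iteration converges iff rho(H_hat) * rho(W) < 1.
rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
assert rho(H_hat) * rho(W) < 1, "update equations would diverge"

F_hat = np.zeros_like(X_hat)
for _ in range(100):                  # Eq. 5: F <- X + W F H
    F_hat = X_hat + W @ F_hat @ H_hat

print(F_hat.argmax(axis=1))           # most likely class per node
```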
GENERALIZING SSL FOR HETEROPHILY
We illustrate here how to generalize the update equations of the harmonic function (HF) method [27] to harmonic functions with heterophily (HFH).
Derivation from the regularization framework. In the following, we partially follow the exposition by Bengio et al. [1]. Assume $n$ nodes, of which $\ell$ are labeled, and WLOG let all labeled nodes have index $i \leq \ell$. The harmonic function method (HF) [27] then minimizes the loss function
$$E(f) = \sum_{i \leq \ell} (f_i - x_i)^2 + \frac{\mu}{2} \sum_{i,j} W_{ij} (f_i - f_j)^2 .$$
Let $G$ be the diagonal matrix with $G_{i,i} = 1$ if $i \leq \ell$ and $G_{i,i} = 0$ otherwise, and write $L = D - W$ for the graph Laplacian matrix. Thus, we can alternatively write:
$$E(f) = (f - x)^\top G (f - x) + \mu f^\top L f .$$
We get as derivative:
$$\frac{\partial E}{\partial f} = 2 G (f - x) + 2 \mu L f .$$
Setting the derivative to 0, we get:
$$f = (G + \mu L)^{-1} G x .$$

Modulating the existing update equations. Intuitively, the previous equations can be seen as a weighted average of the neighbors' current labels, where for labeled examples we overwrite the propagated values with the initial label.
In the following, we write slightly different update equations. Let us call $P := D^{-1} W$ the row-stochastic adjacency matrix and write the update equation as follows:
$$f \leftarrow P f .$$
However, we always use the explicit label if available, i.e. the labels of nodes with explicit beliefs are constrained to be equal to $x$. Another way to write this is by defining $P^x$ as the matrix that results from deleting (zeroing out) all columns of $P$ at the indices of explicit beliefs, and writing the update in terms of $P^x$. An alternative way to write this is by defining a column vector $e$ that has an entry 1 for all indices of explicit beliefs, and 0 otherwise. Then $\bar{e} = 1 - e$, and $P^x = (\mathbf{1}\,\bar{e}^\top) \circ P$.
Now assume we have $k$ different classes and we propagate each separately; the nodes, however, are the same across all classes. The multi-class version is then
$$F \leftarrow X + P^x F ,$$
and we get the closed form after convergence:
$$F = (I - P^x)^{-1} X .$$
Next, we can introduce "heterophily propagation," i.e. we modulate the vectors before propagating them:
$$F \leftarrow X + P^x F H .$$
We can now use the exact same mathematics we derived in [6] in order to determine the closed form as follows:
$$\operatorname{vec}(F) = (I - H \otimes P^x)^{-1} \operatorname{vec}(X) .$$
Notice that the above update equations will always converge as long as the spectral radius of $P^x$ is less than 1. This is the case if each connected component has at least one explicit belief.

Constrained LinBP (LinBP$^x$). We propose here a "constrained" variant of LinBP, where the labels of explicit nodes (sometimes denoted $f_l$) are constrained to the explicit beliefs ($x_l$). We use the same trick as before by defining
$$W^x = (\mathbf{1}\,\bar{e}^\top) \circ W ,$$
i.e., zeroing out the columns of $W$ at the indices of explicit beliefs. We call the resulting formulation LinBP$^x$, where the $x$ notation means the constrained (or "fixed") version of some propagation matrix.
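A minimal NumPy sketch of this constrained propagation with heterophily; the toy path graph, labels, and affinity matrix H are hypothetical:

```python
import numpy as np

# Path graph 0-1-2-3; nodes 0 and 3 carry explicit labels.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Explicit beliefs X for k = 2 classes.
X = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
e = np.array([1.0, 0, 0, 1.0])        # indicator of explicitly labeled nodes
e_bar = 1 - e

P = W / W.sum(axis=1, keepdims=True)  # row-stochastic P = D^{-1} W
P_x = P * e_bar[None, :]              # zero out columns of labeled nodes

# Heterophily affinity: class 0 and class 1 attract each other.
H = np.array([[0.2, 0.8],
              [0.8, 0.2]])

F = X.copy()
for _ in range(100):                  # F <- X + P^x F H
    F = X + P_x @ F @ H

print(np.round(F, 3))                 # unlabeled nodes lean toward the
                                      # opposite class of their neighbors
```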
Generalization. More generally, we propose here a general form of update equations

F ← X + M F H

together with the closed form

vec(F) = (I − H ⊗ M)^{−1} vec(X)

where the propagation matrix M can be chosen from existing semi-supervised learning methods.
LEARNING HETEROPHILY FROM DATA
In this section, we introduce our framework for learning heterophily from partially labeled data.
Baseline approach: MHE
We first introduce a straightforward baseline method that we call Myopic Heterophily Estimation (MHE). We call it "myopic" as it tries to infer the relative frequencies between different classes in the network by a simple frequency count, followed by a transformation into a symmetric, doubly stochastic matrix.
We are given a partially labeled n × k matrix X', with X'(i, c) = 1 if node i has label c. Recall that some nodes have no label. Then the n × k matrix N := W X' contains the number of neighbors of a certain class for each node, i.e., N(i, c) is the number of neighbors of i that are of class c. Next, the k × k matrix H̄ := X'^T N = X'^T W X' has as entry H̄(c, d) the number of nodes of class d that are neighbors of nodes of class c. This matrix is symmetric, but not stochastic. One can make it row-stochastic by dividing each row by its sum. Let's call this matrix H̃. This matrix, however, is not symmetric anymore.
What we are proposing as the baseline method is to use the symmetric, doubly stochastic matrix H* (i.e., it fulfills the conditions H*^T = H* and H* 1 = 1) that is closest to the matrix H̃. This problem can be solved efficiently with Algorithm 1 in [24], which finds a Frobenius-norm-optimal doubly stochastic approximation to a given matrix. Notice that in the case of an incompletely labeled graph, it suffices to consider only the subgraph induced by the labeled nodes.
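A minimal sketch of MHE follows. The counting step is exactly as described above; since Algorithm 1 of [24] is not reproduced here, we substitute a simple symmetrize-and-rescale heuristic for the Frobenius-optimal doubly stochastic projection, which should be treated as a stand-in only:

    import numpy as np

    def mhe(W, X_labeled, n_iter=50):
        # W: adjacency of the labeled subgraph; X_labeled: one-hot labels.
        H_bar = X_labeled.T @ W @ X_labeled     # H_bar(c, d): #edges between
                                                # classes c and d (symmetric)
        # Fails if some class has no labeled neighbors (a zero row sum),
        # which is exactly the failure mode observed in the experiments.
        H_rs = H_bar / H_bar.sum(axis=1, keepdims=True)  # row-stochastic
        H = (H_rs + H_rs.T) / 2                 # symmetrize
        for _ in range(n_iter):                 # heuristic rescaling toward
            H = H / H.sum(axis=1, keepdims=True)  # a doubly stochastic matrix
            H = (H + H.T) / 2
        return H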
Improved approach: LHE
We next give a simple energy minimization framework for LinBP. This formalization will allow us in a later step to solve the problem of deriving the optimal heterophily matrix.
Proposition 4 (LinBP loss function). The energy function minimized by LinBP is

E(F) = ||F − X − W F Ĥ||²_F (9)

that is, the squared Frobenius norm of the residual of the update equation Eq. 5.
The proof follows immediately from our proof of convergence of LinBP [6].
We would next like to compare the loss functions for LLE (Eq. 1), LinBP (Eq. 9), and HF (Fig. 3). Notice several similarities and differences: (i) LLE tries to learn W, whereas LinBP and HF try to learn F. (ii) By ignoring the explicit labels X (i.e., ignoring the fit term in HF and ignoring the vector X in LinBP), and replacing H with the identity matrix I, all three equations become the same. We thus propose the following two formalizations: 1. First, we propose to estimate the heterophily matrix H from partially labeled data via the following loss function:

L(H) = ||X' − W X' H||²_F

This is justified for three reasons: (i) The difference between X and F is not important for learning of the heterophily matrix. These are two different loss functions and we can focus only on one of them; (ii) We can use the mathematical machinery developed for LLE for our own problem setup; and (iii) This formalism explains heterophily for both LinBP and HF, as we show next. 2. Second, we propose to generalize the regularization term of HF (and analogously for LNP and LGC) as follows:

R(F) = Σ_{i,j} W_{i,j} ||Fi: H − Fj:||²

Here, we write Fi: as short notation for the i-th row vector of the label matrix F. This is justified for two reasons: (i) For H = I, we get back the original formulation of HF. (Footnote: HHF stands for Heterophily Harmonic Functions.) (ii) As long as each row Fi: is stochastic, the equations make sure that all updates and final labeling functions remain stochastic. Thus, we propose in this section to learn the heterophily matrix via the following simple convex optimization problem:

min_H ||X' − W X' H||²_F subject to H^T = H, H 1 = 1 (12)

Notice that we can transform the simple constrained optimization problem from Eq. 12 into an even simpler unconstrained optimization problem. A k × k-dimensional doubly stochastic matrix has k(k−1)/2 degrees of freedom, thus k(k+1)/2 constraints which we need to impose as boundary conditions. In other words, we can pose the optimization problem simply over the degrees of freedom of the matrix. For example, for k = 3, we have 3 degrees of freedom, e.g., the off-diagonal entries h12, h13, h23; the diagonal entries then follow from the row-sum constraints as h11 = 1 − h12 − h13, h22 = 1 − h12 − h23, and h33 = 1 − h13 − h23. Thus we have an unconstrained optimization problem over (h12, h13, h23). We found that the latter formulation is faster in practice.
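A matching sketch of the constrained problem of Eq. 12 (with the objective as reconstructed above), using the same SLSQP solver mentioned in the Scalability section; the parameterization of H by its upper triangle and the helper names are our own, and the nonnegativity bounds are an added assumption:

    import numpy as np
    from scipy.optimize import fmin_slsqp

    def lhe(W, X_labeled, k):
        # Minimize ||X' - W X' H||_F^2 over symmetric, row-stochastic
        # (hence doubly stochastic) matrices H, cf. Eq. 12.
        iu = np.triu_indices(k)            # free parameters: upper triangle

        def unpack(theta):
            H = np.zeros((k, k))
            H[iu] = theta
            return H + np.triu(H, 1).T     # symmetric by construction

        def loss(theta):
            R = X_labeled - W @ X_labeled @ unpack(theta)
            return np.sum(R * R)

        def row_sums(theta):               # equality constraints: H 1 = 1
            return unpack(theta).sum(axis=1) - 1.0

        theta0 = np.full(len(iu[0]), 1.0 / k)   # feasible start: H = [1/k]
        bounds = [(0.0, 1.0)] * len(theta0)     # assumed nonnegativity
        theta = fmin_slsqp(loss, theta0, f_eqcons=row_sums,
                           bounds=bounds, iprint=0)
        return unpack(theta)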
EXPERIMENTS
We evaluate our method with regard to two questions: (1) How well does our Linear Heterophily Estimation (LHE) work as compared to the baseline method? (2) How well do our combined methods scale with the size of the network?
Quality
We chose to evaluate our technique on carefully controlled synthetic data only, as this allows us to change the ground truth and evaluate the accuracy of our techniques as a result of systematic changes to problem parameters. We repeated the experiments with various synthetic data sets, and will focus on one particular choice of parameters to illustrate the main conclusions. Figure 5h summarizes our overall experimental setup.
Data generation: W, X, X'. We assume k = 3 classes and use the heterophily matrix from the introduction, a symmetric, doubly stochastic matrix H whose rows are permutations of (0.8, 0.1, 0.1). We create a graph with n = 100 nodes and assign each node one of the 3 classes (we assumed equal prior probabilities of 1/3 for a node being of a particular class). This leads to our n × k indicator ground truth labeling matrix X. We then add for each node exactly m/2 ∈ {1, 2, . . . , 10} undirected edges to other nodes (leading to an average degree of m). The random choice of a neighbor reflects the homophily matrix H as follows: We first randomly choose the type of neighbor to connect to (i.e., a node i of class c1 is connected to another node j of class c2 with probability H(c1, c2)), and then randomly sample a node of that class. This leads to the binary, symmetric adjacency matrix W. We then remove a random fraction p ∈ {0.1, . . . , 0.95} of the labels from X, leading to the partially labeled data X'. We vary the parameters and repeat this experiment.
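The generation procedure can be sketched as follows; H is any symmetric, doubly stochastic matrix whose rows are permutations of (0.8, 0.1, 0.1), and the helper assumes every class contains at least two nodes:

    import numpy as np

    rng = np.random.default_rng(0)

    def generate(H, n=100, k=3, m=4, p=0.5):
        # Ground-truth labels with equal class priors (1/k each).
        labels = rng.integers(0, k, size=n)
        X = np.eye(k)[labels]                  # n x k indicator matrix
        W = np.zeros((n, n))
        for i in range(n):
            for _ in range(m // 2):            # m/2 undirected edges per node
                c2 = rng.choice(k, p=H[labels[i]])  # neighbor class ~ H(c1, :)
                cand = np.flatnonzero(labels == c2)
                cand = cand[cand != i]         # assumes each class has >= 2 nodes
                j = rng.choice(cand)
                W[i, j] = W[j, i] = 1.0
        X_prime = X.copy()
        X_prime[rng.random(n) < p] = 0.0       # remove a fraction p of the labels
        return W, X, X_prime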
Homophily estimation: H2, H1. We use Eq. 12 to estimate H2 on the partially labeled graph with edges W and labels X'. We calculate the Frobenius norm ||H − H2||_F as our estimation error. We also estimate H1 on the completely labeled graph with X. This allows us to later analyze and interpret the relative contribution of (i) more accurate estimation of H and (ii) a larger fraction of labeled information, to the labeling accuracy.
Label propagation: F̂. We transform our estimated homophily matrices H2, H1, and the original matrix H into a matrix centered around 0: Ĥ = h (H − [1/k]_{k×k}), whereby we systematically vary the scaling factor h. We then use Eq. 5 to propagate the labeling information from X' throughout the graph.
Quality assessment. For each node in the hold-out set, we calculate the label with maximum belief in F̂ and then evaluate labeling accuracy as the percentage of correctly retrieved labels on the hold-out set only. Notice that an accuracy of 1/3 corresponds to random assignment of labels. Discussion. Overall, our method works well. For example, for m = 2 (i.e., average degree of a node is 2) and deleting 50% of the labels (p = 0.5), we have an average labeling accuracy of 0.76 for h = 0.01 (Fig. 5a). This is surprising, given that each column of the homophily matrix itself has relatively high entropy H([0.8, 0.1, 0.1]) ≈ 0.92 (as compared to the random vector H([1/3, 1/3, 1/3]) ≈ 1.58). Increasing the connectivity of the graph also increases the labeling accuracy (for each unlabeled node, more edges lead to more labeled neighbors). However, it does not increase the estimation accuracy for H: The average estimation error ||H − H2||_F increases with the fraction p of removed labels, but is independent of the number of edges per node m (Fig. 5e and Fig. 5f).
The impact of the choice of the scaling factor h for the label propagation homophily matrix Ĥ is interesting. Below a certain threshold, accuracy is constant (e.g., 0.84 for h = 0.01), but then increases slightly up to the point of divergence (e.g., 0.86 for h = 0.3 in Fig. 5g for m = 4, p = 0.5). This result is consistent across different choices of parameters. For example, in the case of m = 4, the average boundary between convergence and divergence (Eq. 7) is h_max ≈ 0.315 (the graph in Fig. 5g ends at h = 0.3). This suggests that higher choices of h (meaning the information is propagated further through the graph) allow us to partially compensate for missing labels. For example, an unlabeled node may have no labeled neighbor, but labeling information is still passed through those neighbor nodes. (All figures, except Fig. 5g, are drawn using h = 0.01.)
Comparison with baseline. We also compared our approach of estimating the matrix with LHE against MHE. Figure 6a shows the result for a graph with n = 100 nodes and m = 2 edges per node as a function of the hold-out fraction. For example, for f = 0.9 (i.e., only 10% of nodes are labeled), MHE stops working (we do not have enough neighbors of all classes to estimate a matrix), while LHE combined with LinBP still achieves 55% accuracy in determining the correct labels of the remaining 90 nodes.
Scalability
For the scalability experiments, we implemented our learning framework in Python 2.7. We use fmin_slsqp in scipy.optimize to minimize our function using Sequential Least SQuares Programming. Figure 6b shows the results for graphs with either m = 2 or m = 10 average edges per node. The time for LHE for 1M nodes and 10M edges is 31 sec, and LinBP takes around 7 sec for 10 iterations.
RELATED WORK
Our work is related to multiple existing works on semi-supervised learning. We previously discussed in detail the harmonic function method (HF) [27], linear neighborhood propagation (LNP) [22], the local and global consistency method (LGC) [25], locally linear embedding (LLE) [15], fast belief propagation (FaBP) [10], and Linearized Belief Propagation (LinBP) [6]. In [27] a symmetric weight matrix W is learned, but they assume simple homophily and do not account for more complex heterophily relations. In [22] a multi-class classification problem is considered; however, each of the classes is treated separately, and the classes never interact (no mixing/modularization) with one another. The works [7] and [21] integrate dissimilarity into the label learning process: one can indicate that two neighboring nodes should have different labels. Our proposed approach allows for much richer interaction between different classes. Also, [23] formulates a local learning regularizer (LL-Reg) that is learned to predict the label of each data point based on its neighbors and their labels.
To the best of our knowledge, the type of linear mixing described in this paper, by using a heterophily matrix H, has only been described before in our recent work on LinBP [6]. We are not aware of any linear framework for learning heterophily from existing data. We refer to [11], [26], [4], and [1] for excellent expositions and comparisons of various methods.
CONCLUSIONS
In this paper, we proposed a novel semi-supervised learning formulation that relies on two novel components: First, it allows using not only similarity and dissimilarity, but any type of mutual coupling strengths between different classes of nodes (we abstract this with the doubly stochastic heterophily matrix). Second, we showed how to estimate the heterophily matrix based on partially labeled data. The learned heterophily can subsequently be used to label the remaining data. We also showed how our framework generalizes a number of existing frameworks and naturally extends them from homophily to heterophily. Finally, we provide experiments that illustrate the effectiveness of our method, plus detailed insights about the implications of the problem parameters on our ability to correctly label data.
|
2014-12-09T12:58:45.000Z
|
2014-12-09T00:00:00.000
|
{
"year": 2014,
"sha1": "444d9ca56459dacfd2af4226ba43cfe58dd2b8e7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "444d9ca56459dacfd2af4226ba43cfe58dd2b8e7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
228830006
|
pes2o/s2orc
|
v3-fos-license
|
Identification and influence factors analysis of blade crack mistuning in hard-coated blisk based on modified component mode mistuning reduced-order model
Blade crack will cause severe mistuning of hard-coated blisks, which will lead to vibration localization. To identify crack mistuning and analyze influence factors, in this study, a mistuning identification method of blade cracks in hard-coated blisks is presented based on modified component mode mistuning reduced-order model, in which the hard-coated blisk with blade crack is decomposed into a substructure of tuned hard-coated blisk and a substructure of coated blade with cracks. Crack mistuning of each coated blade can be obtained by a single identification calculation. After verifying the rationality of this identification method, the influence factors of blade crack mistuning are analyzed. The influence factors include the crack location on the coated blade (cracks occurring only in coating or only in blade substrate or both in blade substrate and coating), crack length, crack position in the radial direction of the blisk, and modal data type of coated blisk used for mistuning identification calculation. The research results show that, with the increase of crack length, the mistuning of crack occurring only in the coating does not increase continuously but decreases firstly and then increases. For the first bending modes, the closer the blade crack is to the blade root, the larger the mistuning is. For the second bending modes, the blade crack located at the position of maximum modal displacement will produce large mistuning. For hard-coated blisk with blade crack, these crack mistuning variation rules are of great significance to the dynamic analysis and the determination of the crack location.
Introduction
As one of the core components of modern aeroengines, the blisk (integral bladed disk) is exposed to a harsh working environment of high temperature, high pressure, and high rotation speed for a long time. 1,2 Under the combined action of shocks and of centrifugal, aerodynamic, and thermal loads, the blisk will produce high-temperature creep, corrosion, high-frequency fatigue damage, and other failure forms. 3,4 In order to improve service life, hard coatings with vibration reduction, heat insulation, wear resistance, corrosion resistance, and other functions have been widely used on the blisk. [5][6][7][8] As an important form of high-frequency fatigue failure, blade cracks will lead to vibration localization of the hard-coated blisk. [9][10][11][12] The intensification of vibration will accelerate crack propagation, which is likely to cause catastrophic failure. [13][14][15] Therefore, for the cracked blisk or bladed disk, research on the dynamics and on crack location methods has received wide attention. The crack on the coated blade will cause severe mistuning of the hard-coated blisk, which is an important cause of vibration localization. However, because the hard coating material and the blisk substrate material are usually different, the hard-coated blisk is a kind of composite structure. For the hard-coated blade, cracks may occur only in the coating, or only in the blade substrate, or both in the coating and the blade substrate. Due to the different mechanical properties of the coating material and the blade substrate material, the mistuning caused by cracks occurring at different parts of the coated blade will also be different. Moreover, with the change of the crack length and the crack position in the radial direction of the blisk, this blade crack mistuning will also change accordingly. Therefore, compared with a blisk of a single material, the variation rules of blade crack mistuning on a hard-coated blisk are more complicated. In order to more accurately study the vibration localization and other dynamic behaviors of hard-coated blisks with blade cracks, it is necessary to investigate a blade crack mistuning identification method and the crack mistuning influence factors for the hard-coated blisk. At the same time, the obtained variation rules of blade crack mistuning can also provide reliable reference data for the determination of crack location and crack length.
In recent years, research on the blade crack mistuning of the bladed disk or blisk has been carried out. In some studies, the blade crack mistuning is indirectly investigated by analyzing the vibration localization of the bladed disk or blisk with blade crack. For example, Kuang and Huang 16-18 approximated blades on the disk as Euler-Bernoulli beams and used the Galerkin method to derive the equation of motion of the mistuned system with the blade crack for investigating the effects of blade crack on mode localization in rotating bladed disks. Fang et al. 19 developed analytical solutions for the free and forced vibrations of the bladed disk with a single crack and applied the U-transformation approach to study the effect of a crack on the vibratory response of a simplified aeroengine bladed disk model. Wang et al. 20 investigated the effects of multiple cracks on the forced response of centrifugal impellers using a finite element-based component mode synthesis (CMS) method. Zeng et al. 21 introduced typical fatigue cracks into the ANSYS finite element model (FEM) of a rotating compressor blade to discuss the effects of angular acceleration, amplitude of aerodynamic force, and crack parameters on the dynamic characteristics of a cracked compressor blade during the run-up process. Tien et al. 22 modified and combined the X-Xr method and the generalized bilinear amplitude approximation technique to present a technique for analyzing the dynamics of mistuned bladed disks with cracks, and the influence of mistuning patterns and cracks on the vibrational response of the bladed disk was discussed. Saito et al. 23 employed a hybrid-interface method of CMS to generate a reduced-order model (ROM), in which the crack surfaces were retained as physical degrees of freedom. Using this reduced-order modeling and analysis framework, the effects of the cracked blade on the system response of an example rotor were investigated for various mistuning levels and rotation speeds. Wang et al. 24 employed a hybrid-interface method of CMS to generate an ROM for the cracked impeller and investigated the effects of mistuning and cracks on the vibration features of centrifugal impellers. Shukla and Harsha 25 used the finite element method and experimental modal analysis technique to understand the vibration behavior of the blade for varying sizes of cracks. In addition, other researchers directly studied the crack mistuning of the bladed disk or blisk. For example, Hou 26 formulated an analytical model for a bladed disk with a through-crack at the root using lumped-mass beams and studied the mechanisms of cracking-induced mistuning in bladed disks. Jung et al. 27 employed a hybrid-interface method based on CMS to develop ROMs for the presence of mistuning in the tuned system with a cracked blade, and the effects of the cracked blade on the mistuned system were investigated. Zhang et al. 28 performed an experimental and numerical investigation of a cracked blade and another non-cracked blade, and the results of natural frequencies and mode shapes of both blades were compared to identify the main differences in modal behavior when a crack appears. Sun et al. 29 further investigated the cracking elements method (CEM) in simulating complex crack growth, regarding propagation of existing cracks as well as initiation of new cracks. Salmi et al. 30 used the extended finite element method to investigate the stress intensity factors of 3D cracks.
Through the study of the above literature, it is found that most studies reflect the mistuning effect generated by the blade crack based on the analysis of the vibration localization of the blisk or bladed disk. Although a few studies have been carried out that focus directly on blade crack mistuning, there is no method to identify or evaluate blade crack mistuning in each sector of the blisk or bladed disk. Moreover, the research objects of all the above studies are blisks or bladed disks of a single material, and research on blade crack mistuning identification and analysis of mistuning influence factors for hard-coated blisks has not been carried out yet.
Compared with a structure of a single material, the dynamic characteristics of a composite structure are more complex. In recent years, studies on the dynamics of composite structures have also been carried out. For example, Zhang et al. 31,32 utilized the third-order shear deformation plate theory, von Karman geometric nonlinearity, and Hamilton's principle to study the nonlinear dynamics of the rotating laminated composite cantilever rectangular plate and the rotating tapered cantilever cylindrical panel with graphene coating layers. Bai et al. 33 proposed a new methodology called extremum response surface method-based improved substructural component modal synthesis to improve the computational efficiency of vibration characteristics and reliability analysis for a detailed numerical model of the mistuned turbine bladed disk. Niu et al. 34 investigated the free vibrations of rotating pretwisted functionally graded (FG) composite cylindrical panels reinforced with graphene platelets (GPLs) based on the first-order shear deformation theory (FSDT) and the Chebyshev-Ritz method. Yao et al. 35-37 investigated nonlinear vibrations of the blade with varying rotating speed, nonlinear dynamics of the high-speed rotating plate, and nonlinear oscillations and resonant responses of a compressor blade based on the FSDT, the von Karman nonlinear geometric relationship, and Hamilton's principle. Wang and Zhang 38 employed the Bolotin method and the multiple time scale method to discuss the stability of a spinning blade having periodically time-varying coefficients for both the linear model and the geometrically nonlinear model. He 39 adopted a semi-inverse method to search for the variational formulation from the governing equations and obtained a few new variational principles for the 3D unsteady flow, which can avoid the Lagrange crisis. Then, He and Sun 40 used the semi-inverse method for the establishment of a variational formulation for the thin film equation and gave a detailed derivation process. Li et al. 41 established a damping model of a fiber-reinforced composite thin plate with consideration of amplitude-dependent properties using the Jones-Nelson nonlinear theory in conjunction with the classical laminated plate theory, the polynomial fitting method, and the strain energy method. Qin et al. 42 proposed a unified method to analyze free vibrations of laminated FG shallow shells reinforced by GPLs under arbitrary boundary conditions based on the FSDT, the artificial spring technique, and the Rayleigh-Ritz method. The above studies are of great reference significance to this paper.
In order to study the blade crack mistuning of the hard-coated blisk, a mistuning identification method for coated blade crack mistuning is urgently needed. Among the previous mistuning identification methods of the blisk or bladed disk, the mistuning identification methods based on finite element ROMs have been widely studied. One kind is the mistuning identification method based on the subset of nominal modes ROM of the integral structure. 43 This kind of method also includes the mistuning identification methods based on the fundamental model of mistuning ROM 44,45 and the modified modal domain analysis ROM. 46 The other kind is the mistuning identification method based on component-separated CMS ROMs. 47,48 The mistuning identification method based on component mode mistuning (CMM) ROM 49 also belongs to this category. Among the above mistuning identification methods, the mistuning identification method based on CMM ROM can identify various kinds of blade mistuning, such as small mistuning, large mistuning, and geometric mistuning. Therefore, this identification method has strong applicability and has been widely used. 50-53 Considering the advantages of the CMM mistuning identification method and its focus on blade mistuning, the identification concept of this method is also used for reference in this study. However, since this method has not been applied to the identification of blade crack mistuning in hard-coated blisks before, the classical identification method based on CMM ROM needs to be improved accordingly in this study.
In this paper, the simplified hard-coated blisk with blade cracks is taken as the research object. A mistuning identification method of blade cracks for hard-coated blisks is presented. Meanwhile, the influence factors of blade crack mistuning are analyzed based on this identification method. This study is organized as follows. In the "Theory" section, the improvement of the classical CMM ROM is introduced, in which the hard-coated blisk with blade crack is decomposed into a substructure of the tuned hard-coated blisk and a substructure of the coated blade with cracks. Then, the theoretical derivation of the identification algorithm is completed. In the "Numerical verification for effectiveness of identification method" section, a numerical case is used to verify the rationality of the identification method. In the "Influence factors analysis of crack mistuning" section, the influence factors of crack mistuning are studied based on the identification method proposed in the "Theory" section. The mistuning variation rules of three kinds of cracks (cracks occurring only in the coating, cracks occurring only in the blade substrate, or cracks occurring both in the hard coating and the blade substrate) are compared and analyzed. For each kind of crack, the mistuning variation rules with the variation of crack lengths and the mistuning variation rules with the variation of location in the radial direction of the blisk are all studied. Moreover, the variation rules of crack mistuning calculated by the first and second bending mode family data are also analyzed. Finally, some conclusions are listed in the "Conclusions" section.
Theory
In this study, the modal data of the hard-coated blisk with blade cracks are expected to be used to identify the blade crack mistuning. Therefore, the undamped free vibration state of the hard-coated blisk with blade cracks is taken as the research object. Firstly, an undamped free vibration model of the hard-coated blisk with blade cracks is established based on the finite element method. Then, the FEM is reduced according to the reduced-order principle of the CMM ROM. Finally, the calculation formula of blade crack mistuning is obtained according to the reduced-order FEM.
Based on the finite element method, the free vibration frequency-domain equation of the hard-coated blisk with blade cracks can be expressed as

(K − ω²M) U = 0 (1)

where M and K are, respectively, the mass and stiffness matrices of the coated blisk with blade cracks, ω is the free vibration frequency of the coated blisk with blade cracks, and U is the displacement vector of the free vibration response of the coated blisk with blade cracks.
Since the response of a bladed disk is much more sensitive to mistuning in blades than that in the disk, only blade mistuning is considered in CMM ROM, and the mistuned system is represented by the full tuned system and virtual mistuning components. For the hard-coated blisk with blade crack, CMM ROM is improved, that is, the hard-coated blisk with blade crack is decomposed into a substructure of tuned hard-coated blisk and a substructure of the coated blade with cracks. The substructure decomposition method is shown in Figure 1. Herein, blade cracks may occur only in the hard coating or only in the blade substrate or both in the hard coating and the blade substrate (as shown in Figure 2).
In this paper, the mistuning caused by cracks of the coated blade is studied. Because the mass of the coated blade is not significantly changed by the crack, mass mistuning is not considered in this study. According to the substructure decomposition in Figure 1, K can be written as

K = K^ca + K^cr (2)

where K^ca is the stiffness matrix of the tuned coated blisk, and K^cr is the mistuned stiffness matrix caused by cracks. Then, equation (1) becomes

(K^ca + K^cr − ω²M) U = 0 (3)

The response of the coated blisk with blade cracks in the tuned modal space can be expressed as

U = H W (4)

where H is the mass-normalized mode-shape matrix of the tuned coated blisk, and W is the modal coordinates vector. Substituting equation (4) into equation (3), it is converted into

H^T (K^ca + K^cr) H W = ω² H^T M H W (5)

where H^T M H = I, H^T K^ca H = X, and I is the identity matrix; X is the generalized stiffness matrix of the tuned coated blisk. So equation (5) can be further expressed as

(X + H^T K^cr H − ω²I) W = 0 (6)

Since only the mistuning caused by coated blade cracks is considered in this study, the mistuning is considered to be located only on the coated blades according to the CMM ROM. Then, H^T K^cr H can be expressed as

H^T K^cr H = Σ_{j=1}^{J} H_j^T K_j^cr H_j (7)

where j = 1, 2, …, J, J is the total number of coated blades, K_j^cr is the crack-mistuned stiffness matrix of the jth coated blade, and H_j is the mode-shape matrix of the jth blade of the tuned coated blisk and can be defined as

H_j = H^cb C_j (8)

where H^cb is the mass-normalized mode-shape matrix of the tuned cantilevered coated blade, and C_j is the modal participation factor matrix of the jth blade of the tuned coated blisk.
The following formula can be obtained from equation (8):

(H^cb)^T K^cb H_j = (H^cb)^T K^cb H^cb C_j = X^cb C_j (9)

where K^cb is the stiffness matrix of the tuned cantilevered coated blade. In equation (9), (H^cb)^T K^cb H^cb = X^cb, where X^cb is the generalized stiffness matrix of the tuned cantilevered coated blade.
Then, C_j can be computed by the following formula:

C_j = (X^cb)^{-1} (H^cb)^T K^cb H_j (10)

Therefore, H^T K^cr H can be written as

H^T K^cr H = Σ_{j=1}^{J} C_j^T (H^cb)^T K_j^cr H^cb C_j (11)

According to the further reduction mode of the CMM ROM, the off-diagonal terms representing the coupling between cantilevered-blade modes can be neglected. In equation (11), (H^cb)^T K_j^cr H^cb can then be written as

(H^cb)^T K_j^cr H^cb ≈ diag(k_{j:1}, k_{j:2}, …, k_{j:N}) (12)

where k_{j:n} is the frequency eigenvalue deviation relative to the tuned cantilevered coated blade caused by the stiffness mistuning of the jth coated blade with crack, N is the number of retained cantilevered-blade normal mode orders, and n = 1, 2, …, N.
In this study, k j:n is used as the mistuning identification parameter to quantify the stiffness mistuning produced by the crack.
Then, H^T K^cr H can be further expressed as

H^T K^cr H = Σ_{j=1}^{J} Σ_{n=1}^{N} k_{j:n} C_{j:n}^T C_{j:n} (13)

Therefore, equation (6) can be written as

(X + Σ_{j=1}^{J} Σ_{n=1}^{N} k_{j:n} C_{j:n}^T C_{j:n} − ω²I) W = 0 (14)

Further, the parameters k_{j:n} can be computed by rearranging equation (14) into the linear system

(ω²I − X) W = Σ_{j=1}^{J} Σ_{n=1}^{N} k_{j:n} C_{j:n}^T (C_{j:n} W) (15)

where C_{j:n} is the nth dominating modal participation factor for the jth blade of the tuned coated blisk, that is, it is the nth row of C_j. Since equation (15) is derived from the free vibration equation, the modal data of the coated blisk with blade cracks can be used to calculate the identification parameters k_{j:n}.
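To summarize how equations (8) to (15) fit together computationally, the following NumPy sketch assembles the least-squares system of equation (15) from (hypothetical) tuned-system and cantilevered-blade modal data. This is an illustrative reconstruction of the identification procedure, not the authors' implementation; all array names are placeholders:

    import numpy as np

    def identify_crack_mistuning(X_t, H_blades, H_cb, K_cb, X_cb, omega2, W_meas):
        # X_t      : (M, M) generalized stiffness of the tuned blisk (X in Eq. (6))
        # H_blades : list of J arrays, blade-j partition of the tuned-blisk modes
        # H_cb     : (b, N) cantilevered-blade mode shapes
        # K_cb     : (b, b) cantilevered-blade stiffness matrix
        # X_cb     : (N, N) generalized stiffness of the cantilevered blade
        # omega2   : (R,) measured eigenvalues of the cracked coated blisk
        # W_meas   : (M, R) corresponding modal coordinate vectors, one per mode
        J, N = len(H_blades), X_cb.shape[0]
        # Modal participation factors, Eq. (10)
        C = [np.linalg.solve(X_cb, H_cb.T @ K_cb @ Hj) for Hj in H_blades]
        A_rows, b_rows = [], []
        for r in range(len(omega2)):           # one block of Eq. (15) per mode
            w = W_meas[:, r]
            b_rows.append(omega2[r] * w - X_t @ w)        # (omega^2 I - X) W
            cols = [C[j][n] * (C[j][n] @ w)               # C_{j:n}^T (C_{j:n} W)
                    for j in range(J) for n in range(N)]
            A_rows.append(np.column_stack(cols))
        A, b = np.vstack(A_rows), np.concatenate(b_rows)
        k, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares k_{j:n}
        return k.reshape(J, N)

Stacking one block per measured mode and solving in the least-squares sense yields all blade mistuning parameters k_{j:n} from a single identification calculation, which is the property emphasized in the abstract.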
Numerical verification for effectiveness of identification method
In this section, the blade crack mistuning of two numerical cases is identified by the identification algorithm proposed in the "Theory" section. The two numerical cases are two FEMs of hard-coated blisks with blade cracks established by ANSYS software. Modal data are obtained by modal analysis of the two FEMs. The blade crack mistuning is calculated through the modal data of the two numerical hard-coated blisks with blade cracks. In the two numerical cases, except for the difference in crack length, the geometry structures of coatings, blade, and disk on each sector are the same, and the distances between the cracks on each blade and the center of the coated blisk are the same. Therefore, the change of crack mistuning is only related to the crack length. Furthermore, the effectiveness of the identification method is verified by comparing the variation trend of the mistuning identification results and that of the crack lengths; that is, the identification method is effective if the two change trends have a good consistency.
Crack mistuning identification
In order to verify the effectiveness of the identification method, two numerical hard-coated blisks are used. The numerical hard-coated blisks all have eight blades. The upper and lower surfaces of the blade are all coated with hard coating. The numerical hard-coated blisks are shown in Figure 3, where the coatings and blade substrate are penetrated by blade cracks in the Z-direction. The distances between the cracks on each blade and the center of the hard-coated blisk (L1) are all 107.5 mm. The widths of the cracks in the radial direction of the hard-coated blisk (L2) are all 0.01 mm. The lengths of the cracks on each blade in the circumferential direction of the hard-coated blisk (L3) are different and listed in Table 1. The crack length distribution of the numerical coated blisk 1 is random, and the crack lengths of the numerical coated blisk 2 increase linearly with the change of the blade number. The purpose of formulating these two crack distribution schemes is to verify the rationality of the identification method more fully. The material property parameters of the blisk substrate and hard coating are listed in Table 2. The geometry dimensions of the corresponding tuned hard-coated blisk are shown in Table 3. Herein, the numerical hard-coated blisks are created by ANSYS software. The upper and lower surfaces of the disk lug boss are completely constrained, and modal analyses are carried out. Then, the natural frequencies and modal displacements of the midpoints at the tips of the coated blades are extracted. The constraints of the numerical hard-coated blisks and the locations of the vibration pickup points are shown in Figure 4.
In order to complete the mistuning identification, the modal data of the tuned structures are needed. The tuned structures include a tuned cantilever coated blade and a tuned hard-coated blisk. According to the geometries of Table 3, FEMs of the tuned structures are created and shown in Figure 5. The end face of the tuned cantilever coated blade is completely restrained, and the upper and lower surfaces of the lug boss of the tuned hard-coated blisk are also completely restrained. Then, the modal analyses of the two tuned structures are performed. The natural frequencies and modal shapes of the tuned cantilever coated blade and the tuned hard-coated blisk are extracted.
Finally, according to the identification algorithm proposed in "Theory" section, the crack mistuning of the coated blades of the numerical hard-coated blisks is calculated. Herein, the modal data of the 1-4 modal families of the cracked hard-coated blisk are selected for identification calculation. The 1-4 modes include the first bending mode family, the first swing mode family, the second bending mode family, and the first torsion mode family. Their natural frequencies are shown in Figure 6, and the mode shapes in each mode family are shown in Figure 7.
The identification results of crack mistuning of numerical hard-coated blisk 1 and numerical hard-coated blisk 2 (k_{j:n}) are, respectively, listed in Tables 4 and 5. It is worth noting that the crack mistuning results are different according to the modal data of different mode families of the cracked hard-coated blisk. In order to more clearly reflect the crack mistuning state of each coated blade, the frequency eigenvalue deviation rates are calculated. The calculation formula is as follows:

k̄_{j:n} = k_{j:n} / (ω_n^cb)² × 100% (16)

where k̄_{j:n} is the frequency eigenvalue deviation rate of the jth coated blade with crack, and ω_n^cb is the nth natural frequency of the tuned cantilevered coated blade. When calculating the frequency deviation rate, corresponding to the frequency eigenvalue deviation k_{j:n} calculated by the modal data of the 1-4 modal families, the 1-4 order natural frequencies of the cantilevered coated blade are used correspondingly. The frequency eigenvalue deviation rates are listed in Tables 6 and 7.
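As a quick illustration of equation (16) with made-up numbers (not taken from Tables 4 to 7): an eigenvalue deviation of k_{j:n} = −2 × 10⁴ (rad/s)² against a cantilevered-blade natural frequency of 1,000 rad/s gives a deviation rate of −2 × 10⁴ / 1,000² × 100% = −2%.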
Discussion on identification results
In order to verify the correctness of the mistuning identification results, the frequency eigenvalue deviation rates (listed in Tables 6 and 7) are compared with the lengths of the cracks in the circumferential direction of the hard-coated blisk (L3). Comparison results of numerical hard-coated blisk 1 and numerical hard-coated blisk 2 are shown in Figures 8 and 9. For No.1-No.8 blades of the numerical hard-coated blisk 1, the variation trend of frequency eigenvalue deviation rates calculated by the modal data of the first and second bending mode families has a good consistency with that of the crack lengths (as shown in Figure 8(a) and (c)). Moreover, the frequency eigenvalue deviation rates of coated blades with the same crack length are approximately equal. However, the variation trend of frequency eigenvalue deviation rates calculated by the modal data of the first swing mode family and first torsion mode family is in poor agreement with that of the crack lengths (as shown in Figure 8(b) and (d)). For the numerical hard-coated blisk 2, the same phenomenon is shown. Figure 9(a) and (c) shows that the variation trend of the frequency eigenvalue deviation rates calculated by the modal data of the first bending mode family and the second bending mode family is in good agreement with that of the crack lengths. Figure 9(b) and (d) shows that the variation trend of frequency eigenvalue deviation rates calculated by the modal data of the first swing mode family and first torsion mode family is not consistent with that of the crack lengths. Through the above comparison results, it is shown that the identification method is not sensitive to the modal data of the first swing mode family and first torsion mode family of the cracked hard-coated blisk; that is, the crack mistuning of the coated blade can be effectively identified by the modal data of the first bending mode family and the second bending mode family.
In addition, compared with the irregular distribution of the crack lengths of numerical hard-coated blisk 1, the crack lengths of the numerical hard-coated blisk 2 increase linearly from the No.1 blade to the No.8 blade. However, through Figure 9(a) and (c), it can be seen that the crack mistuning does not increase linearly from the No.1 blade to the No.8 blade. Therefore, the relationship between crack mistuning and crack length is not linear, which also shows the complexity of the mistuning caused by the crack.
Through the above discussion, it is shown that although this identification method can simultaneously obtain the crack mistuning of each blade for all four modes, the identification method only has a good identification effect for the first and second bending modes.
Mistuning identification of cracks with different lengths at different locations
In this section, the relationship between crack length and mistuning is further studied. At the same time, the influence of crack location on the mistuning is also studied. There are two meanings of "crack location." One is the location of the crack in the radial direction of the hard-coated blisk, that is, the location of the crack reflected by L1 in Figure 3. The other is the place where the crack exists (as shown in Figure 2), that is, the cracks occurring only in the hard coating, only in the blade substrate, and both in the hard coating and the blade substrate. In order to obtain the relationship between the length and location of the cracks and the mistuning, the mistuning of coated blisks with three types of cracks is identified. For each type of crack shown in Figure 2, the lengths of the cracks in the circumferential direction of the coated blisk are taken as 16 values, i.e., L3 = 1, 2, 3, …, 16 mm. The location dimensions of the cracks in the radial direction of the coated blisk are also taken as 16 values, i.e., L1 = 70, 75, 80, …, 145 mm. In order to reduce the number of FEMs of the cracked coated blisk, two FEMs are established for each type of cracked coated blisk, which are shown in Figure 10. In the two FEMs, the crack lengths L3 on each blade are equal. In Figure 10(a), the crack location dimensions are L1 = 70, 80, 90, …, 140 mm. In Figure 10(b), the crack location dimensions are L1 = 75, 85, 95, …, 145 mm. Through the modal data of the two FEMs, the mistuning of the cracks with uniform length at 16 locations on the coated blades can be calculated. Then, according to L3 = 1, 2, 3, …, 16 mm, the crack lengths of the two FEMs are changed respectively, and 32 FEMs of cracked coated blisks are established. Through the modal data of these 32 FEMs, the mistuning of cracks of 16 lengths at 16 locations on the coated blade can be obtained. In the "Numerical verification for effectiveness of identification method" section, it is found that the crack mistuning can be effectively identified by the modal data of the first and second bending mode families of the cracked coated blisk. Therefore, the mistuning of the three types of cracks is all identified by the modal data of the first and second bending modal families. Then, the frequency eigenvalue deviation rates are calculated by equation (16) and shown in Figures 11 to 13. Note that since most of the deviation rates are negative, in order to show the variation trend of the deviation rates more clearly, the z-axis in the coordinate system is displayed in the opposite direction in Figures 11 to 13.
Discussion on mistuning variation rules of three types of cracks
Through the comparative analysis of Figures 11 to 13, it is found that there is a great difference in the stiffness mistuning caused by the three types of cracks. For cracks of equal length at the same location, among the three types of crack mistuning, the least mistuning is generated by the crack occurring only in the coating, the mistuning generated by the crack occurring only in the blade substrate is slightly larger, and the largest mistuning is generated by the crack occurring both in the coating and the blade substrate. Moreover, it is important to note that the mistuning of the cracks occurring both in the hard coating and the blade substrate is not equal to the sum of the other two kinds of crack mistuning. Compared with the mistuning of the other two kinds of cracks, the mistuning of the cracks occurring both in the hard coating and the blade substrate increases greatly. This shows that cracks in the coating and the substrate at the same location will seriously reduce the stiffness of the coated blade. Moreover, the phenomenon that the mistuning of cracks occurring only in the blade substrate is much smaller than that of the cracks occurring both in the hard coating and the blade substrate indicates that the hard coating can effectively control the mistuning of the coated blade even if the cracks only exist in the blade substrate. In addition, the mistuning variation trends of the three kinds of cracks with the change of crack locations and lengths are different. For the stiffness mistuning calculated by the first bending mode family, the mistuning variation trend of the crack occurring only in the coating is different from that of the other two kinds of cracks. For the stiffness mistuning calculated by the second bending mode family, the mistuning variation trend of the cracks occurring both in the coating and the blade substrate is different from that of the other two kinds of cracks. Specific variation rules will be discussed in the "Discussion on mistuning variation rules of cracks with different lengths" section and the "Discussion on mistuning variation rules of cracks at different locations in the radial direction of the hard-coated blisk" section.
Discussion on mistuning variation rules of cracks with different lengths
In order to further analyze the variation rule of mistuning with crack length, variation trends of crack mistuning varying with crack lengths at 16 locations are compared. The mistuning calculated by the first bending mode data is shown in Figure 14, and the mistuning calculated by the second bending mode data is shown in Figure 15. In these figures, the curve fittings of the discrete points are carried out by using the sixth-degree polynomial to obtain the trend lines of the crack mistuning variation. Figure 14(a) shows mistuning variation trends of cracks occurring only in the hard coating calculated by the first bending mode data. With the increase of crack length, the magnitudes of these negative mistuning all decrease first and then increase, and the minimum value appears when the crack length is 9 mm. It is worth noting that the mistuning at 145 mm becomes positive mistuning (0.0042%) when the crack length is 9 mm, while the minimum negative mistuning (0.00095%) occurs when the crack length is 7 mm. The mistuning variation trends of cracks occurring only in the hard coating are not consistent with that of our preliminary assumption. Figure 14(b) shows mistuning variation trends of cracks occurring only in the blade substrate calculated by the first bending mode data. Except for a small amount of positive crack mistuning at the location of 145 mm, all the mistuning at other locations is negative. Two kinds of variation trends are presented here. One is the mistuning variation trend of the crack locations ranging from 110 mm to 145 mm. The negative mistuning first decreases and then increases with the increase of the crack length, and the minimum values occur when the crack length is 9 mm. These variation trends are very similar to that of cracks occurring only in the hard coating. The other is the mistuning variation trend of the crack locations ranging from 70 mm to 105 mm. The magnitude of negative mistuning keeps increasing with the increase of the crack length. Meanwhile, it is worth noting that the increase of crack length is linear, but the increase of crack mistuning is not linear. Figure 14(c) shows mistuning variation trends of cracks occurring both in the blade substrate and coating calculated by the first bending mode data. The positive crack mistuning occurred at six locations ranging from 120 mm to 145 mm. There are also two mistuning variation trends here. One is the crack mistuning variation trend at the six locations ranging from 120 mm to 145 mm. That is, with the increase of the crack length, the magnitude of negative crack mistuning decreases and turns into positive mistuning, and then the magnitude of positive crack mistuning increases. The other is the crack mistuning variation trend at the 10 locations ranging from 70 mm to 115 mm. With the linear increase of the crack length, the crack mistuning also shows a nonlinear increasing trend. Figure 15(a) shows mistuning variation trends of cracks occurring only in the hard coating calculated by the second bending mode data. It is shown that the crack mistuning at locations of 145 mm, 140 mm, 135 mm, and 70 mm is all positive mistuning, while the crack mistuning at other locations is all negative mistuning. Although the variation trend of the crack mistuning trend line at the 16 locations is basically the same, the magnitude of mistuning shows two kinds of variation trends, that is, with the increase of the crack length, the magnitude of positive mistuning first increases and then decreases, while the magnitude of negative mistuning first decreases and then increases.
Figure 15(b) shows mistuning variation trends of cracks occurring only in blade substrate calculated by the second bending mode data. The crack mistuning at locations of 145 mm, 140 mm, and 135 mm is all positive mistuning, and their variation rules are not obvious. There is both positive and negative mistuning at the location of 70 mm. With the increase of the crack length, the magnitude of positive mistuning decreases firstly and turns into negative mistuning, and then the magnitude of negative mistuning increases. In addition, the crack mistuning at the other 12 locations is all negative mistuning, and the same variation trend is shown, that is, the magnitude of negative mistuning keeps increasing with the increase of the crack length. Figure 15(c) shows mistuning variation trends of cracks occurring both in blade substrate and coating calculated by the second bending mode data. The crack mistuning at locations of 145 mm and 140 mm is all positive mistuning, and the magnitude of positive mistuning keeps increasing with the increase of crack length. In addition, the crack mistuning at the other 14 locations is negative. With the linear increase of the crack length, the magnitude of negative mistuning presents a nonlinear increasing trend.
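The sixth-degree polynomial trend lines used in Figures 14 to 17 can be reproduced with a few lines of NumPy; the rates array below is a random placeholder standing in for one location's series of identified deviation rates:

    import numpy as np

    L3 = np.arange(1.0, 17.0)                   # crack lengths, 1..16 mm
    rates = np.random.default_rng(1).normal(size=16)  # placeholder for one
                                                      # location's deviation rates
    coeffs = np.polyfit(L3, rates, deg=6)       # sixth-degree polynomial fit
    trend = np.polyval(coeffs, L3)              # trend line at the data points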
Through the above analysis, the following variation rules of crack mistuning with crack length are found: 1. For the first bending and second bending mode families, the mistuning magnitude of the crack occurring only in the hard coating does not increase with the increase of the crack length. Therefore, the crack mistuning on the hard-coated blade does not always increase with the increase of crack length. 2. In spite of this, the mistuning of the cracks occurring only in the hard coating at all locations varies according to the same regular trend. 3. With the variation of crack length, the variation trends of the three kinds of crack mistuning are all regular, and these variation rules can provide a reference for the determination of the crack length of the coated blade.
Discussion on mistuning variation rules of cracks at different locations in the radial direction of the hard-coated blisk
In order to further analyze the variation rule of mistuning with the crack location in the radial direction of the hard-coated blisk, mistuning trends of cracks of 16 kinds of lengths varying with crack locations are compared. The mistuning calculated by the first bending mode data is shown in Figure 16, and the mistuning calculated by the second bending mode data is shown in Figure 17. In the figures, the curve fittings of the discrete points are also carried out by using the sixth-degree polynomial to obtain the trend lines of the crack mistuning variation. Figure 16(a) shows mistuning variation trends of cracks occurring only in the hard coating calculated by the first bending mode data. As the crack moves from the blade root to the blade tip, the crack mistuning of the 16 kinds of lengths shows the same variation trend, that is, the magnitude of the negative mistuning of the cracks increases slightly and then decreases continuously. Figure 16(b) shows mistuning variation trends of cracks occurring only in the blade substrate calculated by the first bending mode data. As the cracks move from the blade root to the blade tip, the crack mistuning of the 16 kinds of lengths also shows the same variation trend, that is, the magnitude of the negative mistuning of the cracks decreases continuously. Figure 16(c) shows mistuning variation trends of cracks occurring both in the blade substrate and coating calculated by the first bending mode data. Herein, the variation trends of mistuning are relatively complex. As cracks move from the blade root to the blade tip, the negative mistuning shows a decreasing trend firstly. Then, at the location of 120 mm, except that the mistuning of cracks with lengths of 1 mm and 2 mm is still negative and continues to decrease, the mistuning of other crack lengths begins to turn into positive mistuning and presents an increasing trend. Figure 17(a) shows mistuning variation trends of cracks occurring only in the hard coating calculated by the second bending mode data. It can be observed from this figure that the mistuning of cracks of the 16 kinds of lengths has the same variation trend. The crack at the root of the blade produces positive mistuning. With the change of crack location, the positive mistuning decreases and turns into negative mistuning. Then, the negative mistuning increases with the change of crack location. After the negative mistuning reaches its maximum value at the location of 105 mm, it starts to decrease and turns into positive mistuning. Subsequently, positive mistuning continues to increase. Figure 17(b) shows mistuning variation trends of cracks occurring only in the blade substrate calculated by the second bending mode data. As shown in Figure 17(b), the mistuning variation trends of the cracks of the 16 kinds of lengths are roughly the same, except that the mistuning variation trends of the cracks near the blade root are slightly different. Three kinds of mistuning variation trends are shown here. One is the crack mistuning variation trend of the six lengths ranging from 1 mm to 6 mm. The mistuning of these cracks is transformed from positive mistuning to negative mistuning, and then the negative mistuning begins to increase. The second is the mistuning variation trend of the five crack lengths ranging from 7 mm to 11 mm; the crack mistuning is negative mistuning initially and shows an increasing trend. The third is the crack mistuning variation trend of the five crack lengths ranging from 12 mm to 16 mm.
Initially, the mistuning of these cracks is also negative mistuning, and it shows a trend of decreasing first and then increasing. After the location of 80 mm, the mistuning of cracks of the 16 kinds of lengths has the same change trend, that is, as the crack moves towards the blade tip, the magnitude of negative mistuning continues to increase, and the maximum value appears at the location of 105 mm; then the negative mistuning starts to decrease and turns into positive mistuning. After that, the positive mistuning continues to increase. Figure 17(c) shows mistuning variation trends of cracks occurring both in the blade substrate and coating calculated by the second bending mode data. The mistuning of cracks of the 16 kinds of lengths varies with the same variation rule. The mistuning generated by the crack at the blade root is negative mistuning. As the crack moves towards the blade tip, the magnitude of negative mistuning decreases firstly and then increases. After that, it decreases again and turns into positive mistuning. However, it is important to note that the largest negative mistuning occurs at different locations. For cracks of the 12 lengths ranging from 1 mm to 12 mm, the largest negative mistuning appears at the location of 110 mm, and for cracks of the 4 lengths ranging from 13 mm to 16 mm, the largest negative mistuning appears at the location of 70 mm (blade root). Through the above analysis, it is found that crack mistuning is very sensitive to the change of crack location on the blade, and the following variation rules of crack mistuning with crack location are obtained: 1. For the first bending mode family, the closer the three types of cracks are to the blade root, the greater the mistuning of the cracks will be (as shown in Figure 18(a)). Therefore, for the first bending mode of the hard-coated blisk, enough attention should be paid to the crack at the blade root. 2. For the second bending mode family, the mistuning of cracks occurring only in the blade substrate or only in the coating is maximized when the crack is located in the middle of the blade (as shown in Figure 18(b)). For cracks occurring both in the blade substrate and coating, the mistuning of cracks of the 12 lengths ranging from 1 mm to 12 mm is also maximized when the crack is located in the middle of the blade (as shown in Figure 18(b)). However, for cracks of the four lengths ranging from 13 mm to 16 mm, although the maximum mistuning occurs when the crack is located at the blade root, a relatively large mistuning also occurs when the crack is located in the middle of the blade. Therefore, for the second bending mode family, the cracks in the middle of the blade should be paid more attention. At the same time, enough attention should be paid to the relatively long cracks at the blade root. 3. According to the comparison of Figure 18(b), for the second bending mode family, the maximum modal displacement of the coated blade occurs in the middle of the blade, which is also the location where the maximum or relatively large crack mistuning occurs. Therefore, under the second bending mode, it is surmised that the crack located at the maximum modal displacement of the blade will produce a relatively large crack mistuning. 4. With the variation of crack location, the variation trends of the three kinds of crack mistuning are all regular, which can provide a reference for the determination of the crack location of the coated blade.
Conclusions
As a composite structure, the blade crack mistuning state of the hard-coated blisk is very complex. In this paper, the classical mistuning identification method based on CMM ROM is improved to study the blade crack mistuning of the hard-coated blisk, and the following conclusions are obtained. 1. The effectiveness of the mistuning identification method is verified by numerical cases. It is found that the method proposed in this paper can simultaneously obtain the crack mistuning of each blade. However, it should be noted that this identification method can effectively identify the crack mistuning of the first and second bending modes, but it is not applicable to the first swing mode and first torsion mode. 2. By comparing the mistuning of three kinds of cracks (cracks occurring only in the hard coating, cracks occurring only in the blade substrate, and cracks occurring both in the hard coating and the blade substrate), it is found that the mistuning of the cracks occurring both in the hard coating and the blade substrate is much greater than that of the cracks occurring only in the blade substrate, which indicates that the hard coating can effectively control the mistuning degree of the whole hard-coated blade when the cracks occur only in the blade substrate. 3. For the first bending and second bending mode families, only the mistuning of the crack occurring only in the coating decreased first and then increased with the linear increase of crack length. For the other two kinds of cracks, the mistuning of the crack at most locations tends to increase nonlinearly with the linear increase of crack length. 4. For the first bending mode family, the closer the three types of cracks are to the blade root, the greater the mistuning of the cracks will be. Therefore, for the first bending mode of the hard-coated blisk, enough attention should be paid to the crack at the blade root. 5. For the second bending mode family, the mistuning of cracks occurring only in the blade substrate or only in the coating is maximized when the crack is located in the middle of the blade. For cracks occurring both in the blade substrate and coating, the mistuning of cracks of most kinds of lengths is also maximized when the crack is located in the middle of the blade. However, for cracks of the other few kinds of lengths, although the maximum mistuning occurs when the crack is located at the blade root, a relatively large mistuning also occurs when the crack is located in the middle of the blade. Therefore, for the second bending mode family, the cracks in the middle of the blade should be paid more attention. At the same time, enough attention should be paid to the relatively long cracks at the blade root. 6. For the three kinds of cracks, with the change of crack length and location in the radial direction of the hard-coated blisk, the variation of crack mistuning follows clear rules, which can provide effective reference data for the determination of the blade crack length and location of the hard-coated blisk.
Declaration of Competing Interest
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural
Multisubject Task-Related fMRI Data Processing via a Two-Stage Generalized Canonical Correlation Analysis
Functional magnetic resonance imaging (fMRI) is one of the most popular methods for studying the human brain. Task-related fMRI data processing aims to determine which brain areas are activated when a specific task is performed and is usually based on the Blood Oxygen Level Dependent (BOLD) signal. The background BOLD signal also reflects systematic fluctuations in regional brain activity which are attributed to the existence of resting-state brain networks. We propose a new fMRI data generating model which takes into consideration the existence of common task-related and resting-state components. We first estimate the common task-related temporal component, via two successive stages of generalized canonical correlation analysis and, then, we estimate the common task-related spatial component, leading to a task-related activation map. The experimental tests of our method with synthetic data reveal that we are able to obtain very accurate temporal and spatial estimates even at very low Signal to Noise Ratio (SNR), which is usually the case in fMRI data processing. The tests with real-world fMRI data show significant advantages over standard procedures based on General Linear Models (GLMs).
Paris A. Karakasis, Student Member, IEEE, Athanasios P. Liavas, Member, IEEE, Nicholas D. Sidiropoulos, Fellow, IEEE, Panagiotis G. Simos, and Efrosini Papadaki
I. INTRODUCTION
Functional magnetic resonance imaging (fMRI) is a popular approach for studying the human brain. It provides a non-invasive way to measure brain activity, by detecting local changes of blood oxygen level dependent (BOLD) signal in the brain, over time. The aim of task-related fMRI data analysis is to determine which brain areas are activated when a specific task is performed and is usually based on the BOLD signal. Hence, we can create brain activation maps related to specific tasks, which is very useful for understanding the functioning of the human brain.
In task-related studies [1], [2], the presence of spontaneous modulation of the BOLD signal has been noted; this modulation cannot be attributed to the experimental excitation or any other explicit input or output and is usually treated as "noise." However, in addition to physiological and magnetic noise, the background BOLD signal reflects systematic fluctuations in regional brain activity. In particular, BOLD fluctuations are correlated between functionally related brain regions, forming resting-state brain networks. This baseline activity continues during task performance, showing a neuro-anatomical distribution similar to that observed at rest [3]-[6]. In [3], it has been suggested that measured neuronal responses consist of an approximately linear superposition of task-evoked neuronal activity and ongoing spontaneous activity.
Various unsupervised multivariate statistical methods have been used in fMRI data processing. Their aim is to provide information about functional brain connectivity by describing brain responses in terms of spatial and temporal activation patterns, under no prior assumptions about their form [7]. Common multivariate methods used in fMRI data processing are Principal Component Analysis (PCA) [8], [9], Independent Component Analysis (ICA) [10]-[13], and tensor-factorization-based models [14]-[21].
Canonical correlation analysis (CCA) is a well known statistical method [22] which can be considered as a generalization of the PCA. It takes as input two random vectors and computes two basis vectors such that the correlation between the respective projections of the random vectors onto the basis vectors is maximized [23]. After considering the subspace spanned from this set of basis vectors, CCA can be also considered as a method for the estimation of a linear subspace which is "common" to the two sets of random variables [24].
Generalized CCA (gCCA) for more than two random vectors dates back to [25]-[29]. In [30], five different formulations of gCCA are presented, all of which reduce to classical CCA when the number of random vectors is equal to two [31]. Among the different formulations of gCCA appearing in [30], only the solutions of MAX-VAR and MIN-VAR can be obtained directly via eigendecomposition; the other formulations lead to nonconvex optimization problems. Given a set of M random vectors, the initial goal of gCCA was to extract several M-dimensional random vectors, known as canonical variates, consisting of unit-variance linear compounds (combinations) of the M random vectors [30]. MAX-VAR assumes that each linear compound participating in the same canonical variate is a potentially noisy and scaled version of the same common random variable, while different canonical variates are associated with uncorrelated common random variables. On the other hand, MIN-VAR assumes that each linear compound participating in the same canonical variate is a potentially noisy version of a linear compound of the same common random vector of M − 1 elements. In this setting, linear compounds of different canonical variates must be uncorrelated. MAX-VAR is easier to interpret, and it has thus recently become very popular. Moreover, the simple and elegant MAX-VAR formulation presented in [29] is equivalent to the one described above and offers a solution via eigendecomposition. This is the formulation we adopt in this work.
A. Problem Definition
We consider the problem of multi-subject task-related fMRI data processing with only one type of stimulus. Our aim is to accurately determine, in a data-driven manner, which brain areas are activated when the stimulus is applied and construct the associated brain activation map.
B. Related Work
In [32], the authors use gCCA to separate different temporal sources in fMRI data. They assume that a set of common temporal responses to external stimulation is present in the subjects being studied, and show that they may be extracted using gCCA. In contrast, the underlying assumption in [33] is that there are multiple subjects that share an unknown spatial response (or spatial map) to the common experimental excitation, but may show different temporal responses to external stimulation. Under the same assumption, the authors of [34] propose the application of gCCA for the estimation of a common spatial subspace that is spanned from the common, across subjects, spatial components, and use this estimate in order to form a preprocessing step before applying the ICA method. Finally, the estimation of "common" subspaces from multiple datasets, via CCA and gCCA based methods, has been considered in [24], [35].
A different line of research comprises approaches based on Deep Neural Networks (DNNs). For example, the authors of [36] present a Deep Convolutional Autoencoder based approach, while in [37] the authors propose a Restricted Boltzmann Machine based approach. Moreover, Deep Recurrent Neural Network and Generative Adversarial Network based approaches have been proposed in [38] and [39], respectively. The latter three works are able to decompose the fMRI data into different spatio-temporal components. However, none of these approaches can (1) identify which of the components are common or (2) determine how many common components exist in the dataset. To resolve these issues, these methods exploit prior information.
C. Our Contribution
We propose a new data generating model which takes into consideration both the common task-related spatial component and the common resting-state spatial components. We adopt the assumptions of [33], with respect to the common spatial maps, and assume the existence of one common temporal component, which is related to the common experimental excitation. We use gCCA and estimate the subspace that is spanned by the common spatial components, both taskrelated and resting-state. Based on this estimate, we perform a second gCCA and compute the common task-related temporal component. Finally, we use the estimated common task-related temporal component to compute an estimate of the associated common task-related spatial component and construct the respective activation map. The experimental tests of our method with synthetic data reveal that we are able to obtain very accurate temporal and spatial parameter estimates even at very low Signal to Noise Ratio (SNR). The tests with real-world fMRI data validate our data model assumptions and show significant advantage of our approach over standard procedures based on General Linear Models (GLMs).
We note that an early version of this work has appeared in [40]. In this manuscript, we extend the work of [40] by proposing a data-driven method which effectively estimates the dimension of the common spatial subspace. Moreover, forming an estimate of the common subspace (as presented in [40]) and its dimension can be computationally demanding.
In Appendix A, we show that, under mild conditions, it is possible to overcome the computational bottleneck without affecting the quality of the resulting estimates.
In order to demonstrate the efficiency of our method, we extend the experimental section of [40] by including experiments with synthetic data. The existence of ground truth in these cases enables us to test the effectiveness of our approach at each step, as well as in total. Furthermore, in this work, we provide a detailed comparison between the proposed approach and the conventional GLM, which enables us to highlight the differences between the two methods. Lastly, in this extended version, we apply our method to three additional real-world datasets; the results can be found in the Supplementary Material of this manuscript and are in agreement with the results of the main part of the paper, demonstrating the effectiveness of our approach.
D. Notation
Scalars, vectors, and matrices are denoted by lowercase, lowercase bold, and uppercase bold letters, respectively; for example, $x$, $\mathbf{x}$, $\mathbf{X}$. Sets are denoted by blackboard bold capital letters, for example, $\mathbb{U}$. $\mathbb{R}$ denotes the set of real numbers and $\mathbb{R}^{I \times J}$ the set of $(I \times J)$ real matrices. Inequality $\mathbf{X} \ge 0$ means that matrix $\mathbf{X}$ has nonnegative elements, and $\mathbb{R}_+^{I \times J}$ denotes the set of $(I \times J)$ real matrices with nonnegative elements. $\|\mathbf{x}\|_2$ denotes the Euclidean norm of vector $\mathbf{x}$, while $\|\mathbf{X}\|_2$ and $\|\mathbf{X}\|_F$ denote, respectively, the spectral and the Frobenius norm of matrix $\mathbf{X}$. The transpose and the pseudoinverse of matrix $\mathbf{X}$ are denoted by $\mathbf{X}^T$ and $\mathbf{X}^\dagger$. The linear space spanned by the columns of matrix $\mathbf{X}$ is denoted by $\mathrm{col}(\mathbf{X})$. The orthogonal projector onto a linear subspace $\mathbb{S}$ is denoted by $\mathbf{P}_{\mathbb{S}}$. Finally, we use the Matlab-like expressions $\mathbf{X}(:, l)$ and $\mathbf{X}(k, :)$, which denote, respectively, the $l$-th column and the $k$-th row of matrix $\mathbf{X}$.
II. FMRI DATA GENERATING MODEL
We assume that the fMRI data have been arranged in matrices, where each matrix row contains the time-series associated with a particular voxel. We denote these matrices as $\{\mathbf{X}_k\}_{k=1}^{K}$, where $\mathbf{X}_k \in \mathbb{R}^{N \times M}$ denotes the data of the $k$-th subject, $N$ denotes the number of voxels, and $M$ denotes the number of time points (note that, in general, $N \gg M$). Let $R$ be a positive integer smaller than $M$.
For each matrix $\mathbf{X}_k$, for $k = 1, \ldots, K$, we adopt the model
$$\mathbf{X}_k = \lambda_k\, \mathbf{a}\, \mathbf{s}^T + \mathbf{A}\, \mathbf{S}_k^T + \mathbf{E}_k, \tag{1}$$
where

1) $\mathbf{a} \in \mathbb{R}_+^{N}$ and $\mathbf{s} \in \mathbb{R}^{M}$ denote, respectively, the common, to all subjects, task-related spatial and temporal component, and $\lambda_k \in \mathbb{R}_+$ denotes the intensity of the common rank-one term for the $k$-th subject;

2) $\mathbf{A} \in \mathbb{R}_+^{N \times (R-1)}$, whose columns are the common, to all subjects, spatial components related to the spontaneous fMRI activity;

3) $\mathbf{S}_k \in \mathbb{R}^{M \times (R-1)}$, whose columns are the temporal components associated with the spontaneous fMRI activity and, in general, vary across subjects. We assume that
$$\bigcap_{k=1}^{K} \mathrm{col}(\mathbf{S}_k) = \{\mathbf{0}\}, \tag{2}$$
that is, there is no subspace that is common to all $\mathrm{col}(\mathbf{S}_k)$, for $k \in \{1, \ldots, K\}$ (we shall investigate the validity of this important assumption later);

4) $\mathbf{E}_k \in \mathbb{R}^{N \times M}$ denotes the "unmodelled fMRI signal" of the $k$-th subject and is considered as (strong) additive noise. We assume that the terms $\mathbf{E}_k$, for $k = 1, \ldots, K$, are statistically independent from each other.

We propose model (1) based on both the existing literature [1]-[6], [11], [13], [33] and a detailed examination of our real-world data.
We note that, in [41], where we consider the resting-state scenario, we have assumed that matrix A is nonnegative and orthogonal, implying a brain parcellation structure. However, in this work, our focus is on the task-related scenario and this assumption is not necessary.
Our aim is to obtain an accurate estimate of the common spatial term $\mathbf{a}$, which will lead to a precise brain activation map and, thus, to the accurate localization of the stimulated brain areas.
In order to use simpler notation, we define the matrix of the common spatial components
$$\mathbf{W} := [\,\mathbf{a} \;\; \mathbf{A}\,] \in \mathbb{R}^{N \times R}, \tag{3}$$
and the matrices of the temporal components
$$\mathbf{Z}_k := [\,\lambda_k \mathbf{s} \;\; \mathbf{S}_k\,] \in \mathbb{R}^{M \times R}, \quad k = 1, \ldots, K. \tag{4}$$
We further assume that matrices $\mathbf{W}$ and $\mathbf{Z}_k$, for $k = 1, \ldots, K$, are full column rank. Using this notation, matrix $\mathbf{X}_k$, defined in (1), can be expressed as
$$\mathbf{X}_k = \mathbf{W} \mathbf{Z}_k^T + \mathbf{E}_k. \tag{5}$$
III. ESTIMATION OF THE COMMON SPATIAL AND TEMPORAL COMPONENTS
Our approach towards the construction of the task-related brain excitation map consists of three stages: 1) we use $\mathbf{X}_k$, for $k = 1, \ldots, K$, and obtain an orthonormal basis for an estimate of the common spatial subspace, $\mathrm{col}(\mathbf{W})$, by solving a gCCA problem; 2) we use the solution of the first stage and estimate the unique common temporal component, $\mathbf{s}$, by solving a second gCCA problem; 3) we use the estimate of $\mathbf{s}$ to obtain an estimate of $\mathbf{a}$, which is our final target.
A. Estimation of the common spatial subspace
We assume that the dimension, R, of the common spatial subspace, col(W), is known (later, we shall present a criterion for the estimation of R from the data).
In order to estimate an orthonormal basis for the common spatial subspace, $\mathrm{col}(\mathbf{W})$, we adopt the MAX-VAR formulation of gCCA [27] and solve the optimization problem
$$\min_{\mathbf{G}, \{\mathbf{Q}_k\}_{k=1}^{K}} \; \sum_{k=1}^{K} \|\mathbf{G} - \mathbf{X}_k \mathbf{Q}_k\|_F^2, \quad \text{subject to} \;\; \mathbf{G}^T \mathbf{G} = \mathbf{I}_R, \tag{6}$$
where $\mathbf{Q}_k \in \mathbb{R}^{M \times R}$, for $k = 1, \ldots, K$, and $\mathbf{G} \in \mathbb{R}^{N \times R}$. The solution $\mathbf{Q}_k^o$, for $k = 1, \ldots, K$, and $\mathbf{G}^o$ of problem (6) can be computed as follows. For a fixed $\mathbf{G}$, the optimal $\mathbf{Q}_k$ can be expressed as
$$\mathbf{Q}_k = \mathbf{X}_k^\dagger \mathbf{G}. \tag{7}$$
If we substitute this value into (6), then the problem becomes
$$\max_{\mathbf{G}^T \mathbf{G} = \mathbf{I}_R} \; \mathrm{tr}\!\left(\mathbf{G}^T \mathbf{M} \mathbf{G}\right). \tag{8}$$
We define
$$\mathbf{M} := \sum_{k=1}^{K} \mathbf{X}_k \mathbf{X}_k^\dagger, \tag{9}$$
with eigenvalue decomposition given by $\mathbf{M} = \mathbf{U}_M \boldsymbol{\Lambda}_M \mathbf{U}_M^T$. An optimal solution $\mathbf{G}^o$ is given by [42]
$$\mathbf{G}^o = \mathbf{U}_M(:, 1{:}R). \tag{10}$$
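As a concreteness check, the eigendecomposition-based solution above can be prototyped in a few lines. The sketch below is a minimal illustration, assuming noisy views $\mathbf{X}_k$ stored as NumPy arrays with full column rank; the function and variable names are ours, not from the paper.

```python
import numpy as np

def maxvar_gcca(X_list, R):
    """MAX-VAR gCCA: return an orthonormal basis estimate G_o (N x R) of the
    common subspace and the per-view loadings Q_o[k] = pinv(X_k) @ G_o."""
    N = X_list[0].shape[0]
    # M = sum_k X_k X_k^dagger, i.e., the sum of orthogonal projectors
    # onto col(X_k); forming it explicitly is the expensive N x N step.
    M_sum = np.zeros((N, N))
    for X in X_list:
        Q, _ = np.linalg.qr(X)        # col(Q) = col(X_k), so Q @ Q.T = P_col(X_k)
        M_sum += Q @ Q.T
    eigvals, eigvecs = np.linalg.eigh(M_sum)   # ascending eigenvalues
    G_o = eigvecs[:, ::-1][:, :R]              # top-R eigenvectors
    Q_o = [np.linalg.pinv(X) @ G_o for X in X_list]
    return G_o, Q_o
```

Note that the explicit accumulation of the $N \times N$ matrix is exactly the bottleneck discussed next; the compressed variant sketched in Appendix A avoids it.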
The fact that MAX-VAR gCCA indeed identifies the common subspace of the views in the noiseless case has been proved in [24] and [43] for the two-view and multiview cases, respectively. Notice that $\mathbf{M}$ is an $N \times N$ matrix. Hence, for the case of whole-brain data analysis (where $N$ is very large), the formation of matrix $\mathbf{M}$ and the computation of its eigenvalue decomposition may be prohibitive. In Appendix A, we show how, under mild conditions, we can bypass the computation of $\mathbf{M}$ and still efficiently compute matrix $\mathbf{G}^o$.
If the fMRI data matrices $\mathbf{X}_k$ were noiseless, that is, if $\mathbf{E}_k = \mathbf{0}$, for $k = 1, \ldots, K$, then the solution of problem (6) would result in a $\mathbf{G}^o$ such that (see (5))
$$\mathrm{col}(\mathbf{G}^o) = \mathrm{col}(\mathbf{W}). \tag{11}$$
This implies that
$$\mathbf{G}^o = \mathbf{W} \mathbf{P}, \tag{12}$$
for some $(R \times R)$ invertible matrix $\mathbf{P}$. Furthermore, in this case and for all $k \in \{1, \ldots, K\}$, matrices $\mathbf{Q}_k^o$ and $\mathbf{Z}_k$ would span the same subspace, namely,
$$\mathrm{col}(\mathbf{Q}_k^o) = \mathrm{col}(\mathbf{Z}_k). \tag{13}$$
This holds because
$$\mathbf{Q}_k^o = \mathbf{X}_k^\dagger \mathbf{G}^o = (\mathbf{W} \mathbf{Z}_k^T)^\dagger \mathbf{W} \mathbf{P} = \mathbf{Z}_k \mathbf{D}_k, \tag{14}$$
where
$$\mathbf{D}_k := (\mathbf{Z}_k^T \mathbf{Z}_k)^{-1} \mathbf{P} \tag{15}$$
is invertible. The fact that the $\mathbf{E}_k$, for $k = 1, \ldots, K$, are nonzero makes (11) and (13) approximate equalities.
B. Estimation of the common temporal component
Having computed $\mathbf{G}^o$, we proceed to the estimation of $\mathbf{s}$ by assuming that (13) is exact. We shall test the accuracy of our assumption and the effectiveness of our approach in the section with the experimental results.
Based on assumption (2) and definition (4), we have that, in the noiseless case,
$$\bigcap_{k=1}^{K} \mathrm{col}(\mathbf{Z}_k) = \mathrm{span}(\mathbf{s}), \tag{16}$$
which, using (13), leads to
$$\bigcap_{k=1}^{K} \mathrm{col}(\mathbf{Q}_k^o) = \mathrm{span}(\mathbf{s}). \tag{17}$$
We obtain an estimate of $\mathbf{s}$ by solving the MAX-VAR problem
$$\min_{\mathbf{g}, \{\mathbf{w}_k\}_{k=1}^{K}} \; \sum_{k=1}^{K} \|\mathbf{g} - \mathbf{Q}_k^o \mathbf{w}_k\|_2^2, \quad \text{subject to} \;\; \|\mathbf{g}\|_2 = 1. \tag{18}$$
If we denote the optimal $\mathbf{g}$ in (18) by $\mathbf{g}^o$, we have that
$$\mathbf{g}^o = \pm\, \frac{\mathbf{s}}{\|\mathbf{s}\|_2}. \tag{19}$$
Since (13) defines a family of approximate equalities, equalities (17) and (19) are also approximate.
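The second-stage problem (18) admits the same eigendecomposition treatment as the first stage, since the optimal $\mathbf{g}$ is the dominant eigenvector of the sum of projectors onto $\mathrm{col}(\mathbf{Q}_k^o)$. A minimal sketch, assuming the list of loadings from the first stage (names are illustrative):

```python
import numpy as np

def common_temporal_component(Q_o):
    """Solve the one-dimensional MAX-VAR problem (18): the estimate of s is
    the dominant eigenvector of the sum of projectors onto col(Q_k^o)."""
    M = Q_o[0].shape[0]
    P_sum = np.zeros((M, M))
    for Qk in Q_o:
        U, _ = np.linalg.qr(Qk)       # orthonormal basis of col(Q_k^o)
        P_sum += U @ U.T
    _, eigvecs = np.linalg.eigh(P_sum)
    return eigvecs[:, -1]             # unit 2-norm, determined up to sign
```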
C. Estimation of the common spatial component

Until now, we have obtained the estimate of an orthonormal basis of the common spatial subspace, $\mathbf{G}^o$, and the estimate of the common temporal component, $\mathbf{g}^o$. We can proceed to the estimation of the common spatial component, $\mathbf{a}$, and the intensity vector, $\boldsymbol{\lambda}$, by following various paths. A simple approach is to consider the problem
$$\min_{\mathbf{a} \ge \mathbf{0},\, \boldsymbol{\lambda} \ge \mathbf{0}} \; \sum_{k=1}^{K} \left\|\mathbf{X}_k - \lambda_k\, \mathbf{a}\, (\mathbf{g}^o)^T\right\|_F^2. \tag{20}$$
However, in this case, we do not exploit the fact that $\mathbf{a} \in \mathrm{col}(\mathbf{G}^o)$. We can enforce this constraint if we compute the projected data
$$\mathbf{X}_k^o := \mathbf{G}^o (\mathbf{G}^o)^T \mathbf{X}_k, \quad k = 1, \ldots, K, \tag{21}$$
and solve the problem
$$\min_{\mathbf{a} \ge \mathbf{0},\, \boldsymbol{\lambda} \ge \mathbf{0}} \; \sum_{k=1}^{K} \left\|\mathbf{X}_k^o - \lambda_k\, \mathbf{a}\, (\mathbf{g}^o)^T\right\|_F^2, \tag{22}$$
whose solution, $(\mathbf{a}^o, \boldsymbol{\lambda}^o)$, is our final estimate of $(\mathbf{a}, \boldsymbol{\lambda})$.
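Problem (22) is a nonnegative rank-one fitting problem, and (as described in the experimental section) it can be solved via Alternating Optimization. The sketch below is one possible AO realization, assuming unit-norm s_est and the projected data; the closed-form nonnegative updates are our own illustrative choice, not necessarily the authors' exact implementation.

```python
import numpy as np

def estimate_spatial_component(Xo_list, s_est, n_iter=100, seed=0):
    """One AO realization for problem (22), given unit-norm s_est and the
    projected data X_k^o. Both updates have elementwise closed forms under
    the nonnegativity constraints."""
    rng = np.random.default_rng(seed)
    a = rng.random(Xo_list[0].shape[0])          # nonnegative initialization
    y = [Xo @ s_est for Xo in Xo_list]           # y_k = X_k^o s, shape (N,)
    lam = np.ones(len(Xo_list))
    for _ in range(n_iter):
        # lambda_k <- max(0, a^T y_k / ||a||^2)
        lam = np.maximum(0.0, np.array([a @ yk for yk in y]) / (a @ a + 1e-12))
        # a <- max(0, sum_k lambda_k y_k / sum_k lambda_k^2), elementwise
        num = sum(l * yk for l, yk in zip(lam, y))
        a = np.maximum(0.0, num / (lam @ lam + 1e-12))
    return a, lam
```

Since AO is sensitive to local minima, one would run this from several random initializations and keep the best fit, as done in the experiments.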
D. On the Dimension of Common Subspaces
In Subsection III-A, we assumed that we know the true dimension, R, of the common spatial subspace, col(W), and derived the estimate G o of an orthonormal basis of col(W). Of course, in general, the value of R is unknown, thus, we must estimate it from the data. In the sequel, we provide a procedure which gives us very useful information about the value of R [41]. We note that the same procedure can be used for the verification of assumption (2).
Let the hypothesized dimension of the common spatial subspace be denoted by $\hat{R}$. At first, we assume that $\hat{R} = R$. Let $\mathbb{K}_1$ and $\mathbb{K}_2$ be a random partition of the set of subjects $\mathbb{K} = \{1, \ldots, K\}$. In the noiseless case, if we solve problem (6) twice, for $k \in \mathbb{K}_1$ and $k \in \mathbb{K}_2$, and call the resulting orthonormal bases $\mathbf{G}_1^o$ and $\mathbf{G}_2^o$, respectively, then it holds true that
$$\mathrm{col}(\mathbf{G}_1^o) = \mathrm{col}(\mathbf{G}_2^o). \tag{23}$$
That is, the common spatial subspaces coincide. If we start adding noise and repeat the process, then the estimated subspaces, $\mathrm{col}(\mathbf{G}_1^o)$ and $\mathrm{col}(\mathbf{G}_2^o)$, will start to move away from each other. One way to measure the distance between a pair of linear subspaces $\mathbb{S}_1$ and $\mathbb{S}_2$ is to compute their gap, defined as [44, p. 93]
$$\rho_{g,2}(\mathbb{S}_1, \mathbb{S}_2) := \|\mathbf{P}_{\mathbb{S}_1} - \mathbf{P}_{\mathbb{S}_2}\|_2. \tag{24}$$
If $\hat{R} = R$ and $\|\mathbf{E}_k\|_2 = O(\epsilon)$, for $k = 1, \ldots, K$, where $\epsilon$ is a small positive number, then we expect that
$$\rho_{g,2}\big(\mathrm{col}(\mathbf{G}_1^o), \mathrm{col}(\mathbf{G}_2^o)\big) \approx 0. \tag{25}$$
If $\hat{R} > R$, then, in solving (6) for $\mathbb{K}_1$ and $\mathbb{K}_2$, we try to model, besides the $R$-dimensional common subspace $\mathrm{col}(\mathbf{W})$, a "common" noise subspace. Since the noise terms $\mathbf{E}_k$ are statistically independent across subjects and $N \gg M$, we do not expect to find any common noise subspace in the datasets associated with $\mathbb{K}_1$ and $\mathbb{K}_2$. Thus, if $\hat{R} > R$, we expect that
$$\rho_{g,2}\big(\mathrm{col}(\mathbf{G}_1^o), \mathrm{col}(\mathbf{G}_2^o)\big) \approx 1. \tag{26}$$
Finally, if $\hat{R} < R$ and the rank-one terms which constitute the products $\mathbf{W} \mathbf{Z}_k^T$ are of almost "equal strength," then we also expect a large gap, because $\mathrm{col}(\mathbf{G}_1^o)$ and $\mathrm{col}(\mathbf{G}_2^o)$ will "randomly" capture $\hat{R}$ out of the $R$ dimensions of the common spatial subspace.
Thus, the gap between col(G o 1 ) and col(G o 2 ) provides valuable information about the true dimension of the common subspace, col(W). Accurate expressions for the gap lie beyond the scope of this manuscript, require tools from matrix perturbation theory, and pose stringent assumptions on the size of the noise, which may not be fulfilled in our case. We shall test the usefulness of our claims in Subsection IV-B.
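The gap criterion is straightforward to evaluate from definition (24). The sketch below, with illustrative names, obtains orthonormal bases via QR and computes the spectral norm of the difference of the two projectors.

```python
import numpy as np

def subspace_gap(B1, B2):
    """Gap rho_{g,2}(S1, S2) = ||P_S1 - P_S2||_2 between the column spaces
    of (not necessarily orthonormal) bases B1 and B2."""
    U1, _ = np.linalg.qr(B1)
    U2, _ = np.linalg.qr(B2)
    return np.linalg.norm(U1 @ U1.T - U2 @ U2.T, 2)   # spectral norm
```

When the two subspaces have equal dimension, this quantity equals the sine of the largest principal angle between them, so values near 1 indicate (nearly) disjoint subspaces.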
IV. EXPERIMENTAL RESULTS

A. Synthetic Data
First, we test the effectiveness of our approach using synthetic data. We remind the reader that our main goal is the estimation of the common task-related spatial component, $\mathbf{a}$. We generate random data $\{\mathbf{X}_k\}_{k=1}^{K}$ according to the model
$$\mathbf{X}_k = \lambda_k\, \mathbf{a}\, \mathbf{s}^T + \beta\, \big(\mathbf{A} \mathbf{S}_k^T + \mathbf{E}_k\big), \tag{27}$$
where the matrices $\{\beta \mathbf{A} \mathbf{S}_k^T\}_{k=1}^{K}$ and $\{\beta \mathbf{E}_k\}_{k=1}^{K}$ act as noise terms. We define the SNR as
$$\mathrm{SNR} := \frac{\sum_{k=1}^{K} \|\lambda_k\, \mathbf{a}\, \mathbf{s}^T\|_F^2}{\sum_{k=1}^{K} \|\beta\, (\mathbf{A} \mathbf{S}_k^T + \mathbf{E}_k)\|_F^2}. \tag{28}$$
The relative power of the terms $\mathbf{A} \mathbf{S}_k^T$ and $\mathbf{E}_k$, for $k = 1, \ldots, K$, is controlled by a relation of the form $\|\mathbf{A} \mathbf{S}_k^T\|_F = c\, \|\mathbf{E}_k\|_F$, where $c$ is a given positive real number. After applying our method to the real-world fMRI datasets examined in Subsection IV-B, we were able to estimate all the factors that appear in relation (28). Based on these estimates, we observed that the value of $c$ was approximately equal to 0.33 for all the considered real-world datasets. This observation motivated us to use this value of $c$ in our experiments with synthetic datasets. We fix $\boldsymbol{\lambda}$, $\mathbf{a}$, and $\mathbf{s}$, and compute the mean (over 100 independent realizations of $\mathbf{A}$, $\mathbf{S}_k$, and $\mathbf{E}_k$, for $k = 1, \ldots, K$) correlation coefficients between the true and the estimated quantities. In the remainder of this section, the fixed components $\boldsymbol{\lambda}$, $\mathbf{a}$, and $\mathbf{s}$ are denoted as $\boldsymbol{\lambda}_{\mathrm{true}}$, $\mathbf{a}_{\mathrm{true}}$, and $\mathbf{s}_{\mathrm{true}}$, respectively.
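For reference, a minimal generator following model (27) might look as follows. The sizes, the nonnegativity of A, and the Gaussian draws are our assumptions for illustration, and the power ratio c is not enforced here.

```python
import numpy as np

def generate_data(a_true, s_true, lam_true, R, M, K, beta, seed=0):
    """Draw {X_k} according to model (27): a common rank-one task term plus
    scaled resting-state and noise terms."""
    rng = np.random.default_rng(seed)
    N = a_true.size
    A = np.abs(rng.standard_normal((N, R - 1)))   # common resting-state maps
    X_list = []
    for k in range(K):
        S_k = rng.standard_normal((M, R - 1))     # subject-specific temporal parts
        E_k = rng.standard_normal((N, M))         # "unmodelled" additive noise
        X_list.append(lam_true[k] * np.outer(a_true, s_true)
                      + beta * (A @ S_k.T + E_k))
    return X_list
```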
At first, we test the accuracy of our estimate of the common temporal component, $\mathbf{s}$. We perform the two-stage gCCA and compute our estimate, $\mathbf{s}_{\mathrm{est}}$, via (18). In Fig. 1, we depict the accuracy of our estimate versus SNR. We observe that we attain very high estimation accuracy even at SNR as low as −30 dB.
Then, we use $\mathbf{s}_{\mathrm{est}}$ and test the accuracy of our estimates of $\mathbf{a}$ and $\boldsymbol{\lambda}$ computed by solving problems (20) and (22). We denote the methods based on solving problems (20) and (22) as method 1 (M1) and method 2 (M2), respectively. We solve these problems via an Alternating Optimization (AO) approach. Since AO is sensitive to local minima, we solve the problems for 5 random initializations and take into account only the solution attaining the best fit. We depict the results in Figs. 2 and 3, where we observe that M2 outperforms M1 for very low SNR (−30 dB to −20 dB).
In summary, we observe that our approach attains very accurate estimates even at very low SNR, which is usually the case with fMRI data processing.
B. Real-World Data
In this subsection, we test our approach using real-world task-related fMRI data. More specifically, we process four datasets, recorded at the University of Crete General Hospital, from a group of 25 healthy adults performing four visual tasks, which were identical in all but one aspect (the precise kinematics of an observed person-directed action). Next, we quote a short description of the experiment design and the preprocessing pipeline that was applied to the data. A more extensive description of the experiments can be found in the Supplementary Material. Then, we present the results obtained by analyzing the data using our method.
1) Experiment design: The fMRI block design consists of four action observation conditions, each involving four "active" 35 sec blocks alternating with four 35 sec baseline blocks. Within each "active" block, a video clip illustrating a two-movement action sequence was presented 6 times, while the stimulus set-up was identical across blocks and conditions. The data employed in the main analysis reported here were derived from the first experimental condition (or, briefly, condition (i)), examining the effects of an action with the same goal but different kinematics. The results concerning the other three experimental conditions are presented in the Supplementary Material.
2) Image acquisition and pre-processing: Scanning was performed on an upgraded 1.5 T scanner; slices were acquired with a fixed slice thickness and no interslice gap. The time-series recorded in each condition comprised 80 volumes (time points). In our analysis, we ignore the first 5 volumes of each time-series, as is customary in fMRI studies. Image preprocessing was performed in SPM8.¹ Initially, EPI scans were spatially realigned to the first image of the first time-series using second-degree B-spline interpolation algorithms and motion-corrected through rigid-body transformations. Next, images were spatially normalized to a common brain space (MNI template) and smoothed using an isotropic Gaussian filter (FWHM = 8 mm). Finally, all voxel time-series, from all subjects, were centered and de-drifted (subtraction of the mean value and linear terms).
We note that the SPM platform is able to provide a time response component, based on the activation onsets and offsets, which is expected to appear in the activated brain voxels. This response is the same for all four experimental conditions since, as we mentioned, the stimulus layout, in all conditions, is the same. From now on, we denote this response as s exp .
3) Results: Next, we present the results that we obtained from the analysis of the dataset from condition (i). As mentioned before, the results from conditions (ii), (iii), and (iv) are presented in the Supplementary Material.
In Fig. 4, we plot the estimated common temporal component s est for various values of the common spatial subspace dimension, R, as well as the normalized, to unit 2-norm, expected response s exp , for condition (i). A more direct comparison between the normalized expected response with each one of the estimated common temporal components can be made via Fig. 5, where we plot the absolute correlation coefficients between s exp and the estimated common temporal component s est , that emerged for all possible values of R.
We observe that, for different values of R, the estimated common temporal components, s est , are very much alike. This implies that our estimate is not sensitive to the true common subspace dimension, which is unknown, in general. Thus, we can get useful results over a wide range of values of R. Moreover, the estimated common temporal components, s est , are quite similar to the expected signal, s exp , since their correlation coefficient takes values at about 0.8 and even higher in some cases. We conclude that our method effectively estimates the common temporal component, without any prior knowledge about its shape.
In order to test the usefulness of the gap function towards the estimation of the common spatial subspace dimension, $R$, we randomly partition the subjects into two sets $\mathbb{K}_1$ and $\mathbb{K}_2$ and compute the gap of the subspaces $\mathbb{S}_1$ and $\mathbb{S}_2$ spanned by the bases computed by the gCCA for assumed dimension $\hat{R} = 1, \ldots, 73$,² as described in Subsection III-D. In Fig. 6, we plot the gap function, $\rho_{g,2}(\mathbb{S}_1, \mathbb{S}_2)$, as a function of $\hat{R}$. We observe that the value of the gap becomes practically equal to 1 for $\hat{R} \gtrsim 26$, which (1) implies that a low common spatial subspace dimension model is appropriate and (2) provides an estimate for the dimension $R$.

¹ Statistical Parametric Mapping software, SPM: Wellcome Department of Imaging Neuroscience, London, UK; available at: http://www.fil.ion.ucl.ac.uk/spm/.
² The length of each voxel's time-series is 75 and the maximum rank of each $\mathbf{X}_k$ after de-drifting is 73.
In order to test the validity of assumption (2), we again partition the considered dataset, subject-wise, into two random subsets, $\mathbb{K}_1$ and $\mathbb{K}_2$, and proceed as follows. First, we solve problem (6) for each subset; then, we solve two problems analogous to (18), for common time subspace dimensions $\hat{r} = 1, \ldots, 30$, i.e., all the possible nontrivial dimensions of the common time subspace. We denote the estimates of the common time subspaces that emerged from the data in $\mathbb{K}_1$ and $\mathbb{K}_2$ as $\mathbb{T}_1$ and $\mathbb{T}_2$, respectively. In Fig. 7, we plot the resulting gap, $\rho_{g,2}(\mathbb{T}_1, \mathbb{T}_2)$, as a function of the assumed dimension $\hat{r}$. We observe that, for $\hat{r} > 1$, the gap is practically equal to 1, indicating that the common temporal subspace dimension is equal to one, validating our assumption (16) and, thus, (2).

[Fig. 9 caption: Map $\mathbf{a}_{\mathrm{est}}$ computed for condition (i) and "common" subspace dimension equal to 30. The map was thresholded such that the 10% of the voxels with the largest voxel score of $\mathbf{a}_{\mathrm{est}}$ are shown. A standard Z-transform is not meaningful, since $\mathbf{a}_{\mathrm{est}}$ satisfies nonnegativity constraints.]
In Fig. 8, we plot the intensities across subjects, λ est , that resulted from the solution of (22), for R = 30. We observe that the task-related common rank-one term appears in all but one subject.
In Fig. 9, we depict the thresholded spatial map a est that emerged from the solution of (22) (only the top 10% of the values have been included). In Fig. 10, we depict the contrast map obtained by applying the conventional General Linear Model (GLM) with a priori knowledge of the timing of the experimental (video clip observation) and reference blocks (static hand viewing) in SPM (at a standard threshold of p < 0.001 uncorrected) to the original data.
We observe that, contrary to the GLM based analysis of the original data, our method successfully captures all clusters of activation voxels in key components of the brain network putatively involved in evaluating the kinematic characteristics and intentions of the observed actions of other subjects, including the inferior frontal gyrus (IFG), ventral Premotor Area (PMv), and primary somatosensory area (SI).
We claim that the main reason for this phenomenon is the combination of the successful denoising achieved by (i) projecting the original data matrices {X k } K k=1 onto the common spatial subspace spanned by G o and (ii) computing a high quality estimate of the task-related temporal component s est via solving problem (18).
In order to support this claim, in Fig. 11, we plot the spatial map that emerged after applying the conventional General Linear Model with a priori knowledge of the timing of the stimulus in SPM (at the same standard threshold of p < 0.001 uncorrected set in creating the corresponding activation map from the original data shown in Fig. 10) to the denoised data matrices {X o k } K k=1 . We observe that the denoised data significantly increase the sensitivity of the GLM in detecting activations in several regions that are putative key components of the frontoparietal network supporting mental simulation.
Finally, in order to compare, in a quantitative manner, the proposed method and GLM we consider the following setup. We apply the GLM with the expected temporal response (s exp ) to the original and the denoised data, as defined in relation (21). We compute the average beta maps across all subjects for both datasets and keep the 10% of the voxels that attained the largest average beta scores. In Table I, we present the percentages of pairwise spatial overlaps between the two restricted average beta maps described above and the map that emerges after considering only the 10% of the voxels that attained the largest values of the proposed estimate, a est , for all the considered real-world datasets (conditions (i)-(iv)). Moreover, in the last column of Table I, we present the percentages of the spatial overlaps between all three of the considered maps.
Based on Table I, we can observe that, for all conditions, the following hold: (i) all the restricted maps have a spatial overlap of approximately 74%; (ii) denoising leads to a difference of approximately 20% between the GLM-based restricted maps; (iii) there is a significant difference between the vanilla GLM-based map (original data) and the map resulting from the proposed method; (iv) the difference mentioned in (iii) becomes consistently smaller when we compare the maps derived from the denoised data with GLM and with the proposed method.
V. CONCLUSION
We considered the problem of multi-subject task-related fMRI analysis with one stimulus. We proposed a data generating model which takes into account both task-related and resting-state common spatial components. We used two successive gCCAs and computed an estimate of the common temporal component, which was then used for the construction of the map of the activated brain regions. We used synthetic data and observed that our estimates are very accurate even at very low SNR. We applied our method to real-world datasets and compared the results with those of a standard GLM procedure.
We observed that the denoised data (after the projection onto the common spatial subspace) lead to improved GLM results, thus, supporting our data generation model and its denoising properties.
APPENDIX A COMPRESSION
Let $\mathbf{X}_k \in \mathbb{R}^{N \times M}$, for $k = 1, \ldots, K$, be a set of full-column-rank matrices. As we showed in Section III-A, in order to solve problem (6), we have to compute and decompose an $(N \times N)$ matrix, which may be very demanding for the large values of $N$ that could emerge in a whole-brain analysis setting. Inspired by the compression technique presented in [16], in this section we show that, if $KM \ll N$, then there is a way to circumvent this problem.

[TABLE I caption: Quantitative comparison in terms of spatial overlap between the proposed method and the GLM, after applying it to the original and the denoised data. For GLM, the average beta maps were computed across all subjects and the resulting maps were restricted to the voxels attaining the 10% of the largest values. For the proposed method, the considered map emerged after restricting the proposed estimate $\mathbf{a}_{\mathrm{est}}$ only to the 10% of its largest values.]
Let $\mathbf{Y} \in \mathbb{R}^{N \times KM}$ be defined as
$$\mathbf{Y} := [\,\mathbf{X}_1 \;\; \mathbf{X}_2 \;\; \cdots \;\; \mathbf{X}_K\,], \tag{29}$$
and consider a factorization of $\mathbf{Y}$ in the form
$$\mathbf{Y} = \mathbf{U}_Y \mathbf{V}_Y, \tag{30}$$
such that $\mathbf{U}_Y \in \mathbb{R}^{N \times KM}$ is a columnwise orthonormal matrix, i.e., $\mathbf{U}_Y^T \mathbf{U}_Y = \mathbf{I}_{KM}$, and $\mathbf{V}_Y \in \mathbb{R}^{KM \times KM}$. Then, denoting by $\mathbf{V}_k := \mathbf{V}_Y\big(:, (k-1)M+1 : kM\big)$ the $k$-th block of $\mathbf{V}_Y$, it holds true that
$$\mathbf{X}_k = \mathbf{U}_Y \mathbf{V}_k, \quad k = 1, \ldots, K. \tag{31}$$
Furthermore, we have that
$$\mathbf{M} = \sum_{k=1}^{K} \mathbf{X}_k \mathbf{X}_k^\dagger = \mathbf{U}_Y \left( \sum_{k=1}^{K} \mathbf{V}_k \mathbf{V}_k^\dagger \right) \mathbf{U}_Y^T, \tag{32}$$
so the dominant eigenvectors of the $(N \times N)$ matrix $\mathbf{M}$ can be obtained from those of the much smaller $(KM \times KM)$ matrix $\sum_k \mathbf{V}_k \mathbf{V}_k^\dagger$, left-multiplied by $\mathbf{U}_Y$.
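A compressed variant of the first-stage computation can then be sketched as follows, assuming $KM \ll N$; the economy QR plays the role of the factorization (30), and the names are illustrative.

```python
import numpy as np

def maxvar_gcca_compressed(X_list, R):
    """First-stage MAX-VAR gCCA computed in the compressed (KM-dimensional)
    domain: factor Y = U_Y V_Y, work with the small blocks of V_Y, and lift
    the result back with U_Y."""
    K, M = len(X_list), X_list[0].shape[1]
    U_Y, V_Y = np.linalg.qr(np.hstack(X_list))    # economy QR of Y (N x KM)
    M_small = np.zeros((K * M, K * M))
    for k in range(K):
        Vk = V_Y[:, k * M:(k + 1) * M]            # X_k = U_Y @ Vk
        Q, _ = np.linalg.qr(Vk)
        M_small += Q @ Q.T                        # projector onto col(Vk)
    _, eigvecs = np.linalg.eigh(M_small)
    return U_Y @ eigvecs[:, ::-1][:, :R]          # G_o in the original space
```

Because $\mathbf{U}_Y$ is an isometry on $\mathrm{col}(\mathbf{Y})$ and $\mathbf{M}$ vanishes on its orthogonal complement, the top-$R$ eigenvectors computed this way coincide with those of the full $N \times N$ problem.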
Measles Vaccination Elicits a Polyfunctional Antibody Response, Which Decays More Rapidly in Early Vaccinated Children
Abstract Background Measles outbreaks are reported worldwide and pose a serious threat, especially to young unvaccinated infants. Early measles vaccination given to infants under 12 months of age can induce protective antibody levels, but the long-term antibody functionalities are unknown. Methods Measles-specific antibody functionality was tested using a systems serology approach for children who received an early measles vaccination at 6–8 or 9–12 months, followed by a regular dose at 14 months of age, and children who only received the vaccination at 14 months. Antibody functionalities comprised complement deposition, cellular cytotoxicity, and neutrophil and cellular phagocytosis. We used Pearson’s r correlations between all effector functions to investigate the coordination of the response. Results Children receiving early measles vaccination at 6–8 or 9–12 months of age show polyfunctional antibody responses. Despite significant lower levels of antibodies in these early-vaccinated children, Fc effector functions were comparable with regular-timed vaccinees at 14 months. However, 3-year follow-up revealed significant decreased polyfunctionality in children who received a first vaccination at 6–8 months of age, but not in children who received the early vaccination at 9–12 months. Conclusions Antibodies elicited in early-vaccinated children are equally polyfunctional to those elicited from children who received vaccination at 14 months. However, these antibody functionalities decay more rapidly than those induced later in life, which may lead to suboptimal, long-term protection.
Since the introduction of childhood measles vaccination, the number of measles cases has dropped significantly. However, after years of decline in measles incidence, the number of measles cases has been increasing steadily over the past several years, due to insufficient measles vaccination coverage by routine immunization programs in certain countries [1]. Since 2016, the number of reported measles cases has increased by 556%, to almost 870 000 in 2019, the most cases reported since 1996 [1]. Unvaccinated infants are the main risk group for measles infection and related complications [2], stressing the need for effective vaccination early in life [3].
Measles-neutralizing antibodies are considered to be the main correlate of protection for clinical measles infection [4,5]. These antibodies are induced by natural infection or vaccination and block entry of measles virus into target cells. At birth, infants are protected by neutralizing maternal antibodies that are transferred from mother to child through the placenta. In countries with high vaccination coverage over the past decades, maternal immunity against measles is now primarily induced by vaccination. Studies have shown that measles-specific maternal antibodies of children from vaccinated mothers drop below protective levels after 3-4 months of age [6,7]. This leaves infants unprotected until their first vaccination, which generally occurs between 12 and 15 months of age. Infants aged 6-12 months may receive an early vaccination in case of a measles outbreak.
We previously reported that children who received an early vaccination under 12 months of age showed a more rapid decline of neutralizing antibodies within the 3-year follow-up [8]. Although most children still had a titer considered to be protective (>0.12 IU/mL), long-term modeling showed that the group that received an early vaccination at 6-8 months dropped below this protective level of neutralizing antibodies. These differences in the decay of neutralizing antibodies in the early vaccinees indicate unexplained changes in the development of a functional antibody response. Although the neutralizing capacity of antibodies can prevent infection, other effector functions involved in the measles-specific immune response can be crucial in preventing dissemination of the virus. Indeed, measles-specific antibodies were shown to induce complement-mediated lysis after virus infection [9,10], and measles-specific antibodies that induce antibody-dependent cellular cytotoxicity (ADCC) have been proposed to play a role in clearing measles virus during infection and recovery [10]. Because dissemination of measles virus is predominantly mediated via cell-to-cell transmission [11,12], a broader range of antibody-dependent effector functions has to develop to combat the virus. However, detailed knowledge of the effector functions of measles-specific antibodies is relatively limited.
In the current study, we performed a broad antibody effector function analysis and investigated whether the (poly) functionality of the antibody response was affected by the timing of the first vaccination. Here, we used a systems serology approach, ie, we assessed various aspects of measlesspecific antibody functionalities. This included measuring a broad range of Fc-mediated effector functions, including antibody-dependent complement deposition (ADCD), and ADCC/natural killer (NK) cell degranulation, as well as antibody-dependent neutrophil phagocytosis (ADNP) and antibody-dependent cellular phagocytosis (ADCP) by monocytes. In addition, measles-specific antibody isotypes and subclasses were measured.
A representative selection of a previously described cohort of children who received an early vaccination was used [8]. This cohort consisted of children receiving their first early measles vaccination between 6 to 8 months or 9 to 12 months of age, followed by a regular-timed dose at 14 months. In addition, a control group that was only vaccinated at 14 months of age was included in this study [8]. By in-depth characterization of the functional antibody response to measles vaccination at different ages, we investigated whether reduced levels of measles-specific antibodies resulted in functional changes in the Fc-binding properties of the antibodies as well. This first study into a multitude of measles-specific antibody effector functions will help to provide a better understanding of factors that influence the antibody response to measles vaccination and may improve future vaccination strategies.
METHODS

Study Population
Samples were selected from a previously described cohort of children who received an early additional measles, mumps and rubella (MMR-0) vaccination, evenly distributed between ages 6 and 12 months (6 months, n = 4; 7 months, n = 5; 8 months, n = 4; 9 months, n = 2; 10 months, n = 5; 11 months, n = 4) during a measles outbreak in the Netherlands in 2013-2014 [8]. All children also received regular-timed MMR at 14 months of age (MMR-1). A control group of children (n = 10) only vaccinated at 14 months of age was also included. Children were selected to be representative of the complete cohort, based on age of first MMR vaccination. Furthermore, plasma samples 1 and 3 years after MMR-1 had to be available.
Antibody-Dependent Phagocytosis
Plasma samples were diluted 1:100 and added to antigen-coated, green fluorescent microspheres for 2 hours at 37°C to form immune complexes. Nonspecific unbound antibodies were washed away. For the ADCP assay, THP-1 cells (monocyte cell line; American Type Culture Collection [ATCC]) were added at a concentration of 1.25 × 10⁵ cells/mL for 16 hours at 37°C. For the ADNP assay, primary human leukocytes were isolated from whole blood using ACK red blood cell lysis buffer (150 mM NH₄Cl, 10 mM KHCO₃, 0.1 mM Na₂EDTA) and washed with ice-cold PBS. Fresh leukocytes were added in R10 media at a concentration of 2.5 × 10⁵ cells/mL and incubated for 1 hour at 37°C. Afterward, cells were washed, and neutrophils were stained with Pacific Blue CD66b antibody (BioLegend). Cells were fixed in 4% paraformaldehyde for acquisition on the flow cytometer. The phagocytic score was calculated as the percentage of cells that had taken up beads multiplied by the geometric mean fluorescence intensity (gMFI) of bead-positive cells divided by 10 000, as measured by flow cytometry (LSR Fortessa X-20; BD).
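As a small illustration of the score definition above, a hypothetical helper might read as follows (the function and parameter names are ours, not from the study):

```python
def phagocytic_score(pct_bead_positive: float, gmfi_bead_positive: float) -> float:
    """Phagocytic score = (% bead-positive cells) x (gMFI of bead-positive
    cells) / 10 000, as defined in the text."""
    return pct_bead_positive * gmfi_bead_positive / 10_000
```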
Antibody-Dependent Complement Deposition
For the ADCD assay, plasma samples were diluted 1:10 and added to antigen-coated red fluorescent microspheres for 1 hour at 37°C. Nonspecific, unbound antibodies were washed away. Lyophilized guinea pig complement, reconstituted in ice-cold water, was added in Boston BioProducts Veronal buffer for 20 minutes at 37°C. Complement heat-inactivated at 56°C for 30 minutes was used as a control. Afterward, beads were washed in 15 mM EDTA, and complement deposition was detected by fluorescein isothiocyanate (FITC)-conjugated goat-anti-guinea pig C3 (MP Biomedicals) antibody staining. Complement deposition was measured by flow cytometry as the FITC gMFI of C3 on complement-coated, antigen-bound beads.
Antibody-Dependent Natural Killer Cell Activation

Enzyme-linked immunosorbent assay plates were coated with measles antigen and incubated with samples diluted 1:20, or PBS as a negative control, for 2 hours at 37°C. Natural killer (NK) cells were isolated from buffy coats collected from blood bank donors. CD107a-PE/Cy5 (BD), brefeldin A (Sigma), GolgiStop (BD), and NK cells were added to the antigen-coated plate and incubated for 5 hours at 37°C. Next, cells were transferred to a V-bottom plate and stained with CD56-PE/Cy7, CD16-APC/Cy7, and CD3-AlexaFluor700 (all BD) for 15 minutes at room temperature. Cells were washed and fixed in Fixation Medium A (Life Technologies) before intracellular staining with MIP-1β-PE and IFN-γ-APC (BD) in Permeabilization Medium B (Life Technologies) for 15 minutes at room temperature. The percentages of NK cells positive for CD107a, IFN-γ, and MIP-1β were determined by flow cytometry.
Subclass and Isotype Detection
Measles antigen-coated MagPlex microspheres (Luminex), coupled through a 2-step carbodiimide reaction, were added to 1:10 diluted plasma samples (final dilution of 1:100 when 5 µL of sample was added to 45 µL of antigen-coated beads) and incubated for 2 hours, shaking, at room temperature. Next, samples were washed, and secondary detection antibodies against immunoglobulin (Ig)G, IgG1, IgG2, IgG3, IgG4, IgM, IgA1, and IgA2 were diluted 1:154 in assay buffer (PBS, 0.1% Tween20, 2% BSA) and incubated on a shaker for 1 hour at room temperature. Samples were washed and measured with a Bio-Plex Luminex reader. Relative levels of isotypes and subclasses were calculated as the median fluorescence intensity of PE. Background was determined by a no-antibody control.
Statistical Analysis
For comparing different groups per functional assay, differences between log-transformed geometric mean concentrations were tested by Brown-Forsythe and Welch ANOVA tests and corrected for multiple comparisons using Tamhane's T2 multiple comparisons test, with individual variances computed for each comparison. To investigate coordination of the responses, Pearson's r correlations were (1) calculated for each group between the different functions measured and (2) further classified as very weak (±0-0.19), weak (±0.2-0.39), moderate (±0.4-0.59), strong (±0.6-0.79), and very strong (±0.8-1). To assess the quality of the response, we determined (per group and time point) the percentage of children who showed a strong, ie, above the median, functional response and combined this into a cumulative functional response.
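For clarity, the correlation-strength bins can be expressed as a small helper function; this is a sketch with illustrative names, not code from the study.

```python
def classify_correlation(r: float) -> str:
    """Bin a Pearson correlation coefficient by absolute value, following the
    classification used in the analysis."""
    a = abs(r)
    if a < 0.2:
        return "very weak"
    if a < 0.4:
        return "weak"
    if a < 0.6:
        return "moderate"
    if a < 0.8:
        return "strong"
    return "very strong"
```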
RESULTS

Baseline Characteristics
In total, 34 children were selected from a previously described cohort [8]. Children had received an early MMR dose (MMR-0) between ages 6 and 8 months (n = 13) or between ages 9 and 12 months (n = 11). A control group did not receive an early MMR dose (n = 10). All children received a regular-timed MMR dose at 14 months of age (MMR-1). No differences between the selection and the full cohort were observed in their baseline characteristics (Table 1). We previously described a reduced measles virus-specific neutralizing antibody response after an early MMR dose, which persisted up to 3 years after MMR-1 vaccination at 14 months of age (Table 1 and [8]). Measles-specific IgG in plasma showed similar response kinetics compared with neutralizing antibody levels (Supplementary Figure 1).
Measles-Specific Antibody Effector Functions
Antibodies induce a variety of effector functions besides blocking viral entry into cells through neutralization. We first examined the different Fc-effector functions mediated by measles-specific antibodies 1 and 3 years after regular MMR-1 vaccination at 14 months of age (Figure 1), ie, at 26 months and 50 months of age. Using our systems serology platform, we measured ADNP, ADCP, ADCD, and antibody-induced NK activation via markers of activation and degranulation: CD107a, IFN-γ, and MIP-1β. First measles vaccination at 14 months induced a functional antibody response that elicited all assessed Fc-effector functions. Kinetics showed a subtle decline in ADNP and ADCD between 1 and 3 years after vaccination (ADNP, P = .032; ADCD, P = .061) (Figure 1A and C), whereas ADCP appeared to increase slightly (P = .081) (Figure 1B). All NK degranulation markers remained stable over time (Figure 1D and F).
Fc-Effector Function of Antibodies
Next, we examined measles-specific Fc-effector functions in all 3 vaccination groups (Figure 2). Again, these effector functions were determined 1 and 3 years after MMR-1, ie, at 26 months and 50 months of age. For this, individual Fc-effector outcomes were normalized to the level of measles-specific IgG (Supplementary Figure 1). No evidence for lower effector functions in the early-vaccinated groups was observed, except for a lower percentage of CD107a+ NK cells in early-vaccinated children 3 years after MMR-1 (P = .009) (Figure 2D). Other NK cell read-outs were not different between the groups (Figure 2E and F). One year after regular MMR-1 at 14 months of age, antibodies from children with a first MMR dose between 6 and 8 months of age induced higher ADCP (P = .036) than antibodies from children with a first dose at 14 months of age, although this difference was no longer present 3 years after MMR-1 (Figure 2B). Thus, despite overall lower levels of antibodies in children who received an early measles vaccination (Supplementary Figure 1), the timing of the first MMR dose did not notably influence the Fc-effector capacity at the single-antibody level.
Coordination of Antibody Fc-Effector Functions Shifts Over Time
To investigate the relationship between the different Fc-effector functions within and between the 3 vaccination groups, we calculated a Pearson's r correlation for each group between the different functions measured (Figure 3); a greater number of strong correlations represents a more coordinated response, ie, inducing multiple effector functions simultaneously. In 13 of 15 correlations, early-vaccinated children showed stronger correlations between different Fc-functions than children from the control group 1 year after regular MMR-1 vaccination at 14 months of age (Figure 3A). In contrast, 3 years after MMR-1 vaccination, early vaccination resulted in weaker correlations between Fc-effector functions (6-8 group vs control, 11 of 15 correlations lower; 9-12 group vs control, 7 of 15 correlations lower; 6-8 vs 9-12 group, 10 of 15 correlations lower) (Figure 3B). Finally, we classified these correlations according to strength in 5 classes: very weak, weak, moderate, strong, and very strong (Figure 3C and D). One year after MMR-1, children with a first MMR dose between ages 6 and 8 months showed more correlations of higher strength (ie, [very] strong) compared to children with a first dose at a later age (Figure 3C). In contrast, children with a first dose at 14 months of age showed more (very) weak correlations compared to children vaccinated at a younger age (Figure 3C). It is interesting to note that 3 years after MMR-1, this pattern had shifted towards a more strongly coordinated response for children from the control group (Figure 3D). Overall, the functional response after 2 doses of MMR, with the first given between 6 and 8 months of age, became less coordinated over time, whereas the functional response after regular MMR vaccination at 14 months became more coordinated after 3 years.
Different Dynamics in Antibody Polyfunctionality Over Time
Although the correlation of functions demonstrates the coordination between the responses, this analysis alone did not capture the magnitude of the functionality. Therefore, we sought to determine the overall functional patterns across the vaccine groups over time. To this end, we combined the strong responses above the median for each function per vaccination group and compared the number of children with strong versus weak responses between 1 and 3 years based on the age of first MMR dose (Figure 4A). Children who received a first MMR dose between ages 6 and 8 months started with a strong cumulative functional response 1 year after MMR-1 (Figure 4A, left panel). However, this cumulative response declined significantly 3 years after MMR-1. In contrast, the cumulative functional response for children with a first MMR dose between ages 9 and 12 months or at 14 months of age increased 3 years after MMR-1 (Figure 4A, middle and right panels). It is notable that the most dramatic changes occurred for markers of NK degranulation. Next, we compared the functional responses for each child relative to the median of that response, with more functions above the median indicating a higher polyfunctional antibody response, an additional measure reflecting the quality of the response (Figure 4B and C). If mounting 5 to 6 Fc-mediated effector functions to a high level (ie, above the median) defines a strong polyfunctional response, then at 1 year after MMR-1, 40%-60% of children with polyfunctional responses were among the group who received their first dose at 6-8 months (Figure 4B). However, at 3 years, almost 80% of children with polyfunctional responses had received a single measles vaccination at 14 months of age (Figure 4C). Children who received a first dose between 9 and 12 months of age also increased in polyfunctionality between 1 and 3 years after the second dose (Figure 4C). These data suggest that the strength of the response for each function, as well as the total function, wanes for children who received a first dose at 6-8 months of age, but increases among those who received either a first dose at 9-12 months of age or only 1 dose at 14 months.
Antibody Subclasses and Isotypes
To define whether a shift in the quality of the immune response explains differences in the functional evolution of the measles-specific response across age groups, we measured relative levels of measles-specific antibody isotypes and subclasses (IgG1-4, IgA1-2, and IgM) and normalized against the dominant subclass IgG1. No differences or shifts in subclass and isotype distribution were observed (Supplementary Figure 2). Next, Pearson's r correlations between antibody subclasses and isotypes and the functional responses were calculated (Supplementary Figure 3), and the geometric mean of the absolute Pearson's r was taken to investigate the differences between the vaccination groups. The overall strength of the correlations between the vaccination groups 1 and 3 years after MMR-1 was not different (Figure 5A and B). Again, to better quantify the correlations, we classified them according to strength. Here, we also observed no major differences between the vaccination groups 1 and 3 years after MMR-1 (Figure 5C and D). The majority of correlations between antibody subclasses/isotypes and antibody functions were (very) weak.
DISCUSSION
This is the first study to show that measles vaccination in childhood leads to the induction of antibodies that can display a broad spectrum of Fc-effector functions, including antibody-dependent complement deposition, antibody-dependent phagocytosis, and ADCC. For other viruses, complement activation and phagocytosis have been described to play a significant role in the immune response and viral clearance [14-22], but, to our knowledge, we are the first to describe measles-specific, antibody-dependent complement activation and phagocytosis upon vaccination. For ADCC, we focused on 3 NK cell activation markers: CD107a surface expression, IFN-γ, and MIP-1β. All 3 ADCC-related functions were detected after vaccination, even up to 3 years, and followed a similar pattern. The difference we observed in the induction of CD107a expression between children vaccinated before 9 months of age and the control group 3 years after MMR-1 may be due to the lower sensitivity of the assay, because CD107a expression in some children vaccinated between 6 and 8 months was at background levels (data not shown).
Most notably, the antibody response in children who received an early vaccination at 6-12 months of age showed capacities to mediate Fc-effector functions comparable to those of antibodies induced by regular-timed measles vaccination at 14 months, despite lower measles-specific IgG concentrations. More in-depth analysis revealed striking differences in the polyfunctionality and coordination of the functional antibody response between vaccination groups. One year after receiving the MMR-1 vaccine at 14 months, children who had also received an early dose showed a more coordinated and polyfunctional antibody response. However, 3 years after MMR-1 vaccination, the antibody response shifted, with a decrease in the coordination and polyfunctionality of the response for early-vaccinated children and an increase for children only vaccinated at 14 months. Our data showed a large drop in measles-specific IgG levels for early-vaccinated children over time, whereas this decrease was less pronounced for children only vaccinated at 14 months [8]. However, we observed variation in the long-term effects of early vaccination on the different antibody-mediated effector functions. NK-mediated effector functions were significantly reduced 3 years after MMR-1, whereas less decay was observed in phagocytosis and complement deposition. Structural characteristics of the antibody Fc-domain may influence the functional antibody response after vaccination. The different subclasses of IgG have distinct binding affinities to Fc receptors, through which they exert their functions [23,24]. IgG3 has the strongest binding affinity, but it also has a short half-life. IgG1 also binds more strongly to Fc receptors than IgG2 and IgG4. Variation in subclass distribution might thus affect polyfunctionality [23]. However, we did not observe variation in subclass distribution. Another possibility is variation in the glycosylation of the antibody Fc over time, which may explain the differences. Altered glycosylation of a single asparagine (Asn297) on the IgG heavy chain can influence the affinity of the antibody for the Fc receptor and subsequently affect the functional response. Investigating the potential change in glycosylation patterns of measles-specific antibodies after vaccination would be of great interest, and it requires further studies. Finally, the expression of Fc receptors on target cells could further steer the ultimate effect of the polyfunctional antibody response. However, the cellular component was not studied here in these children; it should be taken into consideration when determining the effectiveness of a polyfunctional antibody response. Thus, early measles vaccination induces a strong and immunologically versatile antibody response in the short term, but this response gradually deteriorates. The immature immune system, particularly in young children (6-8 months), may be influencing this long-term response [25]. Our data suggest that measles vaccination of children between 6 and 8 months induces a B cell response leading to functional antibodies, but that maturation and differentiation into memory B cells and long-lived plasma cells may be less sustained at this young age. Nevertheless, our data provide evidence that early vaccination in an outbreak setting may confer effective short-term protection.
CONCLUSIONS
Overall, our results show that antibodies in early-vaccinated children are as capable of inducing Fc-effector functions as antibodies from children who received 1 regular vaccination at 14 months of age. The consequences of the long-term decline in polyfunctionality still must be elucidated, eg, whether these children are at higher risk of measles virus infection, or whether exposure to the virus sufficiently boosts the vaccine-induced memory response. Nonetheless, the spike in functionality in the short term for young vaccinees is promising for children who receive early measles vaccination in endemic locations [26,27] or in outbreak situations before the first recommended dose. Most importantly, this work has expanded the scope of understanding of the antibody-mediated measles response in both the short and long term after vaccination across multiple age groups.

Financial support. This study was funded by the Dutch Ministry of Health, Welfare and Sport.
Potential conflict of interest. All authors: No reported conflicts. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest.
Triple antiviral therapy with telaprevir after liver transplantation: a case series
Introduction
The management of HCV reinfection after LT is complicated by drug interactions, tolerability, and pretreatment. 11,12 Therefore, an individualized treatment regimen is often required. We present three cases of TVR-based antiviral triple therapy: two in therapy-naïve patients and one in an inefficiently pretreated HCV genotype 1-reinfected patient after LT.
Case 1
The patient was therapy naïve and revealed the unfavorable IL28B C/T genotype. Immunosuppression included 50 mg cyclosporine A (CsA) twice a day (BID; CsA blood level: 78 ng/mL) and 500 mg BID mycophenolate mofetil (MMF). Comedication was calcium, vitamin D3, and 50 µg L-thyroxin per day. The patient presented in a good state of general health with a body mass index (BMI) of 25.6 (170 cm, 74 kg).
Triple therapy was started with 135 µg peg-IFN-alpha-2a once a week, 800 mg RBV/day, and 750 mg TVR thrice daily. IFN and RBV doses were reduced to 90 µg/week and 400 mg/day in week 12 (Table 1). The CsA dosage was adapted to 25 mg/day (CsA blood level: 49 ng/mL) (Table 1), and blood levels were stable during the course of therapy (Figure 1A). The viral load declined three log10 after 4 weeks of triple therapy and became negative in week 8, resulting in SVR 12. Bilirubin blood levels normalized during the course of the triple therapy, unrelated to any stenting (Figure 1A). Therefore, it is likely that the elevation of bilirubin blood levels was caused by HCV activity rather than cholestasis. Side effects included dyspnea and anemia, leading to blood transfusion once every other week and the application of 30 µg erythropoietin (EPO) once a week. The decline in Hb levels from 12.1 g/dL initially to a minimum of 7.2 g/dL is shown in Figure 1A. Red blood cell, leukocyte, and platelet counts as well as CsA blood levels were checked weekly. At the end of PI therapy the CsA dosage was set back to 50 mg BID (CsA blood level: 91 ng/mL).
Case 2
Side diagnoses included psoriasis vulgaris and schizophrenia. Aripiprazole was discontinued 3 months before starting antiviral therapy. Immunosuppression was switched from 4 mg BID tacrolimus (Tac) to 100 mg BID CsA due to better controllability (CsA blood level, 52 ng/mL). MMF at 500 mg BID was continued. The patient presented in a good state of general health with a BMI of 29.4 (176 cm, 91 kg).
We started a 4-week lead-in period with 180 µg peg-IFN-alpha-2a once a week and 1,200 mg RBV/day to assess the virological response, tolerability, and compliance. Within the lead-in, the viral load decreased one log10 and serum transaminases normalized (Figure 1B). Therapy was well tolerated with good compliance. We started 750 mg three times a day (TID) TVR in week 5. CsA, RBV, and IFN doses were reduced to 25 mg BID, 800 mg/day, and 135 µg/week, respectively (CsA blood level: 36 ng/mL) (Table 1). The viral load became negative in week 6, resulting in SVR 12. CsA blood levels were stable (Figure 1B). Side effects included mild anemia and a stage II rash, which was manageable by the application of external steroids. Red blood cell, leukocyte, and platelet counts as well as CsA blood levels were checked once a week. At the end of PI therapy the CsA dosage was set back to 100 mg BID (CsA blood level, 109 ng/mL).
Case 3
A 49-year-old Caucasian male patient underwent LT in 2008 due to HCC, based on chronic hepatitis C genotype 1b-associated liver cirrhosis since 1998. Histological analysis of a liver biopsy 3 years after LT proved HCV reinfection with pluriseptal fibrosis (Desmet stage F3). The viral load measured 1.16×10⁶ IU/mL (COBAS TaqMan HCV test: lower limit of quantification, 25 IU/mL; lower limit of detection, 10 IU/mL). Serum transaminases and bilirubin blood levels were elevated with frequent variations (ALT 124 U/L, ULN <50 U/L; AST 96 U/L, ULN <35 U/L; bilirubin 1.1 mg/dL, ULN 1.2 mg/dL). All other liver values were in the normal range (alkaline phosphatase, gamma-glutamyl transferase, INR, and albumin). Hb, leukocyte, and thrombocyte counts were 13.6 g/dL, 2.3/nL, and 117/nL, respectively (Figure 1C). The patient was pretreated with dual antiviral therapy twice, prior to and after LT, both resulting in non-response. His IL28B genotype was T/T. Side diagnoses included hypertension. Comedication with amlodipine 5 mg/day was discontinued to avoid drug interactions. Due to better controllability, immunosuppression with 8 mg BID Tac was switched to 75 mg BID CsA (CsA blood level, 50 ng/mL). MMF was continued at 500 mg BID. The patient presented in a good state of general health with a BMI of 28.4 (184 cm, 96 kg), justifying another attempt at antiviral therapy.
We initiated triple therapy with 180 µg peg-IFN-alpha-2a once a week, 1,000 mg RBV/day, and 750 mg TVR TID. The CsA dose was adjusted to 25 mg/day during PI therapy (CsA blood level, 48 ng/mL) (Table 1). RBV and peg-IFN doses were reduced to 800 mg/day and 90 µg/week, respectively, in week 8 (Table 1). The viral load became negative in week 4, transaminases improved significantly, and bilirubin blood levels normalized (Figure 1C). Unfortunately, a breakthrough occurred in week 24 and therapy was discontinued. Side effects included reflux symptoms, hypertension, mild anemia, moderate leukopenia, and erythema nodosum along both lower legs (Figure 2) with serological proof of atypical anti-neutrophil cytoplasmic antibodies. All side effects were manageable: hypertension and reflux symptoms by application of 3.125 mg BID carvedilol and 20 mg BID esomeprazole; leukopenia improved on administration of 30 million IU granulocyte colony-stimulating factor (G-CSF). The anemia and erythema nodosum abated when therapy was discontinued. The decline in white blood cell count from an initial 2.3/nL to a minimum of 1.2/nL is presented in Figure 1C. Red blood cell, leukocyte, and platelet counts as well as CsA blood levels were checked once a week. At the end of PI therapy the CsA dosage was set back to 100 mg BID (CsA blood level, 45 ng/mL).
Rejection or impairment of liver function did not occur in any of the cases during triple therapy. No additional liver biopsies were performed.
Discussion
Antiviral therapy of recurrent hepatitis C after LT is an issue of great interest, particularly as the time course of fibrosis progression is accelerated under immunosuppression. 2 Improved antiviral therapy is urgently required, as SVR rates for dual therapy in HCV-positive patients after LT are quite poor. 3,4 However, management of hepatitis C post LT is complicated by drug interactions, side effects, and pretreatment. 11,12 Moreover, little data exist on HCV combination therapy after LT. Coilly et al presented preliminary results of triple therapy with BOC or TVR in terms of efficacy and safety in LT recipients and reported their experience with BOC in five liver transplant genotype 1 patients, with an estimated oral clearance reduction of 50% with CsA and up to 80% with Tac. 13,14 Coilly et al also reported the first multicenter study of 29 patients with HCV genotype 1 recurrence treated with triple therapy after LT in five French liver transplant centers, with a complete early virological response in 71% of patients treated with BOC and 73% of patients treated with TVR. 15 However, the final results of this study showed SVR 12 in only 20% of TVR- and 71% of BOC-treated patients. 16 The American three-center study by Pungpapong et al showed SVR 24 in 67% of TVR- and 45% of BOC-treated patients. 17 Regarding combination therapy with SOF after LT, to date there exist only a few case reports. 18,19 One larger study presented during the 2014 European liver congress, with compassionate use of SOF in 104 patients after LT, reported 62% SVR 12. 20 We report successful TVR-based triple therapy of recurrent HCV infection after LT in two therapy-naïve HCV genotype 1-infected patients. In one pretreated genotype 1-reinfected patient after LT, a breakthrough occurred under antiviral triple therapy with TVR.
In all three cases, IL28B polymorphism was determined as a predictor of response to therapy. Previous studies have shown that, after LT, patients harboring IL28B C alleles are more susceptible to successful antiviral therapy. 21 However, in the presented cases the patients had the unfavorable genotypes C/T and T/T.
During triple therapy, the immunosuppressive agent should be changed to CsA whenever possible 17 due to better controllability. A four-fold dose reduction in CsA is recommended with TVR. 15 Anemia is reported to be the most important adverse event of antiviral therapy in LT patients, mainly due to RBV-induced hemolysis. 13 In the presented cases, anemia was manageable by RBV dose reduction, application of EPO, and blood transfusions, which did not affect SVR. Leukopenia could be controlled by the administration of G-CSF, and no serious infection occurred. Red blood cell counts, leukocytes, platelet counts, and CsA blood levels were checked once a week. Therefore, very early therapeutic interventions, involving dose reductions, especially of RBV and CsA, were possible.
However, monitoring of RBV plasma concentration could predict decreases in hemoglobin and avoid transfusions, since increased plasma concentrations of RBV as a result of renal dysfunction have been found in HCV patients treated with TVR. 22,23 TVR drug monitoring could also be important after LT in order to maximize SVR rates and minimize side effects and drug-drug interactions. 24 In the presented cases, none of the patients developed kidney failure under triple therapy. Chronic HCV infection has been described in association with various skin disorders, and some skin lesions have been reported as side effects of IFN. 25 In particular, erythema nodosum seems to be related to IFN therapy. 26 Dermatological toxicity is also known as a specific side effect of PI. In case two, a grade II rash occurred, and patient three developed erythema nodosum. Both improved at the end of therapy. Overall, therapy with TVR was well tolerated without major complications.
Conclusion
TVR-based antiviral triple therapy in two therapy-naïve HCV genotype 1-reinfected patients after LT was well tolerated without major complications, resulting in SVR 12. In one former non-responder, a breakthrough occurred after 24 weeks of triple therapy. Doses of immunosuppressive drugs need to be adapted. Continuous monitoring of red blood cell counts, leukocytes, platelet counts, and blood levels of immunosuppressive agents should be performed with regard to dose reductions. This case series emphasizes that triple therapy with TVR is an effective treatment for therapy-naïve HCV genotype 1-reinfected patients after liver transplantation. However, therapeutic options for pretreated patients need to be improved, and larger studies with second-generation PIs and RdRpIs in combination with peg-IFN and RBV, all-oral peg-IFN-free regimens, and upcoming direct-acting antivirals are needed.
Figure 2. Erythema nodosum under HCV triple therapy. Note: in case three, erythema nodosum occurred as a side effect under PI-based antiviral triple therapy. Abbreviations: HCV, hepatitis C virus; PI, protease inhibitor.
Point-of-care testing of plasma free hemoglobin and hematocrit for mechanical circulatory support
Hematological analysis is essential for patients who are supported by mechanical circulatory support (MCS). The laboratory methods used to analyze blood components are conventional and accurate, but they require a mandatory turnaround time for laboratory results and, because of toxic substances, can also be hazardous to analysis workers. Here, a simple and rapid point-of-care device is developed for the measurement of plasma free hemoglobin (PFHb) and hematocrit (Hct), based on colorimetry. The device consists of a camera module, a miniaturized centrifuge system, and custom software that includes the motor control algorithm for the centrifuge system and the image processing algorithm for measuring the color components of blood from the images. We show that our device measured PFHb with a detection limit of 0.75 mg/dL in the range of (0–100) mg/dL, and Hct with a detection limit of 2.14% in the range of (20–50)%. Our device had a high correlation with the measurement methods generally used in clinical laboratories (PFHb R = 0.999, Hct R = 0.739), and the quantitative analysis yielded a precision of 1.44 mg/dL for a PFHb value of 14.5 mg/dL, 1.36 mg/dL for a PFHb value of 53 mg/dL, and 1.24% for an Hct of 30%. Also, the device can measure without any pre-processing compared to the clinical laboratory method, so results can be obtained within 5 min (versus about 1 h for the clinical laboratory method). Therefore, we conclude that the device can be used for point-of-care measurement of PFHb and Hct for MCS.
Also, the measurement of PFHb and Hb in clinical laboratories requires a pre-analytical process for each assay. PFHb requires centrifugation, because blood cells and plasma must be separated before measurement, which takes about 10 min 18. A common method for Hb measurement is spectroscopy using the cyanide methemoglobin method. This method offers high sensitivity even for small quantities, but to produce cyanmethemoglobin, chemical substances such as potassium cyanide (KCN) must be used to lyse red blood cells for hemoglobin release and to convert the released hemoglobin. Since KCN is a toxic chemical that can lead to death even in small quantities, it has the potential to be harmful to the health of the user (the analyst in the laboratory) 18,19.
Since the parameters PFHb, Hct, and Hb are the most common and necessary for the diagnosis of blood-related diseases, various studies have been carried out on how to measure these parameters 20. Recently, there has been a study on developing a hemolysis diagnosis device using a mobile phone that measures the PFHb level using color difference values 18. It has the advantage of being able to diagnose on-site and show the results quickly with a mobile phone, but it takes more than 10 min to separate plasma and blood cells. In addition, the measurement results may vary depending on the camera performance of the mobile phone and the ambient light. Detection methods for Hb have also been developed based on the photothermal response of iron oxide in the Hb 21. This method can measure Hb using very small amounts of blood, without specific reagents, and with high accuracy, but it requires an expensive laser module to produce the photothermal effect. In addition, it can be difficult to measure PFHb, because the level of PFHb is generally lower than the measurement limit (0.12 g/dL) of this method. Since the mentioned studies can only measure a single parameter, they are not suitable for patients supported by MCS systems.
Therefore, in this study, we present a device design and method for analyzing the levels of the hematological parameters PFHb, Hct, and Hb that can be applied to MCS. The device was developed using colorimetric analysis, based on the fact that as the level of PFHb increases, the color of the plasma becomes redder 22,23. Also, a customized channel cartridge was used to measure all three parameters with a small amount of blood (35 µL), and the centrifuge system was integrated into the device, so that centrifugation and analysis could be performed within a single instrument. To verify the performance of the device and its suitability for use in the MCS environment, we performed device validation with blood samples obtained during ECMO animal experiments.
Results
Device for point-of-care measurement. To develop a portable and stand-alone device for use in MCS environments, we designed a device that integrates the centrifuge and the main control part (Fig. 1a and Supplementary Fig. S1). The centrifuge part consists of: (1) a BLDC motor, which is the main part of the centrifuge system; (2) a holder to help fix the customized channel cartridge and to facilitate the acquisition of microchannel images; (3) a light source that provides a constant light intensity inside the device; and (4) a camera module (Pi camera ver. 2.1) that is compatible with the Raspberry Pi 3. The main control part of the device consists of (1) a user-friendly touch screen to operate the device, and (2) a Raspberry Pi 3 to control the motor, camera module, and image processing 24. The overall dimensions of the device are 290 mm (L) × 115 mm (W) × 130 mm (H), with a weight of 1.1 kg, and the entire housing was fabricated using a 3D printer (Stratasys F123 Series, Stratasys, Israel) (Fig. 1b,c). The channel cartridge is designed for centrifugation and imaging of a small amount of blood (Fig. 1d).
[Figure 1: the touch screen of the main control system is on the left side; the centrifuge system is on the right side, with the camera module placed on a cover above it to obtain a channel image of the cartridge; (d) the centrifuge system consists of motor, motor holder, rotor, and cartridge holder, with a customized channel cartridge fitted to the cartridge holder.]
To operate the device, the cartridge is first mounted on the holder, and the user presses the start button on the touch screen, whereupon the centrifugation, the acquisition of the image, and the image processing from the camera are sequentially performed by custom software. The software includes a motor control algorithm for centrifugation, and an image processing algorithm for analysis. This integrated system allows the analysis to be performed in a short time, without complicated procedures.
Image processing algorithm for blood analysis. Figure 2 shows the operating flowchart of the entire system. The image analysis process begins with the segmentation of the region of interest (ROI) in the image acquired after centrifugation. Afterwards, the PFHb measurement process and the Hct measurement process are performed simultaneously in the program. The ROI at the initial step of image processing is segmented to cover the entire channel of the cartridge. Red, green, and blue (RGB) values are extracted for each pixel, and the channel is divided into the region of red blood cells (RBCs) and the plasma region based on a specific threshold of red color. In order to measure the PFHb level, a second ROI (2nd ROI) with a size of (20 × 100) pixels is segmented from the previously distinguished plasma region. RGB values are extracted from this 2nd ROI, and then converted to CIELab values. CIELab is a color space defined by the International Commission on Illumination, which makes it possible to closely match the color difference that the human eye can detect with the color difference expressed as numerical values in the color space 25. The color components are represented by L*, a*, b*, where L* is the brightness, a* is the degree of red and green, and b* is the degree of yellow and blue. L*, a*, b* values were extracted from the region of the black marker and from the plasma region in the 2nd ROI, respectively 18,26. The chroma difference (ΔC) was calculated using the a* and b* values of the 2nd ROI, and converted to PFHb levels by the calibration curve (Eq. 1):

$$\Delta C = \sqrt{(a_m - a_s)^2 + (b_m - b_s)^2}$$

where a_m and b_m are the CIELab components of the region of the black marker, and a_s and b_s are the CIELab components of the plasma region in the 2nd ROI. To exclude the change of the color components caused by the brightness, the difference between the average brightness value (L*) of the black marker obtained from the calibration images and the brightness value of the black marker obtained from each sample image was added as an offset value to the chroma difference.
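To make this pipeline concrete, the following minimal Python sketch implements the chroma-difference computation of Eq. (1), assuming scikit-image for the RGB-to-CIELab conversion. The ROI coordinates, the two-segment calibration slopes, and the breakpoint are illustrative placeholders, not the authors' actual values, which are not given in the text.

```python
import numpy as np
from skimage import color, io

def mean_lab(lab_img, box):
    """Mean (L*, a*, b*) over a rectangular region (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = box
    return lab_img[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)

def chroma_difference(img_rgb, marker_box, plasma_box):
    """Eq. (1): chroma difference between black-marker and plasma ROIs."""
    lab = color.rgb2lab(img_rgb.astype(float) / 255.0)
    L_m, a_m, b_m = mean_lab(lab, marker_box)
    L_s, a_s, b_s = mean_lab(lab, plasma_box)
    dC = np.sqrt((a_m - a_s) ** 2 + (b_m - b_s) ** 2)
    return dC, L_m

def pfhb_from_chroma(dC, L_offset=0.0):
    """Two-segment calibration; slopes and breakpoint are placeholders."""
    dC = dC + L_offset               # brightness offset from the black marker
    if dC < 10.0:                    # placeholder breakpoint (~20 mg/dL PFHb)
        return 2.0 * dC              # placeholder low-range fit
    return 4.5 * dC - 25.0           # placeholder high-range fit

# Usage (hypothetical ROI coordinates):
# img = io.imread('channel.jpg')
# dC, L_m = chroma_difference(img, (10, 30, 5, 25), (40, 140, 5, 25))
# pfhb = pfhb_from_chroma(dC)
```

The two-branch calibration mirrors the observation, reported later in this section, that the color response changes slope around a PFHb level of about 20 mg/dL.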
To measure the Hct, the coordinates of the lowest and the highest rows among the pixels of the RBC region are obtained, and the distance between these rows (L1) represents the volume of the RBCs. The distance between the highest row of the plasma region and the lowest row of the red blood cell region is expressed as the length (L2), which represents the total blood volume in the channel of the cartridge. The Hct level is calculated as the ratio of L1 and L2, as in Eq. (2):

$$\mathrm{Hct}\,(\%) = \frac{L_1}{L_2} \times 100$$

where Hct is the percentage of the RBC volume in the whole blood volume, which is generally about three times the hemoglobin level (in g/dL). Therefore, we used the Hct measurement to estimate the hemoglobin level (Eq. 3): 27

$$\mathrm{Hb}\,(\mathrm{g/dL}) = \frac{\mathrm{Hct}\,(\%)}{3}$$

Standard curve for the quantification of plasma color. The method to measure the level of PFHb is based on the phenomenon that as hemolysis becomes more severe, the redness of the plasma increases. To obtain the change of color intensity according to the degree of hemolysis, we induced severe hemolysis in a blood sample of swine. The stressed blood was centrifuged to collect only the plasma, and we adjusted different levels of PFHb by diluting the collected plasma to obtain the range required in the clinical setting. Images of each diluted plasma sample were obtained using our device, and a relationship was obtained by comparing the color intensity of the plasma extracted from the image with the PFHb level measured by the actual lab test. Figure 3a shows the change of intensities in the RGB channels according to PFHb levels. It shows that the intensity of the red channel is higher than that of the green and blue. Also, Fig. 3b shows the change of intensities in the L*, a*, b* color space according to the PFHb levels. The intensity of light behaves similarly to the RGB channels, but the a* and b* components show a different pattern of change compared to the RGB channels. Therefore, because the intensity of the light can be interpreted to have a significant effect on the RGB channels, the chroma difference, which reflects only the changes in the a* and b* components, was adopted as the standard curve, to exclude the change of light. As shown in the results, since the gradient change of the color channel is not linear and varies around a specific level of PFHb (20 mg/dL), different calibration curves were obtained on either side of this level; Figure 3c shows the resulting calibration curves.

Figure 4a is an image of the channel cartridge obtained after centrifugation, and the analysis algorithm started from this image, according to the procedure in Fig. 2. Blood samples were obtained at one-hour intervals during ECMO experiments, and the collected blood samples (n = 12) were compared between the developed device and the lab test results. Our device took about 5 min to produce the results of the analysis. Also, we found that the results measured by our device were very similar to those of the lab test. Regression analysis and Bland-Altman analysis were performed on the PFHb and Hct to verify the reliability of our device 28. The PFHb results measured by our device and the lab test correlated well (R value of 0.999, n = 12) (Fig. 4b). In addition, Bland-Altman analysis of the PFHb (Fig. 4c) showed a mean bias of -0.66, with a 95% confidence interval of (−2.44 to 1.12)%. The correlation coefficient for Hct is 0.739 (Fig. 4d), and all data lie within the 95% confidence interval in the Bland-Altman analysis (Fig. 4e).
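The following is a minimal sketch of Eqs. (2) and (3) and of the Bland-Altman statistics used in the validation above. The function and variable names are ours, and the limits-of-agreement convention (mean bias ± 1.96 SD) is the usual one, which the text does not state explicitly.

```python
import numpy as np

def hct_and_hb(rbc_rows, blood_rows):
    """Eqs. (2)-(3): Hct from pixel-row extents; Hb estimated as Hct/3."""
    L1 = rbc_rows[1] - rbc_rows[0]        # height of the packed RBC column
    L2 = blood_rows[1] - blood_rows[0]    # height of the total blood column
    hct = 100.0 * L1 / L2                 # Eq. (2), percent
    hb = hct / 3.0                        # Eq. (3), g/dL
    return hct, hb

def bland_altman(device, reference):
    """Mean bias and 95% limits of agreement between two methods."""
    diff = np.asarray(device, float) - np.asarray(reference, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```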
The Bland-Altman results show that more than 95% of the differences between the results of the device and the lab test lie within this performance criterion. Table 1 also shows the comparison of the Hb levels measured by our device with those of the lab tests. Also, the detection limit (LOD) of the developed device was obtained from the regression curve; the LODs were 0.75 mg/dL for PFHb and 2.14% for Hct.
The precision was quantified by measuring each blood sample three times; blood samples corresponding to PFHb values of 14.5 mg/dL and 53 mg/dL and an Hct of 30% were measured separately by our device and the reference methods. Table 2 shows the precision results: our device measured 15.13 mg/dL (for a PFHb value of 14.5 mg/dL), with a standard deviation (SD) and coefficient of variation (CV) of 1.44 mg/dL and 9.49%, respectively. For a PFHb value of 53 mg/dL, the device measured 53.08 mg/dL, with SD and CV of 1.36 mg/dL and 2.56%, respectively. In addition, for an Hct of 30%, an average of 29.13% was measured, with SD and CV of 1.24% and 4.26%, respectively. Therefore, these results show that the proposed device makes it possible to measure various blood parameters with good precision in the MCS environment.
Discussion
We have described a device that automatically analyzes blood based on colorimetric methods; its advantage is that it is a portable blood analysis device that can be used for immediate measurement in the MCS environment.
Patients supported with an MCS system are at great risk of developing hematological problems, such as hemolysis, bleeding, and thrombosis. Because of the importance of blood transfusion, devices exist that continuously measure total hemoglobin in the clinical setting 29,30. However, there is no device that can simultaneously measure the diverse hematological parameters, including PFHb, that are necessary for the diagnosis of hemolysis. In addition, the methods performed in the clinical setting take a long time to produce analysis results, and a specialist who can operate the analytical device has to be present. Therefore, in this study, we aimed to develop a device that can rapidly analyze the hematological parameters of patients on MCS systems, taking into account user convenience and the time consumed for analysis.
To measure hematological parameters in the clinical setting, the blood sample must be treated by chemical methods or centrifugation, and more than (50-200) µL of blood is required for each lab test. We therefore embedded a small centrifuge system into the device to simplify the complicated procedure and enable immediate analysis on site. Our device also has the advantage that it can be used without any preprocessing procedures, because it simply uses the color values of the image of the separated blood. In addition, the device can measure and estimate all three hematological parameters presented in this study with only 35 µL of blood. The measurement range of PFHb is (0-100) mg/dL with an LOD of 0.75 mg/dL; Hct can be measured in the range (20-50)% with an LOD of 2.14%; and Hb can be estimated from the Hct measurement. For each measured parameter (PFHb, Hct), the Bland-Altman analysis showed that the results were within the 95% confidence intervals, demonstrating the high accuracy of our device compared with lab tests. In addition, a quantitative analysis was performed by comparing the clinical lab tests with the developed device. The spectrophotometric method of measuring PFHb has a precision of SD 5.2 mg/dL and CV 9.4% for a PFHb value of 5.2 mg/dL in the range of (0.3-62.5) mg/dL. This method cannot be automated, because the technician has to estimate the PFHb by measuring the absorbance at three wavelengths (380 nm, 415 nm, 450 nm) 31. The Hb measuring device ADVIA 120 Hematology analyzer (Siemens AG, Germany) has a precision of SD 0.14 g/dL and CV 0.93% for an Hb value of 15 g/dL in the range of (0.0-22.5) g/dL. This device has very high precision, but requires toxic chemicals and pre-processing of the sample. By contrast, since our device separates blood cells and plasma with its own centrifuge system and simply uses color values to measure blood parameters, it does not require toxic chemicals or pre-processing, and can measure in a short time (about 5 min). The precision analysis results of the developed device are SD 1.44 mg/dL and CV 9.49% for a PFHb value of 14.5 mg/dL, and SD 1.36 mg/dL and CV 2.56% for a PFHb value of 53 mg/dL. In addition, for the Hb value of 10 g/dL derived from an Hct of 30%, the precision is SD 0.40 g/dL and CV 4.18%. Therefore, our device has the potential to replace the existing analysis methods in the MCS environment.
A limitation of this study is that validation was performed for a limited range of PFHb, Hct, and Hb levels. Since the condition of the animal should be kept stable during the experiment, the blood samples used for the verification of the proposed device did not deviate significantly from the normal blood range. It is reported that if PFHb levels rise above a certain hemolysis range (mild), the risk of acute renal failure and thrombus formation increases. Therefore, we have shown highly accurate results in this mild hemolysis range of (15-60) mg/dL, which requires additional attention when monitoring patients 32. Also, the accuracy is expected to be even more promising above the mild hemolysis range (> 60 mg/dL), because the higher the concentration of PFHb, the stronger the intensity of the color. In addition, the R value of Hct is relatively lower than the R value of PFHb. This result can be presumed to be due to the limited range of measurements, since during the animal experiments the Hct range of the blood did not change significantly. However, while the normal Hct level in swine is usually (35-40)%, the results cover the range of Hct < 30% and Hb < 12 g/dL that is considered anemia 33,34, and they show that the device is accurate in this range. Therefore, the performance of our device should be sufficient to analyze Hct and Hb levels. In addition, for the quantitative analysis, three replicates could be slightly insufficient; but in the MCS environment, blood cells are constantly stressed by pumps, making it difficult to obtain exactly the same value even if several samples are taken within a short time from the same animal. Therefore, we performed the quantitative analysis on only three samples obtained from the same animal, which were completely consistent with the laboratory test results. However, since the CV is less than 10% even with three replicates, it is expected that higher precision can be obtained through further studies to improve the performance of the device.

In conclusion, we propose a PFHb and Hct measuring device that can be used rapidly and intuitively in the MCS environment. The device integrates a centrifuge with a control and analysis system to reduce the time from sample collection to analysis results. In addition, since PFHb and Hct are measured based on the color information of the centrifuged blood image, the technique does not require complicated preprocessing or reagents, and the device can be easily used without professional training. Therefore, we anticipate that our device will increase the convenience of diagnosing blood parameters in patients; it also has potential as an analytical device that is rapid, easy to use, and portable, meeting the needs of the MCS environment. Finally, we intend to further improve the performance of the device as on-site diagnostic hematological analysis equipment. A possible plan is to apply colorimetry and spectrophotometry simultaneously to improve the measurement resolution and to measure more hematological parameters. We also intend to develop a network system using mobile software that enables measured results and records to be checked even when users are not in the field.
Methods
Centrifuge system. The centrifuge system consists of a 12 W brushless DC motor (EC-max22, Maxon Motor, Switzerland) and a motor driver unit (ESCON 50/5 Servo Controller, Maxon Motor, Switzerland) for stable operation at high speed 35. Chip-on-board LEDs (SY-LD1003, SMG, China) were used to give a strong light intensity at low power (12 V, 100 mA), and resistors (120 Ω) were used to deliver a constant current to both LEDs so that they could maintain a constant light intensity. The two LEDs were placed on both sides of the cartridge holder, so that the light intensity did not vary depending on the position of the channel cartridge. In addition, optical diffusers matched to the size of the LEDs were fabricated from acrylic plate (3 mm), so that the light was not directly transmitted to the channel cartridge and could spread uniformly in the centrifuge chamber. The camera module (Pi camera ver. 2.1) is placed in the lid of the centrifuge chamber to obtain a front view of the cartridge (Supplementary Figs. S1 and S2).
Software process. To carry out the analysis, the user puts the customized channel cartridge into the holder and presses the start button on the user interface to activate the entire system. To centrifuge the blood, the motor rotates at about 200 × g (8,000 RPM) for 3 min 35. When it has slowly decelerated to 100 RPM, the stop signal is transmitted. The holder is then magnetically coupled with the housing and held in a fixed position to capture the image. Once the position of the holder is fixed, at the same time as the stop signal, the image of the channel captured with the camera module is transferred to the image processing algorithm. The image processing is described in detail in the "Image processing algorithm for blood analysis" section above; when the analysis result is obtained, it is shown on the user interface. The user interface uses a 5-inch touch screen for easy operation. The custom software was created using Qt Creator to construct a graphical user interface (GUI), and all code, including motor control and image analysis, was written using Python 3.6.1.
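A plausible control-flow sketch of this sequence on the Raspberry Pi is shown below. The GPIO pin, the PWM-to-RPM mapping for the motor driver's speed-command input, and the fixed settling delay are all assumptions for illustration; the authors' firmware is not published in the text.

```python
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

MOTOR_PWM_PIN = 18                          # hypothetical speed-command pin

def run_measurement(spin_seconds=180, duty_for_8000_rpm=60.0,
                    image_path='/tmp/channel.jpg'):
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOTOR_PWM_PIN, GPIO.OUT)
    pwm = GPIO.PWM(MOTOR_PWM_PIN, 1000)     # 1 kHz PWM carrier
    pwm.start(duty_for_8000_rpm)            # spin at ~8,000 RPM (~200 x g)
    time.sleep(spin_seconds)                # centrifuge for 3 min
    pwm.ChangeDutyCycle(0.0)                # command stop
    time.sleep(10)                          # assumed settling time while the
                                            # holder magnetically self-aligns
    with PiCamera() as camera:
        camera.capture(image_path)          # image handed to the analysis
    pwm.stop()
    GPIO.cleanup()
```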
Customized channel cartridge and holder design. The dimensions of the channel cartridge are 29.5 mm (L) × 24 mm (W) × 6 mm (H), and the dimensions of the channel in the cartridge are 15 mm (L) × 1.5 mm (W) × 2 mm (H). The channel cartridge was developed specifically for this system and made of acrylic material using a laser cutting machine (MYM-1409, MYCNC, Korea). The cartridge consists of three parts (the lower, middle, and upper), all with the same thickness of 2 mm. The hole in the upper part is the blood chamber; after the blood is dropped into the chamber, it is covered with a black cap to prevent the blood from flowing outside (Supplementary Fig. S3). The color of the cap serves as the marker for the chroma difference (ΔC) when analyzing the images. The holes on both sides of the blood chamber are used to fix the cartridge at a given position in the holder.
The cartridge holder consists of two parts, the lower and the upper, which were manufactured using a 3D printer (Stratasys F123 Series, Stratasys, Israel). The lower part of the holder has a magnet on the opposite side of the channel, so that the induced magnetic coupling between the housing and the holder aligns the cartridge at the specified location at all times. At the bottom of the channel, a white acrylic plate (25 mm (L) × 8 mm (W) × 3 mm (H)) is inserted to emphasize the color of the centrifuged plasma. The upper part of the holder is designed so that the camera can easily acquire the channel image and the ROI (Supplementary Fig. S4). In addition, the axis of rotation of the motor is aligned with the central axis of the blood chamber in the cartridge, so that when centrifugal force is applied, all the blood can move into the connected channel.

Calibration of PFHb. Figure S5 shows the procedure for the calibration of PFHb. First, about 50 mL of blood was collected in a conical tube and stressed for about 1 min using a vortex stirrer to cause severe hemolysis. We used the centrifuge to obtain only the plasma of the stressed blood, and this plasma was defined as the reference solution (Original). The original solution was diluted with phosphate-buffered saline (PBS pH 7.4, Gibco BRL, USA) at ratios of (1:10, 1:15, 1:20, 1:30, 1:50, 1:65, 1:85, and 1:100), to obtain samples within the appropriate range of PFHb levels. The samples with more than 1,000 mg/dL of PFHb were visibly very red, and the red color of the solution decreased depending on the degree of dilution. The diluted solutions at each ratio were sent to the laboratory to obtain the gold-standard PFHb levels that formed the basis of the calibration curve. According to the guideline for ECMO patients, hemolysis is divided into four stages: normal (< 30 mg/dL), mildly elevated ((30-50) mg/dL), moderately elevated ((50-70) mg/dL), and critical (> 70 mg/dL) 32,36.

Animal preparation and blood sample acquisition during MCS. All animal experiments were performed in accordance with the relevant guidelines and regulations. The study is in compliance with the ARRIVE guidelines for in-vivo studies carried out on animals. Two female swine of 80-100 kg were fasted overnight with free access to water preoperatively. Premedication consisted of a subcutaneous injection of atropine 0.05 mg/kg and intramuscular injections of xylazine 3 mg/kg and zoletil 5 mg/kg. After induction of anesthesia, the swine were intubated via the mouth with a 7.5 mm endotracheal tube and connected to a mechanical ventilator. Anesthesia was maintained with sevoflurane 1%, supplemented by oxygen 2 L/min with an inspired oxygen fraction of 40%, and by vecuronium bromide 0.1 mg/kg for muscle paralysis. Using the Seldinger technique, two 17-Fr cannulae were catheterized into the femoral artery and the opposite-side femoral vein. After administration of heparin 400 unit/kg, two kinds of ECMO devices, SACCS-01 (CEBIKA, Korea) and Bioconsole-550 (Medtronic, Watford, UK), were connected to the cannulae for the MCS. Blood samples of 10 mL were obtained via the arterial line every hour after the ECMO devices started. A 35 µL aliquot of each blood sample was used for the test of the device, and the remaining blood was used for laboratory tests covering PFHb, activated clotting time (ACT), arterial blood gas analysis (ABGA), and complete blood count (CBC), including Hct and Hb. We finally sacrificed the animals with a high-dose potassium chloride injection according to the animal experiment guideline.
Population structure of Megabalanus peninsularis in Malpelo Island, Colombia
Megabalanus peninsularis is a key species on the rocky shore, particularly in Malpelo Island in the Eastern Tropical Pacific (ETP), where it dominates the mesolittoral zone and the infralittoral fringe. The size structure of M. peninsularis was determined to infer the population state (pyramid model) by measuring the basal diameter of individuals (n = 837) around the oceanic island. The species' population distribution followed a bimodal pattern (classes I-II and VII). The expansive pyramid model, with a relatively higher number of juveniles than adults, suggests that the population is growing.
INTRODUCTION
Barnacles are cosmopolitan organisms that dominate rocky shores around the world, colonizing from the supralittoral to the infralittoral depending on the species and environmental conditions (Chan 2006). These crustaceans can modify the littoral community assemblage due to their particular biological characteristics, like rapid growth, high fertility, and high tolerance to extreme weather conditions, which allow them to better compete for space (Penchaszadeh et al. 2003, Chan 2006, Tapia & Navarrete 2010).
The life cycle of barnacles comprises a pelagic larval stage and a sessile stage, first as a juvenile and then as an adult when it becomes reproductive (Chan 2006). The duration of the life cycle is highly variable (Muko et al. 2001), with reports suggesting 3.75 years on average (Jeffery & Underwood 2001, Chan & Williams 2004, Golléty et al. 2008) and up to 7 years for Balanus amphitrite (Calcagno et al. 1998).
The growth rate and age at sexual maturity (tm) of barnacles vary according to local environmental conditions (Zvyagintsev & Korn 2003, Heather et al. 2005, Chan 2006). Chan & Williams (2004) studied barnacle growth after settlement; Tetraclita squamosa grew differently in Middle Bay and Heng Fa Chuen Bay, at 0.8 and 1.16 mm month⁻¹, respectively. Similar results were found for Tetraclita japonica, from 1.0 to 1.4 mm month⁻¹. Sexual maturity also varied: individuals of Capitulum mitella reached sexual maturity at 9-12 months in Hong Kong, and at 2 years in China (Lin 1993). The average size of B. amphitrite recruits varied between 3.9 mm (Zvyagintsev & Korn 2003) and 4.8 mm (El-Komi & Kajihara 1990) across locations.
Population structure studies involve measuring a number of individuals and assigning them to a particular size class (frequency), employing a previously defined and ranked independent variable (Akcakaya et al. 1999). Three types of population models (also called pyramid models) are proposed in the literature, based on the nature of the size-class distribution: (i) growing or in expansion, characterized by a high proportion of juveniles compared to other classes (positive skewness); (ii) stable, with a high proportion of adults but a significant relative percentage of juveniles (normal distribution); (iii) regressive, in which adults dominate and the recruitment of new individuals into the population is limited or low (negative skewness; Bühler & Schimd 2001, Hengland et al. 2001).
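As a brief illustration of this classification, the sign of the skewness of a measured size distribution can be mapped onto the three pyramid models; the tolerance threshold in this Python sketch is an arbitrary choice for illustration, not a published criterion.

```python
import numpy as np
from scipy.stats import skew

def pyramid_model(diameters, tol=0.2):
    """Classify a size distribution into one of the three pyramid models."""
    g1 = skew(np.asarray(diameters, float))
    if g1 > tol:
        return "expanding"      # juvenile-dominated, positive skew
    if g1 < -tol:
        return "regressive"     # adult-dominated, negative skew
    return "stable"             # near-normal distribution
```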
The population structure allows us to infer the population condition (state) and predict population growth (Akcakaya et al. 1999). It also serves as a baseline in monitoring programs, to compare how local changes in climate or oceanographic conditions (new potential habitats) and human impact (e.g., pollutants) affect reproductive/recruitment success and the viability of the species (Bak & Meesters 1999, Díaz et al. 2000, Gori et al. 2011). Population structure analysis helps determine the viability of the population through time (Bühler & Schimd 2001, Hengland et al. 2001, Rockwood 2006).
According to Gilmore (2004) and Macpherson & Scrosati (2008), the population structure of a particular species is specific to the area studied (location and depth) at the moment it is measured. For key species (ecosystem functioning), the population size structure and the pyramid model have been used as cost-effective indicators of population health (Minchinton & Scheibling 1991, Kipson et al. 2014). These data enable the implementation of adaptive management strategies to safeguard local populations, which is one of the objectives of conservation (Babcock et al. 2010, Linares et al. 2012, Santangelo et al. 2012).
Megabalanus peninsularis is the dominant cirriped of the rocky shore in Malpelo Island (Mayor et al. 2007, García et al. 2012), a Fauna and Flora Sanctuary (UNESCO) located in the Eastern Tropical Pacific (ETP). This species has been reported in the ETP from Cabo San Lucas, Mexico, to the Galapagos Islands in Ecuador (Gómez 2003, Witman & Smith 2003, Lozano-Cortés & Londoño-Cruz 2013). Research on the rocky shore ecosystem is limited in the Colombian Pacific as well as in Malpelo Island; most studies are taxonomic registers (García et al. 2012) with no ecological data. For this area of Colombia, only 6 studies related to coastal species taxonomy have been published, none of them related to population structure (Venail 2002, Zapata & Vargas-Angel 2003, Rodríguez-Rubio & Giraldo 2011, Sánchez et al. 2011, Velasco et al. 2011, Zapata et al. 2011, Lozano-Cortés & Londoño-Cruz 2013). Policy makers and managers have recognized the need for baseline studies on rocky shore organisms to determine the abundance, spatial distribution, population structure, size, and status of key ecological species (INVEMAR 2009, 2010), to enable the design of management efforts in the rocky shore ecosystem. This highlights the importance of the present study, the first on barnacles in Malpelo Island. Our approach relies on the barnacles' benthic stage, as there are limited techniques to follow and quantify gametes, embryos, and the variety of larval stages (Akcakaya et al. 1999, MacPherson & Scrosati 2008). Our main objective was to characterize the population size structure of the dominant species, M. peninsularis, to infer its state around the island (4.0 km perimeter; INVEMAR 2015).
MATERIALS AND METHODS
Malpelo is located at 4º00'08''N and 81º36'3''W, in the central region of the Colombian Pacific Basin. The island and its islets are part of the Malpelo Wildlife Sanctuary (Mayor et al. 2007) and the marine conservation corridor of the Eastern Tropical Pacific (CMAR; Rodríguez-Rubio & Giraldo 2011), which extends 6.5 km² (Mayor et al. 2007). The island is volcanic, composed of rugged basalt rocks (Caita & Guerrero 2000)¹; its perimeter is entirely rocky coastline, predominantly upper slopes averaging 40 degrees (Brando et al. 1992, López-Victoria & Estela 2007, Mayor et al. 2007).
Here, the North Equatorial Counter Current (NECC), which drags warm waters from the Indo-Pacific, converges with the Panama Cyclonic Current (PCC) coming from the north, the north-south Colombia Current (COLC) (passing by Gorgona Island, a continental island of Colombia), the Humboldt Current (HC), and the South Equatorial Current (SEC; Brando et al. 1992, Bessudo et al. 2005; Fig. 1). This convergence of oceanic and coastal currents makes the fauna in Malpelo compelling from an ecological (a stepping-stone island between the central and eastern Pacific for pelagic larval dispersal; Corredor-Acosta et al. 2011) and an evolutionary (endemic species) standpoint.
Sea surface temperatures vary between 23 and 28°C (Rodríguez et al. 2007). Storms in the area produce waves that exceed 5.0 m in height. These strong waves impact the littoral most of the year with enough energy to erode the rock, remove sessile organisms, and affect the succession cycle on the rocky shore. Tides in Malpelo are semi-diurnal, varying between 0.6 and 5.0 m (Bessudo et al. 2005). The datum for the rocky shore has never been calculated in the study area. Waves and tides create wide supralittoral and mesolittoral zones around Malpelo Island.
The East and West sides of Malpelo are exposed to leeward or windward conditions depending on seasonality (dry, wet, and wind pattern). In February 2011, 4 zones of the island were sampled, 2 on the East side (Arrecife and Fantasma) and 2 on the West side (Freezer and Nevera; Fig. 1). These sites were selected because, in a preliminary sampling, they showed the greatest density and coverage of M. peninsularis on the island.
Using a grid placed randomly on the rocky shore, 84, 93, 40, and 68 plots of 50 x 50 cm (Fig. 2) were situated in Arrecife, Fantasma, Nevera, and Freezer, respectively. The unequal number of plots between locations reflects the relative density of M. peninsularis at each site. All individuals within the plots were sampled: 171 in Arrecife, 288 in Fantasma, 169 in Freezer, and 209 in Nevera. We used a Kruskal-Wallis non-parametric test for two independent samples to determine whether there were differences in the diameter of the individuals sampled (population structure) in the West and the East, and also to compare each size class between sites. The basal diameter of 837 individuals was measured using a gauge with a precision of ± 0.05 mm. The species was identified using taxonomic keys (Henry & McLaughlin 1986, Gómez 2003) and by consulting a specialist, Dr. Romanus Prabowo². The sampling setup was not designed to compare depths (meso- vs. infralittoral) or to measure the population in the supralittoral zone, first because it is not the ideal habitat for this species, and second for safety reasons. High waves (3-8 m) did not allow us to remain stationary to measure the organisms, and the steep rock slope (> 40°) also hindered our efforts (Fig. 2). The population size structure of individuals from the mesolittoral zone to the infralittoral fringe was quantified by diving, mainly during high tide and in calm water.
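A minimal sketch of the East-versus-West comparison and of the size-class binning follows, assuming SciPy's Kruskal-Wallis implementation; the class limits shown are approximate, derived from the class boundaries quoted later in the text.

```python
import numpy as np
from scipy.stats import kruskal

def compare_sides(west_diam, east_diam):
    """Kruskal-Wallis test between West and East basal diameters."""
    h_stat, p_value = kruskal(west_diam, east_diam)
    return h_stat, p_value        # pool the two samples if p > 0.05

def size_class_frequencies(diameters_cm, n_classes=11,
                           d_min=0.5, d_max=5.11):
    """Bin diameters into equal-width size classes (I..XI, approximate)."""
    edges = np.linspace(d_min, d_max, n_classes + 1)
    freq, _ = np.histogram(diameters_cm, bins=edges)
    return edges, freq
```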
RESULTS AND DISCUSSION
The densities found at each sampled site were: Arrecife 50.3 ind m⁻², Fantasma 91.4 ind m⁻², Freezer 55.0 ind m⁻², and Nevera 64.6 ind m⁻². No statistically significant differences were found in the diameters of the East and West individuals (Kruskal-Wallis test = 2.76, P = 0.096, n_west = 378 individuals, n_east = 459; Fig. 2); this allowed us to pool the data to make inferences about the whole population. Additionally, no statistical differences were observed when we compared each size class between West and East (Kruskal-Wallis test, P > 0.05).
The relatively low frequency of classes III-V could reflect low recruitment in the past or high mortality rates among these classes. According to theory, low recruitment in some years could be explained by limited suitable habitat available for colonization by larvae, due to occupation by adults or other benthic colonizers (Gilmore 2004, Macpherson & Scrosati 2008, Suárez & Arrontes 2008, Cruz et al. 2010), and by La Niña years (e.g., 2011) in which oceanographic conditions changed in the ETP. Lower temperatures curtail reproductive output and recruitment (Romero et al. submitted). Higher precipitation during La Niña causes an osmotic shock, affecting feeding, food efficiency, and survival (Sanford et al. 1994, Burrows et al. 2010); this may be the case in Malpelo Island.
The survival rates of some classes decreased at high densities of individuals (classes IX-XI), which produce intraspecific competition for space, overgrowth, and smothering (Schubart et al. 1995, Chan & Williams 2004). Individuals of M. peninsularis in Malpelo showed overgrowth by the same species and competition for space with Tetraclita transversus, Tetraclita panamensis, and Chthamalus sp. (Brando et al. 1992, García et al. 2012). High predation also decreases survival rates; this has been measured in populations of Balanus glandula and Diadema antillarum (Menge 2000, Forero 2006). Predators are drawn to intermediate-size prey: small individuals offer smaller sources of energy (cost-benefit), while large individuals escape predation (size refuge); this may explain the low frequency found in classes III-V. Lastly, storms and high waves detach layers of superimposed barnacles from several cohorts and of different size classes. Malpelo's rocky shore is exposed to high hydrodynamics produced by a combination of factors such as waves (over 5.0 m), tides (3.0-5.0 m), oceanic currents, and frequent storms that generate enough energy to erode the rock and detach M. peninsularis from the littoral zone (Bessudo et al. 2005). Heavy storms could reduce the frequency of larger individuals (e.g., Class X: 4.37-4.79 cm and Class XI: 4.8-5.11 cm), creating space for new recruits.
M. peninsularis has a growth rate of 2.08 mm month⁻¹ in the ETP (Galápagos Islands; Witman & Smith 2003). We used these data to calculate the age of individuals belonging to a particular size class (population structure based on age; Akcakaya et al. 1998). On this basis, individuals of the first size class, with a basal diameter between 0.5-0.92 cm (Fig. 3), correspond to 1.0 to 4.4 months old, and Class II to 4.5 to 6.4 months. All the calculations assume a similar growth rate among size classes. The age-based distribution curve for M. peninsularis followed the same frequency and bimodal pattern as in Fig. 3. We also used the sexual maturity age of 6 months for M. peninsularis proposed by Chan (2006) to estimate the proportion of immature and mature individuals in the Malpelo population. This value allowed us to infer that the size frequencies of classes I and II correspond to juveniles (non-reproductive individuals), which constitute 40.1% of the population. Classes III to XI correspond to adults (59.9%, potentially reproductive). In that scenario, adult organisms that die could easily be replaced by new juveniles (Grigg 1975, Oostermeijer et al. 1994, Rodríguez et al. 2007). The juvenile-to-adult ratio in barnacles may indicate equilibrium in the population (Burrows et al. 2010).
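The age conversion used above can be written out explicitly as follows. The constant growth rate is the 2.08 mm month⁻¹ value from Witman & Smith (2003), and, as in the text, we assume it applies equally to all size classes.

```python
GROWTH_RATE_MM_PER_MONTH = 2.08   # Witman & Smith (2003), Galapagos

def age_in_months(basal_diameter_cm):
    """Constant-growth age estimate; 10 mm per cm."""
    return 10.0 * basal_diameter_cm / GROWTH_RATE_MM_PER_MONTH

# Class I upper bound: age_in_months(0.92) ~= 4.4 months, matching the
# text; the lower bound gives ~2.4 months, whereas the text reports
# 1.0 month, so a settlement-size offset may have been applied.
```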
According to our calculations, M. peninsularis recruitment (classes I and II) in Malpelo could have occurred between July and September 2010, nearly 6 months before sampling. In Malpelo Island, benthic populations were influenced by the El Niño events of 2009 and 2010, during which sea surface temperatures rose 1.0 to 1.5°C (IDEAM 2009, León 2010³). The successful recruitment observed in classes I and II could be the result of high sea surface temperature during April and May 2010 (IRI 2015)⁴. El Niño may have improved dispersal and long-distance connectivity, as well as the recruitment of larvae dispersing from downstream populations, increasing the frequency of individuals in the first classes and playing a role in the genetic pool of target species. Higher current velocity (Fig. 1) during El Niño may have increased the probability of larval dispersal from as far as the Central Pacific to Malpelo Island (Corredor-Acosta et al. 2011). Similarly, the high temperatures during May and June 2009 could have generated the high frequency of individuals found in age classes VI-VIII (12.6-18.6 months; IRI 2015)⁴. Comparable settlement peaks were reported for other barnacle species during the warmer months of the year (García & Moreno 1998, Dionisio et al. 2007, Suárez & Arrontes 2008, Savoya & Schwindt 2010). The El Niño events of 2009 and 2010 could thus be associated with the bimodal pattern found for M. peninsularis in Malpelo. These explanatory hypotheses should be tested in future studies; population structure and dynamics have been related to local oceanographic conditions, as demonstrated for Chthamalus stellatus, Chthamalus dalli, Notobalanus flosculus, Semibalanus balanoides, and Balanus glandula (Berger et al. 2006, Macpherson & Scrosati 2008, Suárez & Arrontes 2008, Tapia & Navarrete 2010).
The last two classes of M. peninsularis contained few individuals. Based on our estimations, the oldest individuals (classes X-XI) are around 2.5 years old. We hypothesize that the life cycle of M. peninsularis could be close to 2-3 years, similar to the succession cycle in the rocky environment, owing to the high hydrodynamic conditions in the littoral.
In conclusion, the population of M. peninsularis in Malpelo Island is growing. The high frequency of juveniles suggests a resilient population, as they can replace dead individuals.
ACKNOWLEDGMENTS
To Sandra Bessudo and German Soler (Fundación Malpelo) for financing the field trip to the island. To Romanus E. Prabowo for helping in the identification of the species, to Mauricio Romero for the comments to improve the manuscript, and to Marly Rincón for help in the editing process.
Figure 2. A. Comparison of basal diameters between East and West; no statistically significant differences were found. B. Sampling method, plots of 50 x 50 cm. The lower box boundary, midline, and upper box boundary correspond to the 25th, 50th, and 75th percentiles, respectively.
Nano-Forensics a Comprehensive Review
Introduction
Nanotechnology is making a valuable contribution in every field. It is widely used because of its ability to manipulate and characterize matter at the level of single atoms and small molecules [1]. The word "nano" is derived from the ancient Greek "nanos," meaning "dwarf," and refers to a "billionth," or a factor of 10-9. To put this in perspective, 1 nm is about 3-10 atoms wide, very small compared with the sizes encountered day-to-day: 1 nm is roughly 1/100,000th the width of a human hair. Nanotechnology describes the science, engineering, and technology conducted at a scale ranging between 1 and 100 nanometers [2]. Nanomaterials show potential in electronics, diagnostics, biosensing, imaging, optical devices, and drug delivery due to their small size, large surface area, and enhanced reactivity [3][4][5][6][7]. Their multi-purpose application in almost every field has made nanotechnology a general-purpose technology [2], and this universal technology therefore has a wealth of applications in forensic science. Nano-forensics, a blend of nanotechnology and forensic science, is an entirely novel discipline within forensic science. It helps identify and examine evidence at the nanoscale; previously, such critical evidence could not be analyzed because of instrumental detection limits. With advances in technique, nano-analysis is transforming the investigation process by making it more accurate, faster, and more sensitive [8,9]. Nano-forensics has significant applications in fingerprint analysis, explosive detection, drug screening, toxic substance analysis, and DNA analysis [10,11]. This review briefly surveys how nanotechnology is used in the forensic analysis of evidence.
Role of nanoparticles in fingerprint analysis
Latent fingerprints are formed largely from residues secreted through the sweat pores. Morris showed that silver nanoparticles can be used as physical developers to visualize latent fingerprints on paper [12]. During the reaction, the silver nanoparticles bind the organic constituents of the fingerprint, developing it as a dark grey or black silver image on the porous surface [13]. Gold nanoparticles are beneficial for latent fingerprint analysis as they are inert, highly selective, and sensitive; a further advantage is that latent prints developed with gold nanoparticles can be stored for a long duration. Because of these properties, gold nanoparticles are used to improve the visibility of latent fingerprints by multi-metal deposition (MMD) and single-metal deposition [14,15].
Quantum dots and fluorescent materials have gained significant attention due to their small size and excellent fluorescence intensity. A study carried out by Dr. Roland Menzel shows that quantum dots can be used to visualize latent fingerprints [16]. Quantum dots can also be used for the development of bloody fingerprints. Bloody fingerprints are prone to smearing and contamination, which can damage the ridge detail; this is overcome by exploiting the fluorescence of quantum dots to analyze bloody prints [17]. One study shows that incorporating a minor amount of ZnO-SiO2 nanoparticles in powder enhances the visualization of latent prints down to third-level ridge detail [18]. Carbon nanopowder has been developed for the visualization of prints against multi-colored or patterned backgrounds [17].
Role of nanoparticles in explosives detection
Explosives-based terrorism has been rising over the last few years. Explosive devices such as bombs, improvised explosive devices (IEDs), and grenades cause widespread destruction [19].
Over the years, various sensing devices have been developed for the detection of trace explosives. Because of their unique electrical and optical properties, nanomaterials are widely used to develop low-cost sensors. Sensing of explosives is usually done through biologically based sensors [20]. Immunosensing techniques provide great sensitivity in detecting TNT, with a detection limit as low as 0.09 g/ml [21]. Frances S et al. [22] prepared a capillary immunosensor for specific TNT detection using an anti-TNT antibody. Carbon-based nanomaterials, including carbon nanotubes, graphene, and carbon nanoparticles, have properties such as chemical inertness, low cytotoxicity, high biocompatibility, and unique electronic behavior that favor their application as sensors. Chen et al. utilized graphene oxide, an oxidized derivative of graphene, to detect nitroaromatic explosives; they constructed a graphene-based sensor and analyzed compounds such as dinitrotoluene (DNT), dinitrobenzene (DNB), and trinitrobenzene (TNB) [23].
Nanomaterial in DNA analysis
DNA analysis has tremendous potential benefits for the civil and criminal justice systems [24], establishing the identity of an individual in forensic investigations. Nowadays, nanoparticle-based methods influence DNA analysis because of their low cost, easy automation, and convenient operation. In particular, magnetic nanoparticles are used to extract DNA because of their increased sensitivity and high DNA yield. Nongyue He et al. used Fe3O4 nanoparticles to extract nucleic acid from four different sources: bacteria, yeast, human blood, and virus. The results showed that using magnetic nanoparticles to extract nucleic acid gives a high yield and relatively high purity of nucleic acids [25].
Sensor-based DNA detection techniques also have applications in DNA analysis. Gold nanoparticles are mostly used for sensing mechanisms because of their optothermal properties. Cheong et al. [26] used Au nanorods for the extraction of DNA from cells: exploiting the optothermal property, the researchers transformed near-infrared energy into thermal energy in a microfluidic chip, resulting in lysis of the pathogen and, eventually, extraction of DNA. Apart from gold nanoparticles, a silica nanoparticle-based assay also detects DNA with a sensitivity of 10 pM [27]. Nano-PCR, a nanoparticle-assisted PCR, is gaining considerable attention because of its specificity and reaction speed. Various types of nanoparticles, such as carbon nanotubes, quantum dots, and metal nanoparticles, have been introduced into PCR technology; these nanoparticles improve the efficiency and specificity of PCR products [28][29][30].
Nanoparticles in illicit drug analysis
The demand for illicit drugs is continuously increasing. Cannabis, amphetamine-type substances, cocaine, and heroin remain the most prevalent illicit drugs, but new psychoactive agents have also been increasing in the market. In criminal investigations, drug analysis is a significant branch of modern analytical chemistry with many legal and social consequences. With advances in technology, sensing devices are used to detect drugs [31]. Sensing with nanoparticles usually takes place through colorimetric, fluorescence, and electrochemical sensors [32,33]. Gandhi et al. [34] reported the development of a dipstick assay based on an AuNP-labeled single-chain fragment variable (scFv) antibody for the detection of morphine; the dipstick is suitable for analyzing morphine in different biological fluids such as blood, urine, and saliva. Au nanoparticles show a significant surface plasmon resonance phenomenon, which is exploited in colorimetric sensors. Gao et al. used a colorimetric sensor based on an aptamer and molybdenum disulfide (MoS2)-gold nanoparticle (AuNP) conjugates to detect cocaine; this sensor was found to be rapid, cost-effective, and highly sensitive [35]. Quantum dots, along with other techniques, are also being used for fluorescence-based sensing because of their excellent quantum yield and fluorescence [36].
Conclusion
The potential of nanotechnology is making a positive contribution to forensic science in solving crime. Various types of nanoparticles are in use for the detection of various forensic samples, and nano-sensors find application in nano-forensics because of their high sensitivity. Nano-forensics has also made the investigation process more rapid. In the future, nanotechnology can serve as an advanced and preventive tool in different fields of forensic science.
Comparison of continuous femoral nerve block (CFNB/SA) and continuous femoral nerve block with mini-dose spinal morphine (CFNB/SAMO) for postoperative analgesia after total knee arthroplasty (TKA): a randomized controlled study
Background Unsatisfactory analgesia for major knee surgery with femoral nerve block (FNB) alone has been reported, and the additional benefit of a sciatic block added to continuous femoral nerve block (CFNB) is not conclusive. The aim of the present study was to assess the benefit of adding mini-dose spinal morphine (0.035 mg) to CFNB for postoperative pain control, and to compare the associated side effects, after total knee arthroplasty (TKA). Methods After written informed consent and with Institutional Ethics Committee approval, 68 American Society of Anesthesiologists (ASA) Physical Status I-III patients scheduled for elective unilateral TKA under spinal anesthesia (SA) were included in the present prospective, randomized controlled study. The patients were allocated into two groups. CFNB was placed in all patients by the inguinal paravascular approach with 20 ml of 0.25 % levobupivacaine. In Group I (the CFNB/SA group), SA was administered with 2.8 ml levobupivacaine; in Group II (the CFNB/SAMO group), SA with 2.8 ml levobupivacaine plus morphine 0.035 mg. In the Post Anesthesia Care Unit (PACU), pain and other adverse effects were recorded. Pain was assessed by visual analog scale (VAS) 0-10. Tramadol 50 mg intravenous (IV) was given if the VAS ≥ 4. In the ward, all patients were maintained on a continuous femoral infusion of 0.125 % levobupivacaine at a rate of 7 ml/h, reduced to 5 ml/h if VAS ≤ 3. Results Patient demographic data did not differ between groups. At post-operative (PO) 12-24 h, VAS scores were significantly lower in the CFNB/SAMO group. Cumulative IV tramadol requirements over PO 48 h were also significantly lower in the CFNB/SAMO group. Nausea, vomiting and numbness were significantly more frequent in the CFNB/SAMO group during the early postoperative period (PO 1-6 h). Conclusion Though in some patients CFNB was inadequate, a mini-dose of intrathecal morphine (0.035 mg) in addition to CFNB was found to be effective with minimal side effects. Trial registration Thai Clinical Trial Registry (identifier: TCTR20150609003, date of registration: 6 June 2015).
Background
A large number of patients who undergo knee surgery experience moderate to severe postoperative pain that interferes with participation in early physical therapy [1][2][3]. Severe postoperative pain can contribute to immobility-related complications, delay hospital discharge, and interfere with functional outcome [4,5]. Multiple techniques of postoperative pain control have been used after total knee arthroplasty (TKA). Previous studies comparing peripheral nerve block (PNB) with epidural analgesia (EA) for major knee surgery have demonstrated comparable analgesia and an improved side-effect profile associated with PNB [6,7].
The femoral nerve, along with contributions from the sciatic and obturator nerves at the posterior and medial aspects respectively, provides the sensory innervation of the knee. These three terminal nerves are targeted by PNB techniques for major knee surgery [6,[8][9][10]. Reports of satisfactory analgesia with femoral nerve block (FNB) alone [2,3] are countered by studies that found it inadequate [11][12][13]. Sundarathiti P et al. reported lower analgesic efficacy at postoperative (PO) 6-12 h in a continuous FNB (CFNB) group compared with epidural analgesia [7]. Though the analgesic effect of the sciatic block is undisputed, there are conflicting opinions about its general benefit in light of the additional time, cost and skill required [6,10].
In response to surgeons' concerns regarding postoperative sciatic block (e.g., difficulty in diagnosing peroneal nerve injury or an evolving sciatic nerve injury from compartment syndrome), we attempted to limit its use. The aim of the present study was to determine the effect of adding mini-dose spinal morphine, 0.035 mg (based on our pilot study), to CFNB for postoperative pain control in patients undergoing total knee arthroplasty (TKA).
Ethical consideration
This study was conducted according to the Declaration of Helsinki. Informed consent was obtained and documented from the participants before data collection. The final protocol and written informed consent form had been approved by the Ethics Committee, Faculty of Medicine Ramathibodi Hospital, Mahidol University. This study has been registered at the Thai Clinical Trial Registry (identifier: TCTR 20150609003).
Methods
After written informed consent and with Institutional Ethics Committee approval from June 2012 to June 2015, 70 American Society of Anesthesiologists (ASA) physical status I-III patients scheduled for elective unilateral TKA under spinal anesthesia (SA) were included in the prospective, randomized controlled blind study.
Exclusion criteria included age < 40 years or > 80 years, body mass index (BMI) > 45, renal insufficiency (creatinine level > 1.5 mg/dl), pre-existing neurological deficit, inability to comprehend pain scales, chronic opioid use, and contraindications to either neuraxial block or FNB. The patients were allocated into two parallel groups at a ratio of 1:1 using a random number table. All patients were premedicated with oral lorazepam 0.5 mg 1 h before surgery and were sedated with midazolam 1 mg and fentanyl 25 mcg intravenously before conduct of anesthesia. CFNB was placed in all patients by the inguinal paravascular approach with a 19-G, 50-mm needle (PAJUNK®, PlexoLong NanoLine acc. Meier, Germany). The femoral nerve was localized by a quadriceps twitch at < 0.5 mA, using stimulation of 0.1 ms at 2 Hz. After negative aspiration, 20 ml of 0.25 % levobupivacaine was administered and the catheter was inserted 3-4 cm past the cannula. SA was performed in the lateral position at the L3-4 interspace with a 27-G needle. Group I (CFNB/SA) received SA with 2.8 ml levobupivacaine; Group II (CFNB/SAMO) received SA with 2.8 ml levobupivacaine plus morphine 0.035 mg. Urinary catheters were placed in all patients and continued until 24 h post-operatively.
Standard monitoring was used, including non-invasive blood pressure, blood oxygen saturation (SpO2), and electrocardiogram. The surgical time was noted as the time from incision to the end of surgery. On arrival in the Post-anesthesia Care Unit (PACU), pain and other adverse effects such as nausea, vomiting, pruritus, dizziness, hypotension (30 % reduction from baseline), numbness, and motor blockade were recorded every 15 min. Motor blockade was estimated using a modified Bromage scale (0 = no blockade: extended limb lift off the bed; 1 = flexion/extension at knee and ankle joint; 2 = no flexion/extension at knee or ankle joint; 3 = complete blockade). Pain was assessed by visual analog scale (VAS) from 0-10, where 0 = no pain, 1-3 = mild pain, 4-7 = moderate pain, and 8-10 = severe pain. Tramadol 50 mg intravenous (IV) was given if the VAS ≥ 4.
For forty-eight hours post-operatively in the ward, patients in both groups were maintained on a continuous infusion of 0.125 % levobupivacaine at a rate of 7 ml/h, reduced to 5 ml/h if VAS ≤ 3. The femoral catheters were removed at 48 h post-operatively. During the hospital stay, all patients received oral Ultracet one tablet twice a day, oral acetaminophen 500 mg four times a day, and oral lorazepam 0.5 mg before bedtime. Patients having breakthrough pain (defined as VAS ≥ 4) were treated on demand with tramadol 50 mg IV every 4 h until discharge. Blinded residents made visits at 6, 12, 24, 36, and 48 h post-operatively to record adverse effects, pain scores, and patient satisfaction (1 = poor, 2 = fair, 3 = good, 4 = excellent).
Based on data from Sundarathiti et al. [7], at 12 h post-operative, 80 % of patients with CFNB after TKA experienced moderate to severe pain, whereas approximately 38 % of patients with continuous epidural infusion (CEI) did. About 22 patients in each group would suffice to demonstrate a significant difference with a probability of type I error of 0.05 and a power of 80 %. The expected rate of patient loss was considered to be about 20 %, so each group was set to have at least 26 patients. Detailed information on enrollment of patients into the study is depicted by the CONSORT flow diagram in Fig. 1.
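For readers who wish to retrace this calculation, the following sketch of the standard two-proportion sample-size formula (normal approximation; not necessarily the authors' exact software) reproduces the quoted figure to within rounding:

```python
# A minimal sketch of the two-proportion sample-size calculation: 80% vs. 38%
# moderate-to-severe pain, two-sided alpha = 0.05, power = 80%. Small
# differences from the ~22/group quoted in the text are expected, since
# variants of this formula (e.g. continuity correction) give slightly
# different answers.
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> float:
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(round(n_per_group(0.80, 0.38)))  # ~20-22 per group depending on variant
# Inflating for ~20% expected dropout gives the 26 patients per group used.
```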
Data were recorded and analyzed using the SPSS 15 statistical package (SPSS Inc., Chicago, IL) for Windows. Results are reported as mean (SD) for continuous variables and as number and percentage for nominal variables. All continuous variables were first checked for normality of distribution by the Shapiro-Wilk test. The independent-samples t-test or Mann-Whitney U test was used to compare data between the two study groups, depending on the data distribution in each measurement. Nominal variables were analyzed by chi-square or Fisher's exact test. A p-value < 0.05 was considered statistically significant.
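A rough translation of this workflow into open-source tooling might look as follows; the data below are synthetic placeholders, not study data, and scipy stands in for SPSS:

```python
# Illustrative sketch of the analysis pipeline described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vas_sa = rng.integers(2, 9, size=33)    # synthetic VAS scores, CFNB/SA group
vas_samo = rng.integers(0, 7, size=35)  # synthetic VAS scores, CFNB/SAMO group

# 1) Check normality (Shapiro-Wilk), then pick the comparison test.
_, p_norm_1 = stats.shapiro(vas_sa)
_, p_norm_2 = stats.shapiro(vas_samo)
if p_norm_1 > 0.05 and p_norm_2 > 0.05:
    stat, p = stats.ttest_ind(vas_sa, vas_samo)      # independent-samples t
else:
    stat, p = stats.mannwhitneyu(vas_sa, vas_samo,
                                 alternative="two-sided")  # Mann-Whitney U

# 2) Nominal variables (e.g. PONV counts) via chi-square on a 2x2 table.
table = np.array([[5, 28], [14, 21]])                # synthetic counts
chi2, p_nom, dof, _ = stats.chi2_contingency(table)
print(p, p_nom)  # p < 0.05 considered statistically significant
```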
Results
A total of 68 patients participated: 33 with spinal anesthesia and a femoral catheter, and 35 with the same procedure plus intrathecal morphine. Patient demographic data (Table 1) did not differ between groups. All patients had satisfactory anesthesia and surgery without intraoperative complications. Two patients in Group I (CFNB/SA) dropped out of the study due to postoperative cognitive changes (disorientation to time and place). Residual motor blockade was similar in both groups (Table 2). The VAS scores and the number of patients suffering from moderate to severe pain are presented in Tables 3 and 4. Pain was significantly less pronounced in patients with spinal morphine (CFNB/SAMO) from 6 h after surgery throughout the entire investigation period; these differences were significant at 6, 12, 24, and 48 h respectively. The cumulative IV tramadol requirement was significantly lower in the CFNB/SAMO group (median = 125 mg, range 50-400 mg) compared with the CFNB/SA group (median = 200 mg, range 50-500 mg), as presented in Fig. 2 (p-value = 0.01). As presented in Fig. 3, at 6 h after surgery a significantly higher proportion of CFNB/SAMO patients were affected by PONV (approx. 40 vs. 15 %; p < 0.05). Patients in both groups had similar overall incidences of other side effects such as pruritus, dizziness and hypotension (p-value > 0.99 for all), except for the incidence of numbness during PO 1-6 h, which was significantly greater in the CFNB/SAMO group (p-values 0.03 and 0.04, respectively). Patient satisfaction rated as good or excellent was not different between the two groups.
Discussion
In this study, we used a very small dose of intrathecal morphine, 0.035 mg, in addition to femoral block (CFNB/SAMO) to complete the analgesia without additional effort or care on the part of the anesthesiologists. The pain scores (VAS) at 6, 12, 24 and 48 h after surgery and the cumulative IV tramadol requirement (Fig. 2) were significantly lower in the CFNB/SAMO group compared with the CFNB/SA group. The VAS scores at PO 1-6 h were not different, which may be partly due to residual analgesia after spinal anesthesia. Whereas motor function was not affected by either method, the rate of PONV was significantly higher in patients receiving morphine at the 6th hour after surgery (approx. 40 vs. 15 %) but not thereafter (Fig. 3).
The effect of CFNB in this study was not satisfactory, because the VAS scores in both groups were quite high (Table 3). Categorizing VAS scores ≥ 4 as 'failure', we observed failure rates between 6 and 48 h postoperatively of 39.4-69.7 % and 31.4-48.6 % in patients with CFNB and patients with CFNB plus spinal morphine, respectively (Table 4). Pöpping et al. [9], in their large observational study including 1374 patients with femoral/sciatic block, had a failure rate (no analgesic effect) of 3.96 % and a rate of malposition after correct placement of 15.2 %. Among other factors, we have to contemplate the possibility of secondary block failure, where the catheter has not been positioned appropriately. Although the block was achieved with the stimulation technique, the trigger current applied in this study (< 0.5 mA) was probably too high. In addition, the initial dose of local anesthetic might have been too low, and unnoticed catheter migration could also lead to failure. Inadequate efficacy of FNB alone for knee surgery is due to the fact that the posterior part of the knee is innervated by the sciatic nerve. Hence a sciatic block has been added to FNB by many study groups [8,9,[12][13][14], though it is unclear whether blocking both the femoral and sciatic nerve results in a greater risk of direct needle trauma to the peripheral nerve. Performance of sciatic nerve block is time-consuming, requiring additional cost and skill. However, our data clearly indicate that femoral block alone is not sufficient in major knee surgery, as it does not affect the popliteal area, in accordance with Sundarathiti P et al. [7], who reported inferior analgesic efficacy of FNB compared with epidural analgesia.
The purpose of our study was to find a simple 'multimodal' form of postoperative pain control, using a technique that can be added to a femoral catheter in patients undergoing major knee surgery under spinal anesthesia. Multimodal analgesia combines alternative strategies with the goal of avoiding routine parenteral narcotics and minimizing the side effects of each method [15][16][17][18]. It takes advantage of the synergistic effects of various analgesics, permitting the use of smaller doses.
Intrathecal opioids added to local anesthetics during spinal anesthesia have been applied in a variety of surgical settings since 1979 [19], providing prolonged postoperative analgesia without the need for catheters or expensive pumps. However, the use of intrathecal morphine may be associated with distressing side effects, such as itching, urinary retention, nausea and vomiting (PONV), and respiratory depression [20]. In an attempt to limit opioid side effects, the use of low-dose spinal opioids has been advocated [21]. Even mini-dose morphine (< 0.1 mg) has frequently been reported to be effective for managing acute postoperative pain after a variety of surgeries, without any evidence of respiratory depression [22,23].
Achieving high-quality pain relief after TKA is possible using regional anesthesia and multimodal pain management [24]. In patients already under spinal anesthesia, intrathecal opioid analgesia (ITOA) has specific advantages regarding ease of administration, a safe and rapid onset of action, and low cost. A single dose administered at the time of surgery can provide good neuraxial analgesia during the first postoperative day. Because spinal morphine is frequently accompanied by a high rate of PONV and itching, we tried a very low dose of 0.035 mg and found a temporary increase in numbness, which was clinically irrelevant, but a significant increase in PONV 6 h after surgery. There were no further or late side effects. We can state that mini-dose intrathecal morphine (ITMO) is a safe, effective, and relatively inexpensive modality for the management of postoperative pain. We regard CFNB plus mini-dose ITMO as a good combination technique to achieve the goal of multimodal pain management for TKA.
Conclusion
Postoperative analgesia with continuous femoral block (CFNB) alone for total knee arthroplasty (TKA) is inadequate. A mini-dose of intrathecal morphine in addition to CFNB was found to be simple, safe, inexpensive and more effective than CFNB alone for managing postoperative pain after TKA, with few side effects. However, the analgesic effect could still be improved. Future studies should determine the most appropriate dose of spinal morphine when added to properly working femoral analgesia.
Microscopic modeling of contact formation between confined surfaces in solution
We derive a Kinetic Monte Carlo model for studying how contacts form between confined surfaces in an ideal solution. The model incorporates repulsive and attractive surface-surface forces between a periodic (2+1)-dimensional solid-on-solid (SOS) crystal surface and a confining flat surface. The repulsive interaction is derived from the theory of electric double layers, and the attractive interactions are Van der Waals interactions between particles on the SOS surface and the confining surface. The confinement is induced by a constant external pressure normal to the surfaces which is in mechanical equilibrium with the surface-surface forces. The system is in thermal equilibrium, and particles can deposit onto and dissolve from the SOS surface. The size of stable contacts formed between the surfaces in chemical equilibrium shows a non-trivial dependency on the external pressure which is phenomenologically similar to the dependency of oscillatory hydration forces on the surface-surface separation. As contacts form we find classical phenomena such as Ostwald ripening, coalescence, and primary and secondary nucleation stages. We find contacts shaped as islands, bands or pits, depending solely on the contact size relative to the system size. We also find the model to behave well out of chemical equilibrium. The model is relevant for understanding processes where the force of crystallization and pressure solution are key mechanisms.
I. INTRODUCTION
Solids brought into contact are ubiquitous, and solid contacts are central to tribology [1,2] and to the nature of granular materials in general [3]. Physicists often idealize the contact dynamics and study inert surfaces that deform only mechanically, since state-of-the-art surface measurement techniques fail to work in the confined environments where the chemical reactions at the interfaces are important [2], and the vast number of simultaneously occurring chemo-mechanical phenomena that depend on the contact topology and stresses makes modeling difficult [1]. Hence the dynamics of reactive solid contacts has not received the attention it deserves.
Reactive contact dynamics have important applications in processes such as sintering [4,5], where mineral grains stick together after e.g. compaction without liquefaction occurring at the grain boundaries; fracture healing/crack sealing [6,7], where material in voids and cracks is rearranged over time such that the aperture decreases without the need for a supersaturated solution; the weathering of rocks and concrete [8][9][10][11][12], which is of fundamental interest to building conservation; and metamorphism, diagenesis and weathering in the Earth's crust [13,14].
Biological applications span from the development of the fracture callus in the reparative stage of bone fracture healing [15] to the initial stages of cell membrane fusion [16].
Stress-induced instabilities in reactive solid-solid boundaries are also responsible for the formation and evolution of stylolites [17,18], and recent experimental [19,20] and theoretical [21][22][23][24] studies of frictional interfaces show that the behavior of the microjunctions (i.e. contact points) between the surfaces is crucial when determining the frictional dynamics.
Popular models of how the grain boundary behaves during pressure solution [25] are growth and dissolution in the presence of a confined thin yet stable liquid film [26], and growth causing stabilizing island-shaped contacts between the surfaces [27]. Recent experiments have shown that growth rims formed during experiments on the force of crystallization [28], and grains that have undergone pressure solution creep [29], have a structured roughness. This disagrees with the liquid-film model, which predicts smooth interfaces [26,28]. We therefore hypothesize that attractive surface-surface interactions are important mechanisms in confined crystallization.
We have previously reported on a model without attractive surface-surface forces, which reproduces known thermodynamics for confined surfaces in solution, as well as the pressure solution and force of crystallization phenomena [30]. However, this model also predicts a smooth interface. In this work we therefore address the question of whether extending the earlier model with a Van der Waals-like interaction between the two surfaces is sufficient to produce a structured roughness.
Questions we want to address are under which conditions we can expect stable contacts to form between the surfaces, how these equilibrate and how they appear once equilibrated, and whether these contacts remain stable in systems where the confining surface is displaced due to the force of crystallization or the crystal surface is dissolved due to pressure solution.
The paper is structured as follows: In Sec. II we introduce all aspects of the model, such as the different surfaces and the solution, the allowed transitions and their rates, and the interactions used and how their resulting forces are used to maintain mechanical equilibrium. In Sec. III we present results on how the system equilibrates, how the equilibrium contacts behave, the contact fluctuations, and finally the out-of-equilibrium properties. The final discussion and conclusions in Sec. IV conclude the paper.
II. MODEL
The model consists of a periodic crystal surface placed in an ideal solution confined vertically by a flat inert surface of the same material at a height h_l(t) ∈ ℝ. The latter will from here on be referred to as the confining surface. The crystal surface is modeled using a (2+1)-dimensional periodic solid-on-solid (SOS) surface.
The SOS condition does not allow for overhangs, hence the surface is described by an array of heights h_i ∈ ℤ (in units of bond lengths l_0), where i ∈ [0, L×W] with L and W being the spatial extents of the system. The top-most particles of the crystal surface, from here on referred to as surface particles, are the only ones that can take part in transitions. The reactive surface area of the crystal is thus A = L × W. The confining surface is kept inert to reduce the complexity of the model, since SOS models are not applicable to systems where two opposing surfaces fluctuate in and out of contact with each other (formed contacts could never break).
The liquid surrounding the crystal has a uniform concentration of solute particles, which limits the model to reaction-limited systems. Adding a more realistic description of the liquid is possible, but we will here keep the description as simple as possible. The concentration level may vary in time.
Allowed transitions in the system are dissolution of crystal surface particles into solution and deposition of solute particles to the crystal surface.
Particles interact with other particles through nearest-neighbor interactions with bond energy E_b. The confining surface is subject to an external force of magnitude F_0 in the direction normal to the confining surface. A repulsive force F_λ is generated between the surfaces which increases as the separation decreases. Thus far the model is identical to the one used in our earlier work on the effect of normal stress on confined crystals [30]. Here we include additional short-range attractive forces f_b(i) between the surface particles and the confining surface. We enforce mechanical equilibrium of the confining surface; that is, the repulsive force always balances the external and attractive forces. This involves repositioning the confining surface height h_l(t). The acting forces are illustrated together with the surfaces in Fig. 1.
For an initial volume V(0) and concentration c(0), the effective number of solute particles is N_s(0) = c(0)V(0). Since the system is periodic, particles have no means of escaping or entering the system, which means that the total number of particles is conserved. We can therefore keep track of N_s(t) by counting the number of dissolved and deposited particles and adding this to the initial value. Hence the concentration at time t is c(t) = N_s(t)/V(t).
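A minimal bookkeeping sketch of this closed-system rule (class and method names are illustrative) could look as follows:

```python
# A minimal sketch of the closed-system concentration bookkeeping described
# above: dissolution and deposition events update N_s(t), and the instantaneous
# concentration is c(t) = N_s(t)/V(t). Names are illustrative.
class Solution:
    def __init__(self, c0: float, v0: float):
        self.n_s = c0 * v0   # effective number of solute particles, N_s(0)
        self.volume = v0     # V(t); may be updated as the geometry changes

    def on_dissolution(self) -> None:
        self.n_s += 1        # a crystal surface particle enters the solution

    def on_deposition(self) -> None:
        self.n_s -= 1        # a solute particle joins the crystal surface

    @property
    def concentration(self) -> float:
        return self.n_s / self.volume   # c(t) = N_s(t)/V(t)

sol = Solution(c0=0.05, v0=30 * 30 * 10.0)   # illustrative initial values
sol.on_dissolution()
print(sol.concentration)
```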
A. State Transitions and Rates
A deposition can occur at any site given that the confining surface does not block it. A dissolution can occur at any site given that there is an available neighboring site. These restrictions on the deposition and dissolution reactions ensure that the surfaces do not penetrate into one another.
If the neighboring dissolution site is in the solution, the surface particle dissolves and is removed from the surface. If, on the other hand, the neighboring site is at the crystal surface, the particle slides one lattice length horizontally. This can be interpreted as an immediate dissolution-deposition chain, and is important to include in order to avoid surface particles becoming static in regions where the surfaces are in contact. Horizontal sliding is the only surface-surface transition we allow, since including transitions up or down kink sites in an SOS model is known to cause an anisotropy between vertical and horizontal diffusion [31,32].
We use that the rate of a particle i dissolving is [30]

r_−(i) = ν exp(−∆G(i)/kT),   (1)

where ν is a frequency factor, ∆G(i) is the free energy cost of removing particle i from the system, k is the Boltzmann constant and T is the temperature. The deposition rate is proportional to the current concentration c as follows:

r_+ = ν c,   (2)

where we for simplicity have used the same frequency factor, such that ν can be used to set the time scale for the simulations. The system is in chemical equilibrium when c has an equilibrium value c_eq at which no net growth occurs. Using a different frequency ν_+ would cause the system to equilibrate at a different concentration c̃_eq = c_eq ν/ν_+.
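Assuming the rate forms reconstructed above, a bare-bones rejection-free KMC step could be sketched as follows (names are illustrative, and in a real simulation the rate list would be rebuilt from the current surface configuration each cycle):

```python
# A minimal sketch of rate evaluation and event selection for this model.
import math
import random

NU = 1.0  # frequency factor; sets the time scale of the simulation

def dissolution_rate(delta_g: float, kT: float) -> float:
    # r_-(i) = nu * exp(-dG(i)/kT); slower for strongly bound particles
    return NU * math.exp(-delta_g / kT)

def deposition_rate(c: float) -> float:
    # r_+ = nu * c; proportional to the instantaneous concentration
    return NU * c

def kmc_step(rates: list[float]) -> tuple[int, float]:
    """Pick one event with probability rate_i / sum(rates); advance time."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(random.random()) / total   # exponentially distributed wait
    return i, dt
```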
B. Free Energies
We model ∆G(i) from Eq. (1) as three terms representing three interactions as follows:

∆G(i) = n_i E_b + ∆G_WV(i) + ∆G_λ(i),   (3)

where the first term is the particle-particle interaction, represented by the breaking of n_i nearest-neighbor bonds with energy E_b, the second term represents the particle-surface attraction, and the last term represents the surface-surface repulsion.

Fig. 1. The arrows indicate forces acting on the confining surface: a constant external force F_0, a surface-surface repulsive force F_λ, and attractive particle-particle Van der Waals-like forces f_b. The confining surface height h_l(t) is set such that these forces are in equilibrium, or the surfaces are resting on one another. Between the surfaces there is a uniform ideal solution at a given concentration level.
We set the following criteria for the free energies, which determine the shape and interplay of the different interactions: 1. When in perfect contact (no liquid present), the two surfaces produce an infinite bulk structure consisting only of nearest-neighbor bonds.
2. The surfaces are made up of the same material.
3. The attractive particle-particle interaction has a Van der Waals-like decay of 1/d^6 [16], where d ≥ 1 is the center-center separation (in lattice units).
The free energy due to the external force F_0 is F_0 h_l, where h_l is the height of the confining surface. This is analogous to a gravitational potential. This free energy does not directly depend on the crystal surface structure; hence there is no change in free energy associated with F_0 present in the rate calculations.
Surface-surface repulsion
We use the same expression for the repulsive interaction as in Ref. [30], with the exception that it goes to 0 for no separation. Here d_i ≥ 1 is the center-center distance between the outer particles of the confining surface and surface particle i in lattice units. The free parameter σ_0 represents the ratio between the strength of the surface-surface repulsion and the particle-particle attraction E_b. The decay strength λ_D is analogous to the Debye length. We will use λ_D = 5 throughout this work and vary the value of σ_0 to study the effect of an increasing/decreasing repulsion.
The reason why we set the interaction to 0 for d_i = 1 is that we assume the repulsion originates from the presence of an electrolyte, and if there is no room for liquid, the only interaction is single bonds with energy E_b. Moreover, keeping the repulsion active for d_i = 1 would violate the first criterion listed earlier concerning the convergence to a bulk material. The decay to 0 repulsion is probably smooth, and the abrupt cut used here is a simplification. This simplification does not cause instabilities since, if d_i = 1 at any point, we set the total force to 0 (balanced by a normal force). Hence forces at d_i = 1 do not need to be evaluated explicitly.
The total free energy due to this interaction follows by summing the site-wise repulsion over all surface sites; the resulting expression contains the constant ζ ≡ 1 − exp(−1/λ_D).
Particle-particle attraction
The surface particles are confined to a lattice and thus always have a bond-length separation; consequently they share an energy E_b if they are nearest neighbors, and 0 otherwise. The position of the confining surface, however, is not restricted to integer bond lengths, hence intermediate separations d_i ∈ (1, 2), that is, separations between the nearest and next-nearest neighbor, have some value G_WV(i) ∈ (−E_b, 0) which we assume decays as 1/d_i^6 (Van der Waals attraction).

Fig. 2. Illustrations of the three cases in Eq. (6) concerning the free energy between a particle on the crystal surface and a particle in the confining surface separated by a distance d_i in lattice units. The main idea is that this interaction behaves like the nearest-neighbor interaction for integer separations. This is illustrated by d_i = 2 (left) and d_i = 1 (right) having no bond and a nearest-neighbor bond, respectively. The interaction at intermediate separations (middle), indicated by a stippled bond, is not described by the nearest-neighbor model. We model this interaction as a 1/d_i^6 Van der Waals-like decay, with added corrections to ensure a continuous transition to 0 at d_i = 2 for both the function and its derivative. The interaction is given by Eq. (7).
Summarized, we may write

G̃_WV(i) = −E_b for d_i = 1;  G_WV(d_i) for 1 < d_i < 2;  0 for d_i ≥ 2.   (6)

This attraction is illustrated for all three cases in Fig. 2. We cannot choose G_WV(i) = −E_b/d_i^6 directly, since this would render the free energy discontinuous at d_i = 2. We must therefore correct the potential with a shift (as is often done with the Lennard-Jones interaction [33]). A constant shift, however, would still leave the derivative of the free energy discontinuous at d_i = 2, hence we introduce a shift in the derivative as well (a linear term in the original expression). The expression now reads

G_WV(d_i) = −(16 E_b/15)(1/d_i^6 + 3 d_i/64 − 7/64),   (7)

which when inserting d_i = 1 and d_i = 2 produces the values −E_b and 0, such that Eq. (6) is continuous. In Fig. 3 we have compared this expression to the 1/d_i^6 function it was designed to resemble, and it is evident that this is in fact the case.

Fig. 3. Comparison between the attractive interaction used in this work, G̃_WV(d_i), and the Van der Waals-like decaying function −1/d_i^6 it is designed to resemble, where d_i is the surface-surface separation at site i in lattice units. It is evident that there is indeed a strong resemblance. The reason for the difference is that the attractive interaction, as well as its derivative, is required to be 0 at a separation of d_i = 2 (stippled vertical line). This requirement is a consequence of our assumption that this attractive interaction should act identically to the nearest-neighbor interaction used on the crystal surface for integer separations. This is also the reason why for a separation of d_i = 1 the free energy is equal in magnitude to the bond energy E_b.

The derivative of Eq. (7) with respect to d_i is

∂G_WV/∂d_i = (16 E_b/15)(6/d_i^7 − 3/64),   (8)

which equals 0 when d_i = 2, which means that the derivative of Eq. (6) is continuous as required.
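As a sanity check on the reconstructed Eqs. (7)-(8), the following sketch evaluates the potential and its derivative and verifies the required boundary and continuity conditions numerically:

```python
# Sketch of the shifted Van der Waals attraction of Eqs. (7)-(8), with a
# numerical check of the boundary values and the continuity at d = 2.
E_B = 1.0  # bond energy (sets the energy scale)

def g_wv(d: float) -> float:
    """Attractive free energy: -E_b at contact (d = 1), 0 for d >= 2."""
    if d >= 2.0:
        return 0.0
    return -(16 * E_B / 15) * (d**-6 + 3 * d / 64 - 7 / 64)

def dg_wv(d: float) -> float:
    """Derivative of g_wv with respect to d (zero at d = 2)."""
    if d >= 2.0:
        return 0.0
    return (16 * E_B / 15) * (6 * d**-7 - 3 / 64)

assert abs(g_wv(1.0) + E_B) < 1e-12   # g_wv(1) = -E_b
assert abs(g_wv(2.0 - 1e-9)) < 1e-6   # continuous at d = 2
assert abs(dg_wv(2.0 - 1e-9)) < 1e-6  # smooth at d = 2
```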
When we remove a surface particle i with a separation d_i ∈ [1, 2), that is, a particle with a non-zero attractive interaction with the confining surface, we are guaranteed that the newly exposed surface particle has a separation in [2, 3) and consequently no attraction. The change in free energy used in Eq. (3) is therefore

∆G_WV(i) = −G̃_WV(d_i).   (9)

The forces on the confining surface are balanced at all times. This gives the equation

F_λ + Σ_j f_b(j) − F_0 = 0,   (10)

where F_λ ≥ 0 is the generated repulsive force, f_b(j) ≤ 0 is the attractive force at surface site j, and F_0 > 0 is the magnitude of the external force. These are illustrated in Fig. 1.
If the two surfaces are resting on one another, i.e., d_i = 1 for any i, the net force is always 0 due to a normal force associated with the contact. This is a necessary criterion since we do not model the repulsive forces for d_i < 1, which could have been included by adding a 1/d_i^12 term, i.e. using the Lennard-Jones 12-6 potential [33] instead of a pure Van der Waals attraction.

Fig. 4. A visualization of the separation (d) dependency of the different free energy sources in the model which affect the confining surface (top panel), and the resulting total force on the confining surface (bottom panel), for a perfectly flat crystal surface using a repulsion strength σ_0 = 1 relative to the bond energy E_b and an external force of magnitude F_0/(E_b A) = 0.5. The potential from the constant external force becomes F_0 d, similar to a gravitational potential; the attractive interaction G̃_WV is shown in more detail in Fig. 3, and the free energy due to repulsion is the exponentially decaying form described above. We observe multiple equilibria: one where the surfaces are resting on each other (d = 1), one peak in the free energy at a short separation where a strong attraction is in equilibrium with a strong repulsion, and one valley where a weaker (or constant) attraction is in equilibrium with a weaker repulsion. We do not include transitions to the unstable peak equilibrium, since we allow only transitions between stable states. The challenge is thus obtaining the second stable equilibrium, here located at h_λ ∼ 4.96, indicated by the vertical stippled line. The force peak will appear smoother for rough surfaces.
The height at which the confining surface is resting on the crystal surface, h_c, is related to the maximum surface height as h_c = max(h) + 1, where h denotes the array of all surface heights h_i.

Fig. 5. Total force on the confining surface for different parameter values, where F_0/(E_b A) is the unitless pressure generated by the external force of magnitude F_0 and σ_0 is the magnitude of the repulsive interaction relative to the binding energy E_b (attractive interaction). We observe that as we make the repulsion weaker (σ_0 smaller), the equilibria disappear, since the attractive interaction overcomes the repulsive interaction before it goes out of range at d = 2. In these cases the only valid equilibrium is when the two surfaces are resting on each other, i.e. the surfaces snap into contact due to an overwhelming attraction. A flat surface under these conditions will always be in contact; however, an arbitrary rough surface can produce many different force profiles under the same conditions. Nevertheless, these examples give an important insight into how we can expect the force profile to look, and what it means for the different equilibria.
The force is calculated as F = −∂G/∂h_l. The calculation of the repulsive force is straightforward; we have set the force at contact equal to zero, which is necessary since the potential is discontinuous due to our abrupt cutting of the interaction potential.
Writing the separation as d_i = h_l − h_i, it is clear that ∂d_i/∂h_l = 1, so differentiating with respect to d_i or h_l yields the same result. Using this, the attractive forces become

f_b(i) = −∂G̃_WV(d_i)/∂d_i,

where the derivative is given in Eq. (8). When d_i = 1 for any i, i.e. the surfaces are resting on one another, we assume that the total force is zero. Hence we do not have to be concerned about instabilities at d_i = 1. Figure 4 shows the total free energy together with its individual contributions (top panel) and the resulting total force (bottom panel) for a flat surface. From the bottom panel we see two candidates for out-of-contact equilibria (F_tot(d) = 0); however, looking at the top panel we see that the one closest to the crystal surface is unstable. In KMC we allow only transitions to stable states, hence we only consider the far equilibrium point h_λ and the contact point at d = 1.
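In the absence of a closed-form solution inside the attraction range, h_λ can be found numerically. The sketch below uses bisection on the total force for a flat surface; the exponential form and parameter values of the repulsive force are illustrative placeholders (the exact expression follows Ref. [30]), and dg_wv is the derivative from Eq. (8):

```python
# A minimal sketch of locating the stable out-of-contact equilibrium h_lambda
# by bisection on the total force for a flat surface.
import math

E_B, SIGMA0, LAMBDA_D, F0 = 1.0, 5.0, 5.0, 0.5  # illustrative values

def dg_wv(d: float) -> float:
    # derivative of the attraction, Eq. (8); zero outside the cutoff
    return 0.0 if d >= 2.0 else (16 * E_B / 15) * (6 * d**-7 - 3 / 64)

def total_force(d: float) -> float:
    # placeholder exponential repulsion + attraction - external force
    f_rep = (SIGMA0 * E_B / LAMBDA_D) * math.exp(-(d - 1) / LAMBDA_D)
    return f_rep - dg_wv(d) - F0

def find_h_lambda(lo: float, hi: float, tol: float = 1e-10) -> float:
    """Bisection for total_force = 0, assuming a sign change on [lo, hi]."""
    assert total_force(lo) > 0 > total_force(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_force(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(find_h_lambda(2.0, 20.0))  # ~4.47 with these placeholder values
```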
Maintaining mechanical equilibrium thus boils down to calculating h_λ and deciding whether to choose it or the contact height h_c. We split this into two cases: when h_l = h_c, i.e. when the surfaces are resting on one another, and when they are not.
When the surfaces are not resting on one another, the confining surface is moved to the closest equilibrium height (h_c or h_λ) in the direction of the current total force. Given that the position of the confining surface is h_l, and that the maximum of the force (the peak in Fig. 4) is located at h_m, this rule translates into the following conditions: (1) if h_l > h_m, the confining surface is moved to h_λ; (2) if h_l ≤ h_m, then (a) if the total force is negative even at its maximum, the surface is moved into contact at h_c, and (b) otherwise it is moved to the nearest equilibrium in the direction of the local total force. Condition 2(a) represents the case where the repulsive interaction is too weak to withstand the applied force. In Fig. 5 the total force for commonly used values in this paper is shown, and we see that condition 2(a) occurs for high external forces F_0 and/or a weak repulsive interaction strength σ_0.
If h_λ > h_c + 2, we are outside the cutoff of the attractive interactions, and h_λ has an analytical solution [30]. In practice, we calculate this value, and if it is larger than h_c + 1, we know it is a valid solution.
If the surfaces are resting on one another, i.e. h_l = h_c, the net force is 0 by assumption and is no longer responsible for the dynamics of the confining surface. If the contacts are unstable, dissolution is the primary mechanism for separating the surfaces; if the contacts are stable and the system is in chemical equilibrium, the attraction is so strong that condition 2(a) applies almost exclusively. However, if the solution is supersaturated, the repulsive energy in the system will steadily increase as particles deposit, and since the repulsion has a longer range than the attractive interaction, the two equilibria may coexist even when h_l = h_c. In this case we need a criterion that determines when contact bonds break and the confining surface moves from h_c to h_λ.
This should be expressed in terms of a rate which depends on the energy barrier between the two states; however, we are unable to do this since we here use rates of the form of Eq. (1), which assume single-particle transitions. In other words, if we want an implicit condition for separating the surfaces, we should use the Eyring rate equation [34] directly.
Since this condition does not impact the equilibrium simulations, and the most important part of the separation mechanism is that condition 2(a) stops applying due to a buildup of repulsive energy, the actual condition is not as important as it might seem. We therefore choose to model it based on simple thermodynamical considerations combined with an attempt frequency.
The probability that the surfaces separate in a given attempt is modeled as a Boltzmann weight on the change in free energy per area, ∆G_tot/A, between the contact and the separated states:

P_b = W_b/Z,  with  W_b = exp(−∆G_tot/(A kT)),

where Z = 1 + W_b since the weight associated with staying is W_stay = 1. When we calculate ∆G_tot, we do not include the repulsive free energy at h_λ for the points in contact at h_c. This represents a sort of retardation time needed to form the electric double layers between the newly formed surface areas.
In this paper we fix the attempt frequency to once every A cycles, which we implement by scaling P_b by 1/A using a frequency of 1. This choice ensures that the probability that the surfaces separate in a given time interval will not have an unphysical dependency on the system size, since the fact that we move only one particle per cycle in KMC makes the time step ∆τ_KMC ∝ 1/A [35].
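Putting the reconstructed weight and the 1/A attempt scaling together gives a one-line separation probability; a sketch:

```python
# Sketch of the surface-separation rule: a Boltzmann weight on the free-energy
# change per area between contact and separation, scaled by 1/A so the attempt
# frequency is effectively once per KMC cycle of A moves.
import math

def separation_probability(dG_tot: float, area: float, kT: float) -> float:
    w_b = math.exp(-dG_tot / (area * kT))   # weight of the separated state
    return (w_b / (1.0 + w_b)) / area       # W_stay = 1; 1/A attempt scaling
```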
III. RESULTS
We will focus on understanding the equilibrium properties of systems in which stable contacts form between the surfaces. We want to understand how these contacts form from an initial separated state, which shapes they possess and why, and how they fluctuate in time.
In order to achieve this we first need a proper definition of a contact. Macroscopically it suffices to define a contact as a point where the surfaces rest on one another. Microscopically, however, we need a less binary definition, since the attractive interactions promoting surface-surface contacts do not necessarily require the surfaces to rest on one another. We will therefore define a contact as a point which is within range of the attractive interaction; that is, if h_l − h_i < 2, then site i is in contact. The contact density is then

ρ_WV = (1/A) Σ_i κ(i),

where A is the area of the (flat) confining surface and κ(i) is 1 if h_l − h_i < 2 and 0 otherwise. We have ρ_WV ∈ [0, 1], where ρ_WV = 0 represents completely separated surfaces and ρ_WV = 1 represents the case where all surface sites are within range of the attraction. For any realistic choice of repulsion strength σ_0, the latter scenario leads to the surfaces joining together perfectly. We initialize the system in an a priori known equilibrium state of the system without an attractive interaction [30], which is obtained by setting an initial concentration ln c(0) = (F_0/(E_b A) − 3) E_b/kT and the confining surface to the analytical equilibrium height of Ref. [30]. Note that for high pressures this initial state may have initial contacts. We then use the method described in Section II A to keep a constant effective number of particles, such that the closed system will equilibrate automatically. The initial crystal surface is random, with an average height equal to 0.
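The contact-density definition translates directly into a one-line numpy reduction; a sketch with illustrative names:

```python
# Direct transcription of the contact-density definition above: the fraction
# of surface sites within range of the attraction (h_l - h_i < 2).
import numpy as np

def contact_density(h: np.ndarray, h_l: float) -> float:
    """rho_WV = (1/A) * sum_i kappa(i), with kappa(i) = 1 if h_l - h_i < 2."""
    return float(np.mean((h_l - h) < 2))

h = np.zeros((30, 30))            # illustrative flat surface
print(contact_density(h, 1.5))    # -> 1.0: every site is in contact
```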
All simulations are done using a 30 × 30 surface lattice. We have done sample simulations using a 50 × 50 surface lattice and observed no noticeable change in the results. The mechanical-equilibrium calculations are also very CPU-intensive, meaning that if we want to do a thorough analysis of a vast parameter space, keeping the size as small as possible is very favorable. Periodic boundaries are also very forgiving with respect to system size, so if we were to open up a boundary, caution should be taken to ensure that the boundary effects remain negligible.
A. Equilibration
In this section we investigate how the contacts in the system form as the system evolves from the initial state to its equilibrium state.
In Fig. 6 we present a high- and a low-pressure simulation of the same system. For the low-pressure case we observe three stages. In the primary stage the number of contacts increases very slowly, since most of the contacts which form are unstable due to their small size and dissolve quickly. In the secondary stage growth is rapid, since here one or more contacts are above the critical size needed to remain stable. Finally we enter a stage where all the contacts have coalesced into a single contact and the average number of formed contacts equals the average number of broken contacts. This process has a clear similarity to kinetically limited growth theory, which passes through the same stages towards the stationary state [36].
For even lower pressures the initial surface-surface separation is too large for any stable contact clusters to appear, and the equilibrium state becomes equal to that with no attractive interactions (total separation). For higher pressures initial contacts are much easier to produce since the surfaces on average are closer to one another, and often start out with parts being in contact already. The primary stage is thus skipped.
We also observe that smaller clusters dissolve in favor of the larger ones, which is analogous to Ostwald ripening [37]. This process occurs since the concentration needed to stabilize a contact cluster decreases as the size of the cluster increases because a larger cluster is harder to dissolve. Hence the largest contact cluster has the fastest net growth, which makes the solution undersaturated for smaller contact clusters. Equally sized clusters will, given enough time, coalesce into a single dominant contact.
B. Equilibrium Contacts
In this section we investigate what determines the equilibrium contact size, shape and stability.
In Fig. 7 we show a selection of equilibrium shapes obtained from simulations at different rescaled temperatures kT/E_b and rescaled applied pressures F_0/(E_b A), using σ_0 = 1. We see three classes of shapes: islands for low contact densities, periodic bands for intermediate contact densities, and pits for large contact densities. The shapes appear rougher at higher temperatures, which is expected due to larger surface fluctuations.
It makes sense that the resulting shape should depend on the contact density, since we can imagine an island transforming into a band when it coalesces with its own periodic image, and the pits forming when the bands grow so wide they merge with their own periodic images.
In practice, surface dislocations and/or solute diffusion could set a characteristic length scale promoting the existence of separate islands. These islands could merge by forming bands between one another, which in several directions would lead to pit shapes. Nucleation on system boundaries could also lead to half-islands, bands and pits, depending on the size of the crystals growing on the boundaries.
It is clear that the equilibrium contact density is a key factor in deciding the behavior of the equilibrium contact. Since the contact density is not a priori known, we need to investigate how the contact density depends on the system parameters. This relationship is shown in Fig. 8.
For each value of σ_0 we observe a domain where ρ_WV = 0, i.e., no stable contacts are formed. This domain represents the cases where the surface fluctuations are not large enough relative to the surface-surface separation for stable contacts to form. Since applying more pressure decreases the separation, and increasing the temperature increases the surface fluctuations, we see this domain curving off with higher pressure F_0/(E_b A) and higher E_b/kT. At higher pressures, contact initiation is not limited by surface fluctuations, since the initial separation is low. In this limit, higher fluctuations simply mean that the surfaces fit together less tightly. Hence for large pressures we observe that increasing E_b/kT results in larger contacts.

Fig. 6. Equilibration from an initial state at time tν = 0 to a state where the contact density ρ_WV has stabilized, for two different levels of applied pressure F_0/(E_b A) using σ_0 = 1 and E_b/kT = 1. The left column shows results at F_0/(E_b A) = 0.5, which is considered a low pressure, and the right column shows results at F_0/(E_b A) = 1, which is considered a high pressure. The top row shows how the contact density grows with time. For low pressures, we observe a primary stage where the contacts grow slowly, before reaching a secondary stage where new contacts are formed rapidly. This stage eventually ends when chemical equilibrium is reached. The stages are separated by vertical dashed lines. The second row shows the accumulated number of formed and broken contacts in the system, which clearly reveals the differences between the three stages just described. The final row shows how the contact cluster centroids move in time. The crosses are vertical projections separated by a constant number of simulation cycles (the time step is not constant). For F_0/(E_b A) = 0.5 we see two prominent clusters coalescing into a single cluster as the secondary stage ends. For F_0/(E_b A) = 1, the first stage ends immediately, and smaller clusters dissolve in favor of growing one large dominant cluster.

Fig. 7. Equilibrium shapes with a given contact density ρ_WV at rescaled inverse temperature E_b/kT and rescaled external pressure F_0/(E_b A) using σ_0 = 1. We observe three classes of shapes: islands, bands and pits. The images have been rendered using Ovito [38].

Fig. 8. Plot of the contact density ρ_WV vs rescaled inverse temperature E_b/kT and rescaled external pressure F_0/(E_b A) for various repulsion strengths σ_0. The latter is relative to the bond energy E_b. Each data point is averaged over ten simulations. A white field means no stable contacts were formed. Higher pressures generally mean more contacts since the surfaces are closer, and thus we see more contacts at the top of each plot than at the bottom. The reverse occurs when we increase σ_0 (more repulsion), which is why the right figure has far fewer contacts than the left. As we heat the system by lowering E_b/kT, we observe fewer contacts, since the surface fluctuations become very large, which destabilizes contacts. Hence we see a drop in ρ_WV as we move left in each plot. The surface height fluctuations are those responsible for creating the initial stable contacts; hence cooling the system by increasing E_b/kT makes it very hard to initiate contacts, and we see a drop in ρ_WV after a certain point, unless the pressure is so high that contacts are inevitable. If contacts are inevitable, then lowering the fluctuations means that the surfaces can fit more smoothly on top of one another, and we expect an increase in ρ_WV. Hence we observe a decay in ρ_WV to the right in each plot for low pressures and an increase in ρ_WV to the right in each plot for high pressures. For higher σ_0, we see lines at certain pressures that decay more slowly with increasing E_b/kT than their surroundings, hinting at a mechanism that favors contact formation at certain pressures.

Fig. 9. Plot of the most frequently occurring equilibrium contact cluster shape for a given rescaled external pressure F_0/(E_b A), rescaled inverse temperature E_b/kT and repulsion strength σ_0. The latter is relative to the bond energy E_b. These data are obtained by pattern recognition on the same surfaces used to produce Fig. 8. It is clear that there is a strong correlation between the contact density ρ_WV and the stability of the different shapes. The parameter dependency explained in Fig. 8 thus holds for this figure as well, with the exception that some low contact density cases are missing since they do not correspond to any shape.
Contact favoring pressure levels
At high pressures the surface-surface separation is so low that contact clusters always form. An interesting effect, which we investigate in this section, is that at certain lower pressures contact clusters appear much more stable than at the immediately lower and higher pressure levels. This can be seen as stripes extending into the unstable domain (ρ_WV = 0) in Fig. 8. A contact favoring pressure level (CFPL) appears because the corresponding far equilibrium point of the confining surface h_λ is very close to an integer value. This means that when the crystal surface fluctuates into contact with the confining surface, the binding energy is close to the maximum value E_b at a separation of d_i = 1, which makes the contact harder to remove than if the separation were anything else.
For simplicity, let us consider a flat crystal surface at an integer height h̄. From Eq. (13) we know the far equilibrium point of the confining surface when there are no attractive interactions. The condition that this height takes an integer value n translates into the criterion of Eq. (17); solving it for the pressure yields the CFPLs, and from these the distance between two sequential CFPLs follows. The resulting expressions agree with the fact that the CFPLs appear further apart for larger σ_0. In Fig. 10 we show the predicted CFPLs (top row) together with simulations done at a specific value of E_b/kT = 1.2 (bottom row). The analytical predictions match the simulations very well; however, some levels are not predicted. This is expected, since the derivation completely ignores the attractive interactions.
The reason why the CFPLs appear flat is that if there is a small gap, the surfaces will snap into contact, resulting in the same scenario as if the confining surface were located at an integer value. This also explains why the contact density appears to possess discrete levels and transitions from one level to another quite abruptly.

Fig. 10. Plot of the criteria in Eq. (17) (top row) and contact density ρ_WV (bottom row) vs rescaled external pressure F_0/(E_b A) for σ_0 = 0.5 (left column) and σ_0 = 1.5 (right column). The simulations in the bottom row are performed using E_b/kT = 1.2. The top row predicts contact favoring pressure levels (CFPLs) when the function takes integer values. The stippled lines are visual aids to identify these levels. We see that Eq. (17) matches very well where the levels are sparse, but when they become crowded we miss some. This is expected, since when the contact density increases, the attractive interactions, which were completely ignored when deriving Eq. (17), become increasingly important.
If, instead of solving a mechanical equilibrium problem, we simply fixed the position of the confining surface, we would expect the contact density to oscillate smoothly as the confining surface position was varied. This behavior is phenomenologically similar to oscillatory hydration forces [16], where the energy cost of removing layers of water confined between surfaces oscillates with a period related to the thickness of the layers.
Equilibrium contact shape vs size
The contact density ρ_WV does not contain any information about the shape of the contact. However, by comparing the equilibrium densities in Fig. 8 to the most frequently occurring equilibrium shapes in Fig. 9, we see that the two are strongly correlated.

The equilibrium contact shape minimizes the free energy of the contact, and for high values of E_b/kT the free energy is dominated by the binding energies. The free energy due to binding energies is given by the number of available bonds into solution [30], which means we expect square-like contact shapes for a cubic lattice structure. For lower E_b/kT we would expect the shapes to fluctuate around these square-like shapes.
For simplicity we consider only square surfaces with area L² and square contact shapes with area A_c = L² ρ_WV. For a square island shape, the relationship between the contact area and the contact perimeter S_I is

S_I = 4√(A_c) = 4L√(ρ_WV). (20)

For the bands there is no perimeter associated with the width of the contact, hence the perimeter is simply

S_B = 2L. (21)

The pit shapes are inverted versions of the islands. The area of the pit cavity is Ā = L² − A_c = L²(1 − ρ_WV), such that the perimeter becomes

S_P = 4√(Ā) = 4L√(1 − ρ_WV). (22)

In order to predict which of these shapes is the most stable, we need to know which of the three cases has the shortest perimeter for a given value of ρ_WV. Comparing S_I and S_B yields

S_I < S_B ⟺ ρ_WV < 1/4, (23)

which means that for contact densities lower than 1/4 we expect islands. Comparing S_B and S_P yields

S_P < S_B ⟺ ρ_WV > 3/4, (24)

which means that for contact densities higher than 3/4 we expect pits.
Equivalently, we may say that we expect bands when ρ_WV ∈ [1/4, 3/4]. In Fig. 11 we show the number density of each occurring shape as a function of ρ_WV, and we see that the limits derived here agree with the simulations. We also see that islands and pits cannot be stable at the same value of ρ_WV without bands also being stable, which means that the bands are a transitional state between islands and pits.
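The perimeter comparison reduces to a simple classification rule. The sketch below evaluates Eqs. (20)-(22) for a given contact density and returns the minimal-perimeter shape; it only illustrates the argument above, with names of our own choosing.

```python
import numpy as np

def expected_shape(rho_wv, L=30):
    """Return the minimal-perimeter contact shape at density rho_wv."""
    S_I = 4 * L * np.sqrt(rho_wv)        # square island, Eq. (20)
    S_B = 2 * L                          # periodic band, Eq. (21)
    S_P = 4 * L * np.sqrt(1 - rho_wv)    # square pit, Eq. (22)
    shapes = {"island": S_I, "band": S_B, "pit": S_P}
    return min(shapes, key=shapes.get)

# Reproduces the stability limits of Eqs. (23) and (24):
# islands below rho_WV = 1/4, pits above 3/4, bands in between.
for rho in (0.1, 0.5, 0.9):
    print(rho, expected_shape(rho))
```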
C. Contact fluctuations
In this section we investigate how a stable contact fluctuates in time. In e.g. Fig. 6 we observed that the contact clusters are dynamic, and the contact density ρ_WV fluctuates around a mean value in time. We measure these fluctuations as the standard deviation of the time series of ρ_WV in equilibrium, that is

Δρ_WV = √(⟨ρ_WV²⟩_t − ⟨ρ_WV⟩_t²), (25)

where ⟨X⟩_t denotes the average of X in time.
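In practice this is a one-line estimator over the recorded equilibrium time series; a sketch (our naming) follows. It equals the population standard deviation of the series, i.e. `np.std(rho_series)`.

```python
import numpy as np

def contact_fluctuation(rho_series):
    """Delta rho_WV of Eq. (25): sqrt(<rho^2>_t - <rho>_t^2),
    computed from samples of rho_WV taken in equilibrium."""
    rho = np.asarray(rho_series)
    return np.sqrt(np.mean(rho ** 2) - np.mean(rho) ** 2)
```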
In the top and middle rows of Fig. 12, these fluctuations and the corresponding value of ρ_WV are presented for various pressures. We see that the fluctuations increase with increasing temperature, while the dependency on the pressure decreases. This happens because in high temperature systems thermal surface fluctuations are dominant. At high pressures the surfaces are always resting on one another, which means that applying more pressure has no effect. Hence the fluctuations stagnate.

Fig. 11. Number density of the occurring equilibrium shapes as a function of ρ_WV. For small contacts we get only islands (I), for medium contacts where ρ_WV ∈ [1/4, 3/4] we get bands (B) as well, and above this limit we get mostly pits (P). These results are obtained by combining Fig. 8 with Fig. 9. The dashed lines are the theoretical stability limits for the bands from Eqs. (23) and (24). It is clear that they agree well with the simulations.
Except for high temperature systems (E_b/kT = 0.5), we see that larger contact clusters have lower fluctuations. This is to be expected, since the stability of a contact depends on its total number of nearest neighbor bonds, and large compact clusters have a low surface to area ratio. These results thus suggest that the mobility of a contact cluster increases with increasing temperature and decreasing size.

Fig. 12. Contact fluctuations (top row) and contact density ρ_WV (middle row) for various pressures; the vertical dashed lines represent the contact favoring pressure levels (CFPLs) calculated by Eq. (17). The bottom row shows the average contact cluster height ΔH_c/l_0 defined in Eq. (26). The crosses represent simulations where no stable contact clusters were formed. Since the thermal fluctuations of the surface increase with temperature, and the contact is part of the surface, the contact fluctuations also follow this trend. We also see that the contact cluster fluctuations decrease as ρ_WV increases. The jumps in the fluctuations are clearly caused by ρ_WV transitioning to a different CFPL.

Fig. 13. Fluctuations in the number of broken bonds Δn− and number of gained bonds Δn+ sampled every 10 000 time steps for various rescaled inverse temperatures E_b/kT and rescaled external pressures F_0/(E_b A) using σ_0 = 1. The vertical dashed lines represent the contact favoring pressure levels (CFPLs) calculated by Eq. (17). As expected, increasing the temperature increases the fluctuations. The fluctuations stagnate at the point where the surfaces are initiated in contact, such that applying more pressure has no effect. We clearly see a correlation between the jumps in fluctuations and the CFPLs.
Fluctuations in the contact cluster size and the surface roughness are correlated, since the more the surface heights fluctuate, the more often we expect the surface to transition in and out of the contact regime. This is the same mechanism that correlated the pressure (initial height) and the contact size ρ_WV in Fig. 8, i.e., smaller height fluctuations are required to form contacts if the surface-surface separation is small. This suggests that the height of the contact cluster should have an impact on its fluctuations. Assuming a stable contact cluster exists, we calculate this height as

ΔH_c = ⟨h_i⟩_{Ω_c} − ⟨h_i⟩_{Ω̄_c}, (26)

where the first average is over the domain which is in contact (Ω_c), and the second average is over the domain which is out of contact (Ω̄_c). From the bottom row of Fig. 12 we observe only small changes in ΔH_c as the pressure is increased. The reason is that even if the initial surface-surface separation is lower at higher pressures, once a stable contact has formed we are in practice locked into contact for the rest of the simulation. From here on the system dissolves the part of the surface which is out of contact in order to grow the contact cluster to the optimal equilibrium height ΔH_c and size ρ_WV. These two quantities are balanced, since increasing ΔH_c increases the energy cost of growing new pillars to increase ρ_WV (more particles are needed).
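A sketch of the height measurement of Eq. (26), using the same contact criterion as before (heights in lattice units l_0; the names are ours):

```python
import numpy as np

def contact_cluster_height(h, h_l):
    """Average contact cluster height of Eq. (26): mean height over the
    contact domain Omega_c minus mean height over the rest, Omega_c-bar."""
    in_contact = (h_l - h) < 2
    if not in_contact.any() or in_contact.all():
        return np.nan   # height undefined unless both domains are present
    return h[in_contact].mean() - h[~in_contact].mean()
```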
We also observe jumps in the contact fluctuations in Fig. 12. However, looking at Fig. 13, where we plot the fluctuations in the number of gained and broken bonds separately, it is clear that these jumps are caused by the jumps in ρ_WV due to transitions to a different contact favoring pressure level.
D. Out-of-equilibrium systems
Here we will give a qualitative description of the typical behavior of the system out of equilibrium.
When the system has converged to equilibrium at some concentration c_eq, a supersaturation Ω ≡ c/c_eq − 1 can be introduced by setting the concentration to

c_Ω(t) = c_eq(Ω + 1), (27)

and keeping it constant regardless of how many particles are dissolved or deposited. Figure 14 shows the four typical behaviors the system exhibits out of equilibrium. For dissolving systems (left column, where Ω < 0) we see that if the pressure is low (top row), the equilibrium contacts simply dissolve and we end up with minor fluctuations and no stable contacts between the surfaces. Dissolution at high pressures (bottom row zoom-in) causes the initial large equilibrium contact to dissolve steadily as well; however, due to the high pressure, the surfaces never decouple completely. This causes ρ_WV to spike in value whenever the layer closest to the confining surface has been dissolved completely, since this enables the confining surface to snap down to the next layer.
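The concentration control of Eq. (27) could be realized as a trivial override inside the simulation loop; the helper below is hypothetical and only illustrates the bookkeeping.

```python
def driven_concentration(c_eq, omega):
    """Eq. (27): hold the solute concentration at c_Omega = c_eq*(Omega+1),
    regardless of how many particles dissolve or deposit."""
    return c_eq * (omega + 1.0)

# Omega < 0 drives net dissolution, Omega > 0 drives net growth,
# and Omega = 0 recovers the equilibrium concentration c_eq.
c = driven_concentration(c_eq=0.37, omega=-0.2)  # c_eq value illustrative
```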
For growth (middle column, where Ω > 0), we see that if the pressure is too high (bottom row), the surfaces simply grow into a perfect contact at ρ_WV = 1. This occurs because the energy cost of breaking the contacts is very high due to the combined effect of a large number of contacts and a high pressure. As a result, resting on the crystal surface is the only mechanical equilibrium for the confining surface [condition 2(a) from Sec. II C always holds].
For growth at low pressures (center of the top row), we see that the surfaces repeatedly separate, i.e., ρ_WV drops to 0, after which it grows rapidly, as particles preferentially stick in the newly formed cavity between the existing contact cluster and the recently displaced confining surface. This newly formed contact has the same shape as the equilibrium contact. When the healing of the broken contact has stagnated, the crystal surface starts to rise again, and the cycle repeats. This produces the jagged profile for growth at low pressures shown in the top zoom-in of Fig. 14. Each repeated cycle lifts the confining surface one lattice unit.
This repeating separation occurs because at low pressures the initial contact is small, such that the system can build up enough repulsive energy to enable the mechanical equilibrium at h_λ, where the confining surface is not resting on the crystal surface. When h_λ is enabled, whether the surfaces separate is controlled by the bond breaking criteria from Sec. II C.
The period of this cycle should therefore depend on two things: how quickly h_λ can be enabled, which is controlled by the supersaturation, and the rate of bond breaking, which does not depend on the supersaturation, but on the attempt frequency (which we have set to every A cycles). A larger pressure makes it harder both to enable h_λ and to break the bonds, and should thus increase the period. If we increase the attempt frequency, the surfaces would separate sooner after h_λ is enabled, and if we decrease it, the separation would happen later, up to a point where the surfaces are able to merge completely before they are able to separate.
IV. DISCUSSION AND CONCLUSIONS
The results suggest that the force of crystallization [26], which has been observed and studied both in early experiments [39,40] and in recent ones [28], can cause a lifting of the confining surface even when there are stable contact clusters between the surfaces. This means that we could have potentially large and irregular surface variations in such a system simply due to attractive interactions promoting contacts between the surfaces. This is in agreement with recent experimental observations [28]. For dissolution at high pressures we found the surfaces to possess a continuous state of contact after the dominant equilibrium contact cluster dissolved. Hence the results also suggest that pressure solution might occur with parts of the surfaces being in contact, which is in agreement with experimental observations [29]. By including attractive surface-surface interactions, we have thus taken the original model [30] from producing solely flat interfaces to producing structural roughness in the limits of low-pressure growth and high-pressure dissolution. This supports our hypothesis that attractive surface-surface interactions are an essential mechanism in confined crystallization.

Fig. 14. Typical out-of-equilibrium behavior for different supersaturations Ω and pressures. Prior to this point the system has been equilibrated by fixing the effective number of particles. The left column shows results for a dissolving system (Ω < 0), the middle column shows results for a growing system, and the right column shows zoom-ins of the black squares in the respective row. Dissolution at low pressures is simply a decay of the initial contact cluster, and growth at high pressures simply results in the two surfaces merging completely (ρ_WV = 1). Growth at low pressures (top zoom) periodically causes the confining surface to separate from the crystal surface. For dissolution at high pressures (bottom zoom) we observe an initial stage where the contact cluster dissolves, after which the contact density spikes every now and then.
Generally we observe a dominant equilibrium contact cluster forming between the surfaces from an initial state with no such contact. The pressure and temperature dependency of the contact cluster size indicates that it is limited by thermal surface fluctuations (roughness). If the fluctuations are insufficient, no stable contacts are formed. During equilibration we recognize known concepts from kinetics-limited growth theory, such as primary and secondary nucleation stages, coalescence and Ostwald ripening. These effects are known to appear in island nucleation and step growth [36,37], and their presence suggests that the kinetics are properly treated in our model.
Since the initial surface-surface separation depends smoothly on the external pressure, and the thermal fluctuations depend smoothly on the temperature, the fact that fluctuations limit the contact size tells us that the coexistence of the repulsive and attractive interactions is stable not just for a select few parameter values, but for all. The fact that certain pressure levels promote the formation of contacts, because the initial surface-surface separation is such that formed contacts are especially hard to dissolve, is phenomenologically similar to oscillatory hydration forces [16]. This shows that the model produces effects known to be associated with confined surfaces.
The size of the contact cluster relative to the system size, which we refer to as the contact density ρ_WV, was found to be the key parameter deciding whether the equilibrium contact cluster is shaped as an island, a band or a pit, as well as governing the stability of the contact in time (e.g. the fluctuations and mobility). The stability regions of these shapes are in excellent agreement with theoretical predictions. This demonstrates that the model can be used to study details regarding both when and how contacts form between surfaces.
Possible extensions of the model include introducing a Hamaker constant [16] in the attractive interaction to model a different material in the confining surface, and using the Eyring equation [34] for the rates (i.e., calculating energy barriers), such that a parameter for the bond breaking frequency would not be necessary. Moreover, a Lennard-Jones potential [33] could be used between the surfaces instead of the pure van der Waals term we have used, which would remove the need to distinguish between the surfaces resting on one another and being separated. Using discrete solute particles would enable the study of diffusion limited systems [41]. Elastic interactions could also be added to the surfaces [42-45].
Details aside, it is fascinating how much interesting physics came out of simply extending the previous model [30] by counting the confining surface as a neighbor. This leads us to believe that there is something simple yet fundamentally correct in our description of how attractive and repulsive forces work together in a confined system.
ACKNOWLEDGMENTS
This study was supported by the Research Council of Norway through the project "Nanoconfined crystal growth and dissolution" (No. 222386). We acknowledge support from the Norwegian High Performance Computing (NOTUR) network through the grant of machine access.
The phenotype of a biological system needs to be robust against mutation in order for the system to sustain itself between generations. On the other hand, the phenotype of an individual also needs to be robust against fluctuations of both internal and external origin that are encountered during growth and development. Is there a relationship between these two types of robustness, one acting within a single generation and the other over evolution? Could stochasticity in gene expression have any relevance to the evolution of these types of robustness? Robustness can be defined by the sharpness of the distribution of phenotype; the variance of the phenotype distribution due to genetic variation gives a measure of 'genetic robustness', while that of isogenic individuals gives a measure of 'developmental robustness'. Through simulations of a simple stochastic gene expression network that undergoes mutation and selection, we show that in order for the network to acquire both types of robustness, the phenotypic variance induced by mutations must be smaller than that observed in an isogenic population. As the latter originates from noise in gene expression, this means that genetic robustness evolves only when the noise strength in gene expression is larger than some threshold. In such a case, the two variances decrease throughout the evolutionary time course, indicating an increase in robustness. The results reveal how the noise that cells encounter during growth and development shapes networks' robustness to stochasticity in gene expression, which in turn shapes networks' robustness to mutation. The necessary condition for the evolution of robustness, as well as the relationship between genetic and developmental robustness, is derived quantitatively through the variances of phenotypic fluctuations, which are directly measurable experimentally.
INTRODUCTION
Robustness is the ability to maintain function against changes in the parameters of a system [1,2,3,4,5]. In a biological system, the changes have two distinct origins, genetic and epigenetic. The former concerns genetic robustness, i.e., the rigidity of the phenotype against mutation, which is necessary to maintain a high-fitness state. The latter concerns fluctuations in the number of molecules and in the external environment.
Indeed, the phenotypes of isogenic individual organisms are not necessarily identical. Chemotaxis [6], enzyme activities, and protein abundance [7,8,9,10,11] differ even among individuals sharing the same genotype. Recent studies on stochastic gene expression have elucidated the sources of such fluctuations [7]. The question most often asked is how some biological functions are robust to phenotypic noise [11,12], while fluctuations may also play positive roles in cell differentiation, pattern formation, and adaptation [13,14,15,16].
Noise, in general, can be an obstacle to tuning a system to the fittest state and maintaining it there. The phenotype of an organism is often reproducible even under a fluctuating environment or under molecular fluctuations [2]. Therefore, a phenotype that is relevant to fitness is expected to retain some robustness against such stochasticity in gene expression, i.e., robustness of the 'developmental' dynamics to noise. A phenotype having a higher fitness is then maintained under noise. How is such "developmental robustness" achieved through evolution? In the evolutionary context, on the other hand, another type of robustness, robustness to mutation, needs to be considered. When genetic changes occur, the gene expression dynamics are perturbed, so that a phenotype with a high fitness may no longer be maintained. This "genetic robustness" concerns the stability of a high-fitness state against mutation.
Whether these two types of robustness emerge under natural selection has long been debated in the context of developmental dynamics and evolution theory [3,5,17,18,19,20], since the proposition of stabilizing selection by Schmalhausen [21] and canalization by Waddington [22,23,24]. Are developmental robustness to noise and genetic robustness to mutation related? Is phenotypic noise relevant to attaining robustness to mutation? In the present paper, we answer these questions quantitatively with the help of a dynamical network model of gene expression.
Under the presence of noise in gene expression, the phenotype, as well as the fitness, of isogenic organisms is distributed, usually following a bell-shaped probability function. When the phenotype is less robust to noise, this distribution is broader. Hence the variance of this distribution, i.e., the variance of isogenic phenotypic fluctuations, denoted as V_ip, gives an index of robustness to noise in the developmental dynamics. On the other hand, robustness to mutation is measured from the fitness distribution over individuals with different genotypes. An index for it is given by the variance of phenotypic fluctuations arising from the diversity of genotypes in a population [25,26,27], denoted here as V_g. This variance V_g increases as the fraction of low-fitness mutants increases.
Here we show that evolution to increase both types of robustness is possible only when the inequality V_ip ≥ V_g is satisfied.
Since the isogenic phenotypic fluctuation V_ip increases with noise, this means that the evolution of robustness is possible only when the amplitude of phenotypic noise is larger than some critical value, as derived from V_ip ≥ V_g, implying a positive role of noise in evolution. We demonstrate that both variances V_ip and V_g decrease in the course of evolution, while keeping the proportionality between the two. This proportionality is consistent with an observation in a bacterial evolution experiment [16,17,18].
We explain the origin of the critical noise strength by noting that smooth dynamical behavior, free from a rugged potential landscape, evolves as a result of phenotypic noise. When the noise amplitude is smaller than the threshold, we observe that low-fitness mutants accumulate, so that robustness to mutation is not achieved. The generality and relevance of our results to biological evolution are briefly discussed.
Theoretical Framework on Genetic-Phenotypic Relationship
In a natural population, both the phenotype and the genotype differ among individuals. Let us consider the population distribution P(x, a), where x is a variable characterizing a phenotype and a is the corresponding genotype [18]. Here the phenotype x is responsible for the fitness of an individual, and selection depending on x is considered as an evolutionary process. Since the phenotype differs even among isogenic individuals, the distribution P(x; a = a_0) for a fixed genotype a_0 has some variance. This isogenic phenotypic variance V_ip, defined as the variance over clones, is written as V_ip(a) = ∫ (x − x̄(a))² P(x, a) dx, where x̄(a) is the average phenotype of a clonal population sharing the genotype a, namely x̄(a) = ∫ x P(x, a) dx. This variation of the phenotype is a result of noise through the developmental process that shapes the phenotype. If this variance is smaller, the phenotype is less influenced by noise, and thus V_ip works as a measure of the robustness of the phenotype against noise.
On the other hand, standard evolutionary genetics [25,26,27] mainly studies the phenotypic variance due to genetic variation. It measures phenotypic variability due to the diversity of genotypes in a population. This phenotypic variance due to genetic variation, termed V_g here, is defined as the variance of the average x̄(a) over genetically heterogeneous individuals. It is given by V_g = ∫ (x̄(a) − ⟨x̄⟩)² P(a) da, where P(a) is the distribution of the genotype a and ⟨x̄⟩ is the average of x̄(a) over genotypes a. While V_ip is defined as the variance over clones, i.e., individuals with the same genotype, V_g comes from individuals with different genotypes. As V_g becomes smaller, the phenotypic change caused by genetic variation is smaller. Hence V_g gives a measure of the robustness of the phenotype against mutation.
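Given phenotype samples grouped by genotype, the two variances can be estimated directly. The sketch below assumes equal weight P(a) for each sampled genotype and uses our own naming; the example numbers are made up.

```python
import numpy as np

def phenotypic_variances(samples_by_genotype):
    """Estimate V_ip and V_g from phenotype measurements.

    samples_by_genotype: dict mapping a genotype label to an array of
    phenotype values x measured over clones of that genotype.
    V_ip: mean within-genotype variance (variance over clones).
    V_g:  variance of the per-genotype mean phenotypes x_bar(a).
    """
    groups = list(samples_by_genotype.values())
    v_ip = np.mean([np.var(x) for x in groups])
    v_g = np.var([np.mean(x) for x in groups])
    return v_ip, v_g

# Example with two genotypes, five clones each.
v_ip, v_g = phenotypic_variances({
    "a0": [0.9, 1.1, 1.0, 0.8, 1.2],
    "a1": [0.5, 0.7, 0.6, 0.4, 0.8],
})
```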
We have previously derived an inequality V_ip > V_g between the two variances by assuming evolutionary stability of the population distribution P(x, a), that is, preservation of single-peakedness through the course of evolution [18] (see Supporting Text S1). Indeed, the single-peaked distribution collapses as V_ip approaches V_g, where the distribution extends to very low values of x (fitness). In other words, error catastrophe occurs at V_g ≈ V_ip. (Here error catastrophe means the accumulation of low-fitness mutants in the population over generations, and the term is used here by extending its original meaning by Eigen [28].) For each course of evolution under a fixed mutation rate, proportionality between V_g and V_ip is derived, since the genetic variance increases roughly proportionally to the mutation rate [18].
Note, however, that the derivation of these relationships (V_ip ≥ V_g, error catastrophe at V_g ≈ V_ip, and proportionality between V_g and V_ip for a given course of evolution) is based on the existence of a two-variable distribution function P(x = phenotype, a = genotype), and on the postulate that the single-peaked distribution is maintained throughout evolution, which is not trivial. Hence the above relationships need to be examined with models of evolution. In addition, why does the population distribution extend to low-fitness values when the phenotypic fluctuation V_ip is smaller than V_g? Or, to put it another way, why do systems with small phenotypic noise run into 'error catastrophe'? In fact, the emergence of error catastrophe as a result of decreasing the isogenic phenotypic fluctuation below V_g may look rather counter-intuitive, since in general one expects fluctuations to perturb a system away from the fittest state. The necessity of fluctuations for evolution to increase robustness to noise and to mutation needs theoretical examination.
Model
To study the proposed relationships, we need to consider seriously how the phenotype is shaped through a complex 'developmental process'. In the present paper, we use the term 'development' in a broad sense, including the process by which a unicellular organism reaches cell division. It is a dynamical process that shapes a phenotype at a 'matured' state (where fitness is defined) from a given initial state. In general, this dynamical process is complex, so that it may not reach an identical phenotype due to noise along the way. This leads to the isogenic variance of the phenotype, V_ip. On the other hand, the equation governing the developmental process is varied as a result of mutation. The phenotypic variance over a population with distributed genotypes gives V_g.
We consider a simple model that satisfies the above requirements on 'development'. It consists of a complex dynamical process to reach a target phenotype under noise, which may alter the final phenotypic state. We do not choose a biologically realistic model describing a specific developmental process, but instead take a model as simple as possible that satisfies the minimal requirements for our study. Here we take a simplified model borrowed from gene regulatory networks, in which the expression of a gene activates or inhibits the expression of other genes under noise. These interactions between genes are determined by the network. The expression profile changes in time and eventually reaches a stationary pattern. This gene expression pattern determines the fitness. Selection occurs after the introduction of mutations to the gene network at each generation. Among the mutated networks, we select networks with higher fitness values. Since there is a noise term in the gene expression dynamics, fitness fluctuates even among individuals with an identical gene network, which leads to the isogenic fluctuation V_ip. On the other hand, the expression pattern varies by mutation of the network, giving rise to variation in the average fitness and hence to V_g.
This simplified gene expression follows typical switch-like dynamics with a sigmoidal input-output behavior [29,30,31,32,33], widely applied in models of signal transduction [34] and neural networks [35] (for a related evolution model with discrete states, see e.g. [24]). The dynamics of the expression level x_i of gene i is described by

dx_i/dt = tanh(Σ_j J_ij x_j) − x_i + s η_i(t), (1)

where J_ij = −1, 1, 0, and η(t) is Gaussian white noise with ⟨η(t)η(t′)⟩ = δ(t − t′). M is the total number of genes, and k is the number of output genes that are responsible for the fitness. The value of s represents the noise strength that determines the stochasticity in gene expression. (For simplicity we mainly consider the case where the noise amplitude is independent of x_i; including such x-dependence of the noise amplitude does not alter the conclusions to be discussed.) Owing to the sigmoidal function tanh, x_i has a tendency to approach either 1 or −1, which is regarded as 'on' or 'off' of gene expression. Even though x is defined over (−∞, ∞), it is attracted to the range [−1, 1] (or slightly above or below this range due to the noise term). We consider a developmental process leading to a matured phenotype from a fixed initial state, which is given by (−1, −1, …, −1), i.e., all genes are off, unless noted otherwise. (This specific choice of initial condition is not important.) Let us define a fitness function so that the gene expression levels x_i for genes i = 1, 2, …, k (< M) should reach an 'on' state, i.e., x_i > 0. The fitness is maximal if all k genes are on after a transient time span T_ini, and minimal if all are off. To be specific, we define the fitness function as

F = −Σ_{j=1}^{k} (1 − [S(x_j)]_temp), (2)

where S(x) = 1 for x > 0 and 0 otherwise, and […]_temp is the time average between t = T_ini and t = T_f. (The time average here is not essential, because the gene expression levels x_i become fixed after some time in most cases; adopting the value (S(x_j) − 1) after the initial time T_ini leads to the same result.) The fitness function takes the maximum value F = 0 when the selected pattern of gene expression (x_i; i = 1, 2, …, k) is always 'on' and takes the minimum (F = −k) when all k genes are always off. Note that the fitness is calculated only after time T_ini, which is chosen sufficiently large so that the temporal average is computed after the gene expression dynamics has fallen onto an attractor. This initial time can be considered as the time required for the developmental dynamics.
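A minimal Euler-Maruyama sketch of Eqs. (1) and (2), under our reading of the dynamics as dx_i/dt = tanh(Σ_j J_ij x_j) − x_i + s η_i(t); the step size and time spans are illustrative choices of ours, not the paper's.

```python
import numpy as np

def run_development(J, s, k, T_ini=200.0, T_f=250.0, dt=0.01, rng=None):
    """Integrate Eq. (1) from the all-off state (-1, ..., -1) and
    return the fitness of Eq. (2): F = 0 is best, F = -k is worst."""
    rng = rng or np.random.default_rng()
    M = J.shape[0]
    x = -np.ones(M)                          # initial condition: all off
    n_steps = int(T_f / dt)
    n_avg = 0
    on_time = np.zeros(k)
    for step in range(n_steps):
        noise = s * np.sqrt(dt) * rng.standard_normal(M)
        x += dt * (np.tanh(J @ x) - x) + noise
        if step * dt >= T_ini:               # time-average only after T_ini
            on_time += (x[:k] > 0)
            n_avg += 1
    return -np.sum(1.0 - on_time / n_avg)    # Eq. (2)
```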
As the model contains a noise term, the fitness fluctuates from run to run, which leads to a distribution of F even for the same network. Hence we obtain the distribution p(F; g) for a given network 'g', whose variance gives the isogenic phenotypic fluctuation. At each generation, we compute the fitness F over L runs to obtain the average fitness F̄ of a given network. Now we consider the evolutionary process of the network. Since the network is governed by J_ij, which determines the 'rule' of the dynamics, it is natural to treat J_ij as the genotype. Individuals with different genotypes have different sets of J_ij. At each generation there are N individuals with different sets of J_ij. For each individual network, we compute the average fitness F̄. Then we select the top N_s (< N) networks that have the highest fitness values. (The ratio N/N_s corresponds to the selection pressure; as it becomes larger, the evolution speed increases. However, the specific choice of this value is not important for the results to be discussed.)
At each generation, mutation changes the network, i.e., changes J_ij, at a given mutation rate μ. We rewire the network at a given rate, so that changes in J_ij produce N new networks. (In most simulations, only a single path, i.e., a single pair of i, j, is changed. The mutation rate can be lowered by changing a path only with some probability. Although it matters for the evolution speed and the error catastrophe point to be discussed, the conclusions are not altered by the specific choice of μ.) Here we make N/N_s mutants from each of the top N_s networks, so that there are again N networks in the next generation. From this population of networks we repeat the process of developmental dynamics, fitness calculation, selection, and mutation. (Instead of this simple genetic algorithm, we could also assume that the number of offspring increases with fitness. This choice does not alter the conclusions to be presented.)
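Putting the pieces together, the selection-mutation loop described above might be organized as follows; the population sizes and rates are placeholders, the 50% initial path density mentioned below is approximated by a uniform draw over {−1, 0, 1}, and run_development is the sketch given after Eq. (2).

```python
import numpy as np

def evolve(N=100, N_s=25, M=16, k=4, s=0.1, generations=50, L_runs=10,
           rng=None):
    """Evolve N networks: keep the top N_s by average fitness F_bar,
    then refill the population with single-path mutants of the survivors."""
    rng = rng or np.random.default_rng()
    # Random initial networks with J_ij in {-1, 0, 1}.
    pop = [rng.choice([-1, 0, 1], size=(M, M)) for _ in range(N)]
    for _ in range(generations):
        f_bar = [np.mean([run_development(J, s, k, rng=rng)
                          for _ in range(L_runs)]) for J in pop]
        top = [pop[i] for i in np.argsort(f_bar)[-N_s:]]  # highest F_bar
        pop = []
        for J in top:
            for _ in range(N // N_s):
                child = J.copy()
                i, j = rng.integers(M, size=2)   # rewire a single path
                child[i, j] = rng.choice([-1, 0, 1])
                pop.append(child)
    return pop
```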
Simulations start from a population of random networks with a given fraction of nonzero paths (for example, 50% of the J_ij are nonzero). At each generation, the N individuals have slightly different networks J_ij, so that the values of F̄ differ. We denote the fitness distribution over individuals with different genotypes as P(F̄). On the other hand, the fitness distribution for an identical network 'g' is computed to obtain p(F; g).
Remark:
The developmental dynamics and selection process in our model are highly simplified. Still, the model is relevant for examining general statements on phenotypic fluctuations, as it at least captures complex dynamics giving rise to a phenotype, stochasticity in the dynamics, mutation, and selection according to a given phenotype. Indeed, real gene expression dynamics depend on environmental conditions, and fitness is defined by expression patterns adapted to each environmental condition. We have also carried out simulations imposing such fitness, and the results to be discussed (with regard to V_g and V_ip) are unchanged.
RESULTS
Let us first see how the evolutionary process changes as a function of the noise strength s. Over generations, the peak position of P(F̄) increases, so that the top values of F̄ in the population approach the highest value 0. Indeed, in all cases, the top group quickly evolves to the highest fitness state F̄ = 0 (see Fig. 1a; even for s = 0.2, the highest fitness value approaches 0 after a few hundred more generations). The time necessary for the system to reach this state becomes shorter as the phenotypic noise decreases (see Fig. 2). On the other hand, the time evolution of the distribution P(F̄) depends drastically on the noise strength s. When s is small, the distribution is broad and the individuals with the lowest F̄ remain at low fitness, while for large s even the individuals with the lowest fitness approach F̄ = 0 (see Fig. 1b and Fig. 3). There is a threshold noise s_c (≈ 0.02) below which the distribution P(F̄) broadens, as is discernible in the variance of the distribution, V_g, in Fig. 2. Here, the top individuals reach the highest fitness, leaving the others at very low fitness. As a result, the average fitness over all individuals, ⟨F̄⟩ = ∫ F̄ P(F̄) dF̄, is low. ⟨F̄⟩ and the lowest fitness over individuals, F̄_min, after a sufficiently large number of generations, are plotted against s in Fig. 2. The abrupt decrease in fitness indicates a threshold noise s_c below which low-fitness mutants always remain in the distribution. For s > s_c, the distribution P(F̄) has a sharp peak at F̄ ≈ 0, where the variance is rather small. Distributions below and above s_c are displayed in Fig. 3. (This type of transition is also observed by increasing the mutation rate while fixing the noise strength at s > s_c.)
Let us study the relationship between V_g and V_ip. Here V_ip is defined as the variance of the distribution p(F; genotype), i.e., over individuals with the same genotype. As the distribution p depends on each individual with a different genotype, the variance changes accordingly. Naturally, the top individual has a smaller variance, and individuals with lower fitness have larger variances. As a measure of V_ip, we used either the average of this variance over all individuals or the variance of the phenotype of the gene network located closest to the peak of the distribution P(F̄). The two estimates of V_ip do not differ much, and the following conclusions hold in both cases. V_g, on the other hand, is simply the variance of the distribution P(F̄), i.e., over the individuals with different genotypes present.
The relationship between V_g and V_ip thus evaluated is plotted in Fig. 4. We find that both variances decrease through the evolutionary time course when s > s_c, where we note: (i) V_ip > V_g is satisfied; (ii) V_g ∝ V_ip during the evolutionary time course under a fixed noise strength s (> s_c) and a fixed mutation rate. As s is lowered toward s_c, V_g increases so that it approaches V_ip.
(iii) V_g ≈ V_ip at s ≈ s_c, where error catastrophe occurs. In other words, the fittest networks maintaining a sharp distribution around the peak dominate only when V_ip > V_g is satisfied. As s is decreased to s_c, V_ip approaches V_g, error catastrophe occurs and a considerable fraction of low-fitness mutants accumulates. Hence, the relationships proposed theoretically by assuming evolutionary stability of a two-variable distribution function P(x = phenotype, a = genotype) are confirmed. Here, without introducing phenomenological assumptions, the three relationships are observed in a general stochastic gene-network model. Why does the system fail to maintain the highest fitness state under small phenotypic noise s < s_c? To study the difference in the dynamics evolved for different values of s, we choose the top individual (network) that evolved at s = s_0 and place it under a different noise strength s = s′. In Fig. 5, we plot the fraction of runs giving rise to F = 0 under such circumstances. As shown, the successful fraction decreases when s′ goes beyond s_0. In other words, a network evolved under a given noise strength successfully reaches the target gene expression up to that noise level, but begins to fail at higher noise strengths. Accordingly, the distribution p(F; gene) extends to lower fitness values when a network evolved under small phenotypic noise develops under a higher noise level. On the other hand, a network evolved under high-level noise maintains a high fitness value even when the noise is lowered.
Next we study the basin structure of the attractors in the present system. Note that an orbit of the network with the highest fitness, starting from the prescribed initial condition, lies within the basin of attraction of an attractor corresponding to the target state (x_i > 0 for i = 1, …, k). Hence the basin of attraction of this target attractor is expected to be larger for dynamics evolved under higher-level noise. We simulated the dynamics (1) for the evolved fittest network under zero noise, starting from a variety of initial conditions over the entire phase space, and measured the distribution of F at each attractor. The distribution is shown in Fig. 6. (Due to the symmetry under x_j = 1 ↔ x_j = −1 in the model, the distribution is symmetric around F = −k/2 when all initial conditions are taken. In fact, by starting from x_i = 1 for all i, the orbit reaches an attractor with x_j < 0 for j = 1, …, k, resulting in F = −k.) For the network evolved under s > s_c, the distribution has a sharp peak at F = 0 (and at F = −k due to the symmetry), with more than 40% attraction to each. On the other hand, for the networks evolved under s < s_c, the peak height at F ≈ 0 is very small, i.e., the basin of the attractor with F = 0 is tiny. There exist many small peaks corresponding to attractors with −k < F < 0, having similar basin volumes. In fact, the basin volumes of the attractors with −k < F < 0 grow as s is decreased, and are dominant for s < s_c.
Dynamic Origin of Robust Evolution
The difference in the basin structure suggested by Fig. 6 is schematically displayed in Fig. 7. For a network evolved under s > s_c there is a large, smooth attraction to the target state, while for dynamics evolved under s < s_c the phase space is split into small basins. Let us consider the distance between the basin boundaries and a 'target orbit' starting from (−1, …, −1) and reaching x_i > 0 (for 1 ≤ i ≤ k), which we denote by D. The distance D remains small for dynamics evolved under a low noise strength s < s_c, and increases for dynamics evolved under higher noise. It is interesting to note that evolution influences the basin structure globally over the phase space, even though the fitness condition is applied locally to an orbit starting from a specific initial condition.
The results in Fig. 5 and Fig. 6 imply that gene regulatory networks that operate and evolve under a noisy environment exhibit qualitatively different dynamics compared to those subjected to low-level noise. In our model, the fitness of an individual changes when its gene expression x_j for j = 1, …, k changes sign. Recall that the fixed-point solution x_i = tanh(Σ_j J_ij x_j) changes sign when Σ_j J_ij x_j in the sigmoidal function changes sign. This change may occur during the developmental dynamics due to noise, and we call such points in the phase space 'turning points'. When an orbit of Eq. (1) passes over turning points, x_j takes a negative value for some j (1 ≤ j ≤ k) at the attractor (see Fig. 8 for a schematic representation). Since there are many gene expression variables and the values of J_ij are distributed over −1 and 1, the term tanh(Σ_j J_ij x_j) generally changes sign at several points in the phase space {x_j}. Hence there can be many turning points in the phase space. The fittest network, with F̄ ≈ 0, chooses orbits that have no turning points within the noise range around the original orbit. An orbit of the fittest individual evolved under low-level noise encounters many turning points when subjected to a noisy environment. The average distance between the turning points and an orbit that has reached the target gene expression pattern is estimated by the distance D defined above. Recall that the distance D is small for dynamics evolved under a low noise strength. Such dynamics, if perturbed by a higher level of noise, are easily caught by the turning points, which explains the behavior shown in Fig. 5.

Figure 6. Distribution of the fitness value when the initial condition for x_j is not fixed at −1, but is distributed over [−1, 1]. We choose the evolved networks as in Fig. 5, and for each network we take 10000 initial conditions and simulate the dynamics (1) without noise to measure the fitness value F after the system has reached an attractor (as the temporal average over 400 < t < 500). The histogram is plotted with a bin size of 0.1. doi:10.1371/journal.pone.0000434.g006
Let us now discuss the relationship between V_g and V_ip. Noise disturbs an orbit so that it may cross a basin boundary (turning points) with some probability. We denote the standard deviation of the location of the orbit due to noise as d_p, which is proportional to the noise strength s. Since the distance between the orbit and the basin boundary is perturbed by d_p, and the fitness value drops when the orbit crosses the basin boundary, the variance V_ip is estimated to be proportional to (d_p/D)².
Next, we discuss how mutations of the network influence the dynamics. When the network is altered, i.e., a path is added or removed as a result of a mutation in J_ij, there is a variation of order O(1/√N) in the threshold-function term in Eq. (1). This leads to a deviation of the location of the turning points (or basin boundary). We denote this deviation by d_g, which increases with the mutation rate. The variance V_g is estimated to be proportional to (d_g/D)², with the same proportionality coefficient as that between V_ip and (d_p/D)².
Under the presence of noise, there is selection pressure to avoid the turning points (basin boundaries) that lie within the distance d_p of the 'target' orbit. This leads to an increase in D. However, if d_g is larger than d_p, the memory of this distance between the target and the boundaries will not be propagated to the next generation, due to the large perturbation of the original network by mutation. Hence an increase in D (i.e., an increase in robustness) is expected only if d_p > d_g. Since d_p and d_g increase with the noise strength s and the mutation rate μ, respectively, there exists a critical noise strength s_c beyond which this inequality is satisfied. From the relationship between d_{p,g} and V_{ip,g}, the condition for robust evolution is rewritten as V_ip > V_g.
When the condition V_ip > V_g (i.e., s > s_c) is satisfied, the system increases D during evolution. We have computed the temporal evolution of the basin distribution. Over generations, the distribution evolves from the pattern at low-level noise in Fig. 7 to that at large s, characterized by an enhanced peak at F = 0. Accordingly, D increases with generations. Since V_ip ∝ (d_p/D)² and V_g ∝ (d_g/D)², both variances decrease with generations, while V_ip/V_g is kept constant.
DISCUSSION
We have demonstrated the inequality and proportionality between V_g and V_ip through numerical evolution experiments on a gene network. As the phenotypic noise is decreased and the inequality V_ip > V_g is broken, low-fitness mutants are no longer eliminated. This is because the mutants fail to reach the target gene expression pattern, by crossing the boundary of the basin of attraction to the target. When the amplitude of the noise is larger, on the other hand, networks whose dynamics have a large basin volume for the target attractor are selected, and mutants with lower fitness are successively removed through selection. Hence noise increases developmental robustness through evolution, together with genetic robustness.
Although we used a specific example to demonstrate the relationship between V_ip and V_g and the error catastrophe, we expect this relationship to be generally applicable to systems satisfying the following conditions: (i) Fitness is determined through developmental dynamics.
(ii) Developmental dynamics is sufficiently complex so that its orbit, when deviated by noise, may fail to reach the state with the highest fitness.
(iii) There is effective equivalence between mutation and noise in the developmental dynamics with regard to phenotypic change.
Note that the present system, as well as the previous cell model [18], satisfies these conditions. Condition (i) is straightforward in our model, and condition (ii) is satisfied because of the complex dynamics with many turning points in the phase space. Noise in the developmental dynamics sometimes perturbs an orbit across a basin boundary so that it escapes attraction to the target gene expression pattern, while a mutation of the network may also induce such failure by shifting basin boundaries. Hence condition (iii) is satisfied.
When the developmental process fails due to phenotypic noise, the fitness function takes a low value. Evolution under noise acts to prevent such failure within the range of the noise. On the other hand, due to condition (iii), mutation may also lead to such lethality. When the effect of mutation goes beyond the range given by the phenotypic noise, mutants with very low fitness values begin to accumulate. Hence there appears a threshold level of phenotypic noise below which low-fitness (or deleterious) mutants accumulate (or a threshold mutation rate beyond which such mutants accumulate). In this sense, we expect that for robust evolution the inequality V_g < V_ip must be satisfied in order for the low-fitness mutants to be eliminated. Violation of the inequality leads to the accumulation of low-fitness (or deleterious) mutants, a phenomenon known as error catastrophe [28]. Only under the presence of noise in the developmental process do systems acquire robustness through evolution. In other words, developmental robustness to stochasticity in gene expression implies genetic robustness to mutation. Quantitative analyses of stochasticity in protein abundance during the laboratory evolution of bacteria are possible [17,36]. By carefully measuring the variation V_g of a given phenotype among mutants, and comparing it with that of isogenic bacteria, V_ip, one can examine the validity of our conclusion relating V_g and V_ip.
It is worth mentioning that in a class of theoretical models, the fitness landscape is given as an explicit continuous function of the gene sequence (e.g., an energy function in a spin glass [37]), where a minute change in the sequence does not lead to a drastic change in fitness. On the other hand, in a system satisfying (i) and (ii), a small change in genotype (e.g., a single change in a network path) may result in a large drop in fitness, since the fitness is determined after the developmental dynamics. Indeed, mutants with very low fitness values may appear from an individual with a high fitness value through only a single change of a path in the network. Such deleterious mutations are also observed in nature [27].
It is interesting to note that a larger basin of attraction to a target attractor (with the highest fitness value) is formed through a mutation and selection process. As a result, dynamics over the entire phase space are simplified to those having only a few attractors, even though the fitness function is given locally without scanning over the entire phase space. When the time-course is represented as a motion along a potential surface, our results suggest that the potential landscape becomes smoother and simpler through evolution, and loses ruggedness after generations. Indeed, existence of such global attraction in an actual gene network has recently been reported in yeast cell-cycle [38].
Such smooth landscapes were also studied in protein folding [39,40]. Saito et al. [41] observed an evolutionary process from a rugged to a so-called funnel-like landscape in an interacting spin system abstracting protein folding dynamics. Under a general framework of statistical mechanics [42], a relationship between the degree of variance in the coupling coefficients J_ij between spins (corresponding to V_g) and the temperature (i.e., phenotypic noise for spin x_i, corresponding to V_ip) is formulated. Such a relationship may be relevant for understanding the relationship between V_g and V_ip in our study.
According to Fisher's established theorem of natural selection, the evolution speed of a phenotype is proportional to the phenotypic variance due to genetic variation, V_g [25,26,27]. The demonstrated proportionality between V_ip and V_g then suggests that the evolution speed is proportional to the isogenic phenotypic fluctuation, as is also supported by a laboratory experiment on bacterial evolution [17] and confirmed by simulations of a reaction-network model of a growing cell [18].
Isogenic phenotypic fluctuation is related to phenotypic plasticity, the degree of phenotype change in a different environment. Positive roles of phenotypic plasticity in evolution have been discussed [20,43,44,45]. Since susceptibility to environmental change and phenotypic fluctuation are positively correlated according to the fluctuation-response relationship [16,46,47], our present results on the relationship between phenotypic fluctuations and evolution inevitably imply a relationship between phenotypic plasticity and evolution, akin to the genetic assimilation proposed by Waddington [22].
SUPPORTING INFORMATION
Text S1. Derivation of General Relationship on Fluctuations. A mathematical derivation of general relationships among phenotypic variances is presented. Found at: doi:10.1371/journal.pone.0000434.s001 (0.05 MB PDF)
The Multiplicity of High-Mass Stars
We report on an ongoing photometric and spectroscopic monitoring survey of about 250 O- and 540 B-type stars in the southern Milky Way, with the aim of determining the fraction of close binary systems as a function of mass and the physical parameters of the individual components in the multiple systems. Preliminary results suggest that the multiplicity rate drops from 80% for the highest masses to 20% for stars of 3 solar masses. Our analysis indicates that the binary systems often contain close pairs with components of similar mass. This coincidence cannot originate from random tidal capture in a dense cluster but is likely due to a particular formation process for high-mass stars. The large percentage of multiple systems requires a new photometric calibration of the absolute magnitudes of O-type stars.
Introduction
The most massive stars, historically classified as O-type stars, are rare objects with masses above 16 M☉, typically found at large distances from the Sun; early B stars (M > 8 M☉) also contribute to the group of high-mass objects. The formation of these O- and B-type stars is still under debate and can be explained by various models (Zinnecker & Yorke 2007). Both observations of massive circumstellar disks (Shepherd, Claussen & Kurtz 2001; Chini et al. 2004; Cesaroni et al. 2005; Patel et al. 2005; Kraus et al. 2010) and theoretical calculations (Yorke & Sonnhalter 2002; Krumholz et al. 2009; Kuiper et al. 2010) seem to favour the accretion scenario. However, the high multiplicity among high-mass stars (Preibisch et al. 1999; Mason et al. 2009) might alternatively support a merging process of intermediate-mass stars (Bonnell, Bate & Zinnecker 1998).
Massive binary stars are believed to be the progenitors of a variety of astrophysical phenomena, e.g. short gamma-ray bursts (Eichler et al. 1989; Paczynski 1991; Narayan, Paczynski & Piran 1992), X-ray binaries (Moffat 2008), millisecond pulsar systems and double neutron stars (van den Heuvel 2007). Even more relevant, the multiplicity of stellar systems is a crucial constraint for the various star formation scenarios, particularly if the multiplicity fraction is a function of stellar mass. While many massive stars are found to be part of binary or multiple systems, comprehensive statistics on close binaries are still missing. The smallest separations are expected to be around 0.2 AU, resulting in orbital periods of only a few days. Below this minimum distance the binary system will merge to form a single object.
The vast parameter space of possible orbital periods and mass ratios requires different, partly overlapping, complementary methods that have their own limitations and observational biases (Sana & Evans 2011): high-resolution imaging like adaptive optics or interferometric techniques serves systems with wider separations and mass ratios q = M_2/M_1 between 0.01 and 1, while high-resolution spectroscopy is biased toward finding close companions whose mass is a significant fraction of the primary's (q > 0.1). We note that the inclination of the orbit with respect to the observer and the eccentricity are other crucial limitations for the spectroscopic detection of close multiple systems.
An adaptive optics survey of about a third of the known galactic O stars revealed visual companions in 27% of the cases (Sana & Evans 2011). Speckle interferometry of most of the galactic O stars showed companions for 11% of the sample (Mason et al. 2009). Moreover, the same study claimed that 51% of the O-type objects are spectroscopic binaries (SBs), based on an extensive review of the literature. This is in accord with a recent spectroscopic survey finding that among ∼ 240 southern galactic O and WN stars more than one hundred show radial velocity (RV) variations larger than 10 km/s (Barbá et al. 2010). Recently, a high-resolution imaging campaign of 138 fields containing at least one high-mass star yielded a multiplicity fraction close to 50% (Maíz-Apellániz 2010). In summary, the spectroscopic binary frequency of high-mass stars observed and reported in the literature so far is moderately high, while the visual binary fraction is low. Little or no discussion as to why the multiplicity of O-type stars is high has been offered up to now, nor have conclusions been drawn about which of the competing high-mass star formation models best explains the hitherto known trends in the stars' multiplicity.
There have been several surveys of B-star duplicity in the past. In a sample of 109 B2–B5 stars there were 32 (29%) spectroscopic and 49 (45%) visual binaries, yielding a total binary frequency of 74% (Abt, Gomez & Levi 1990). A spectroscopic survey of 83 late B-type stars revealed that 24% of the stars had companions with mass ratios greater than 0.1 and orbital periods less than 100 days (Wolff 1978). A speckle interferometry survey of the Bright Star Catalogue resolved 34 of 245 B stars into binaries, corresponding to a multiplicity fraction of 14% (McAlister et al. 1987, 1993). Another speckle interferometry survey of 48 Be stars revealed a similar binary fraction of 10% ± 4% (Mason et al. 1997). A further comparison study, based on adaptive optics IR imaging and probing separations from 20 to 1000 AU for 40 B and 39 Be stars, derived the same qualitative result, i.e. that the multiplicities of B and Be stars are identical; this time, however, the binary fractions were 29% ± 8% for the B stars and 30% ± 8% for the Be stars (Oudmaijer & Parr 2010). Finally, an adaptive optics photometry and astrometry survey of 70 B stars revealed 16 resolved companions (23%) (Roberts, Turner & Ten Brummelaar 2007). In summary, the overall multiplicity fraction of high-mass stars seems to decrease with stellar mass.
In the present paper we investigate the multiplicity fraction in the stellar mass range of about 3–80 M☉ and for mass ratios q > 0.2, and search for new eclipsing O-type binaries.
The Survey
Our spectroscopic survey comprises 138 O- and 581 B-type stars with V ≤ 8 mag. The O stars were taken from the Galactic O-Star Catalogue V.2.0 (Sota et al. 2008) (GOSC). The B stars were selected from the HIPPARCOS archive: 50% of the B stars form a volume-limited sample with d < 125 pc; the remaining 50% were chosen to provide roughly an equal number of stars for each subclass from B0 to B9. The distribution of visual magnitudes is displayed in Fig. 1, showing that the B-star sample contains on average brighter stars than the O-star sample.
Spectroscopy
Using the high-resolution spectrograph BESO (Fuhrmann et al. 2011) at the Hexapod Telescope of the Universitätssternwarte Bochum near Cerro Armazones in Chile, we obtained 4577 multi-epoch optical spectra. The observing period started in January 2009 and is still ongoing. The spectra cover a wavelength range from 3620 to 8530 Å and provide a mean spectral resolution of R = 50,000. The entrance aperture of the star fibre is 3.4". The integration time per spectrum was adapted to the published visual brightness of each star. Our primary goal was to monitor a large number of stars rather than to obtain a very high S/N for individual stars. As a consequence of this strategy, potential companions fainter than ΔV ∼ 2 mag are barely visible in our spectra, decreasing the chance of detecting SB2s. Converting this brightness difference of 2 mag into a mass difference, we are sensitive to mass ratios q > (0.18−0.40) for O5–O9 stars and q > (0.43−0.55) for B stars. In other words, the detectable companions of an O5 star range from O5 to about B2, those of a B9 star from B9 to about A7.
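The conversion from a detectable magnitude difference to a minimum mass ratio can be sketched with a single power-law mass-luminosity relation L ∝ M^α. This is only a crude stand-in for the spectral-type calibrations actually needed (and α itself varies with mass range), so the exact numbers are assumptions of the sketch and differ somewhat from the ranges quoted above:

```python
def min_mass_ratio(delta_v_limit=2.0, alpha=3.5):
    """Smallest detectable mass ratio q = M2/M1 for a companion that is
    delta_v_limit magnitudes fainter, assuming L ∝ M**alpha."""
    lum_ratio = 10.0 ** (-delta_v_limit / 2.5)   # L2/L1 from the mag difference
    return lum_ratio ** (1.0 / alpha)            # invert the power law

for alpha in (2.5, 3.5, 4.5):
    print(f"alpha = {alpha}: q_min ≈ {min_mass_ratio(2.0, alpha):.2f}")
# alpha = 2.5: q_min ≈ 0.48; alpha = 3.5: q_min ≈ 0.59; alpha = 4.5: q_min ≈ 0.66
```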
Additional spectra were collected during various observing runs with FEROS at the ESO 1.5 m and the MPG/ESO 2.2 m telescopes, both located on La Silla, Chile. Because BESO is a clone of FEROS the instrumental parameters are identical to those described above. These spectra cover a time span between 2006 and 2008. Finally we extracted 824 O-star spectra from the ESO archive covering an observing period from 1999 to June 2011. These spectra were also obtained with FEROS and processed in the same way as the BESO spectra.
To date there are 1849 O- and 2728 B-star spectra in our archive. For each O star we collected on average about ten spectra, with at least two spectra for those objects that were known to be multiple systems before, and up to 39 spectra in cases where multiplicity was not immediately obvious. The spectra are separated by days, weeks and months. So far we have observed 550 stars from our B-type sample, with an average of five spectra per star. We temporarily stopped collecting data for those B stars whose multiplicity became obvious within the first two spectra.
All data were reduced with a pipeline based on the MIDAS package developed for the ESO FEROS spectrograph. A quantitative analysis was done with standard IRAF line-fit routines, allowing the detection of line shifts in single-lined spectroscopic binaries (SB1) and of line deformations and/or separations in double-lined spectroscopic binaries (SB2). For the O- and early B-type stars we used exclusively He lines for identification; He I (λ5875 Å) was particularly useful due to the nearby interstellar Na I doublet (λλ5890, 5896 Å), which allows a sensitive verification of any relative line shift. For later B types we had to rely primarily on hydrogen lines. A typical set of multi-epoch spectra is shown in Fig. 2.
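The basic measurement behind an SB1 detection, fitting a photospheric line and converting its centroid shift into a radial velocity, can be illustrated in a few lines. This is only a schematic stand-in for the IRAF line-fit routines used in the survey; the synthetic spectrum, line depth, and noise level are assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458      # speed of light in km/s
HE_I_REST = 5875.62     # He I rest wavelength in Angstrom

def line_profile(wl, depth, center, sigma, cont):
    """Inverted Gaussian absorption line on a flat continuum."""
    return cont - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def radial_velocity(wavelength, flux):
    """Fit the He I 5875 line and convert the centroid shift to an RV."""
    p0 = [0.2, HE_I_REST, 1.0, 1.0]                  # rough initial guess
    popt, _ = curve_fit(line_profile, wavelength, flux, p0=p0)
    return C_KMS * (popt[1] - HE_I_REST) / HE_I_REST

# synthetic test spectrum: line redshifted by +60 km/s, with noise
wl = np.linspace(5865.0, 5885.0, 400)
true_center = HE_I_REST * (1.0 + 60.0 / C_KMS)
flux = line_profile(wl, 0.25, true_center, 0.8, 1.0)
flux += np.random.default_rng(1).normal(0.0, 0.01, wl.size)
print(f"recovered RV ≈ {radial_velocity(wl, flux):.1f} km/s")
```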
Photometry
Simultaneously with our spectroscopic survey, we are performing photometric monitoring at the robotic VYSOS 6 telescope (Haas et al. 2012), also hosted at the Universitätssternwarte Bochum. Due to the high brightness of the stars, we typically use I- or narrow-band filters to avoid saturation. In a first approach we have concentrated on the O-star sample, with the goal of obtaining 30−50 photometric values in consecutive nights, a strategy which is obviously biased against periods longer than one month. In a second photometry run we will follow those stars that show slow variations suggesting longer periods. To give an example of the current quality of our data, we show the light curve of HD 100213 in Fig. 3. We obtain a period of P = 1.39 days, which perfectly agrees with the results of Linder et al. (2007).
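For unevenly sampled light curves like these, a standard way to recover such a period is a Lomb-Scargle periodogram. The sketch below uses astropy on a synthetic sinusoidal light curve with HD 100213-like numbers; the cadence, amplitude, and noise are assumptions, and for an eclipsing binary the true orbital period is often twice the strongest photometric period:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 40.0, 45))                  # ~45 nights of data
true_period = 1.39                                       # days
mag = 8.3 + 0.1 * np.sin(2 * np.pi * t / true_period)    # toy light curve
mag += rng.normal(0.0, 0.01, t.size)                     # photometric noise

frequency, power = LombScargle(t, mag).autopower(
    minimum_frequency=1 / 30.0,   # periods up to ~30 d
    maximum_frequency=5.0)        # periods down to 0.2 d
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period ≈ {best_period:.2f} d")
```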
Spectroscopy
We detect RV variations for about 75% of all O stars brighter than V = 8 mag, while an additional 6% are potential candidates for variable RV. For B0 stars the variability fraction is 48% (67%), decreasing to 12% (15%) for B9 stars (Fig. 4). In general, our observed multiplicities are lower limits, due to the unknown orbit inclinations and the constraints on the detectable minimum mass ratio.
The overall percentage of SBs among O stars is higher than found before in similar investigations; this is due to the numerous spectra obtained per star, which mainly reveal variations within days and weeks that were not obvious in previous studies. As mentioned above, Mason et al. (2009) report a general spectroscopic multiplicity fraction of 51% for the population of Galactic O stars. Individual nearby clusters were found to have binary fractions between 0% and 63%, with an average value of 44% (see e.g. Sana & Evans 2011 for an overview). Inspecting those 60 O stars that did not show any RV variations throughout the last six years, we find that an additional 13 of the 29 stars observed through adaptive optics measurements (Mason et al. 1997) or speckle interferometry (McAlister et al. 1993) possess visual companions. These complementary data increase the total percentage of multiple systems for stars brighter than V = 8 mag to 91%, leading us to suggest that basically all O stars are members of multiple systems. This finding also goes beyond the results summarized by Sana & Evans (2011), who obtain a total minimum multiplicity fraction close to 70%.
Among the SBs in the present study, 65% (72%) of the spectra for O stars contain more or less separated multiple lines (SB2), reflecting that the majority of systems contain pairs of similar mass. This is in agreement with results by Kobulnicky & Fryer (2007), who found that massive stars preferentially have massive companions. From our current time coverage, however, it is not yet possible to derive orbital periods and constrain the semi-amplitudes of the radial velocities, and hence accurate binary component mass ratios. Although the general multiplicity fraction seems to decrease with spectral type, the relatively high fraction of SB2s compared to SB1s remains valid down to the latest B types. Due to the limited number of spectroscopic binary studies for B stars, a comparison between our work and previous investigations is difficult. Kouwenhoven et al. (2005) studied the binary population in the Scorpius OB2 association. Summarizing the results from all techniques, they find multiplicity fractions of ∼ 80% and ∼ 50% for B0–B3 and B4–B9, respectively. While the numbers for the early B types are compatible with our study, the multiplicity rate for the late B types is slightly higher in Sco OB2; the latter difference might be due to the different observing techniques, because without the adaptive optics data of Kouwenhoven et al. (2005) the multiplicity rate would drop by about 10% for the late B types. Kouwenhoven et al. (2007) repeated the analysis of the primordial binary population in Sco OB2: one set of spectroscopic data in this study comes from Levato et al. (1987), who found that the binary fraction is at least 30% for all early-type stars but might be as high as 74% if all reported RV variations were due to binaries. Another spectroscopic study (Brown & Verschueren 1997) included in the analysis by Kouwenhoven et al. (2007) yields a similar range between 28% and 76%. However, the dependence on stellar mass has not been addressed in these studies. Miroshnichenko (2010) investigated the multiplicity of bright galactic Be stars and found that their binary fraction should be at least 50%, independent of spectral type. In contrast, the AO study of Oudmaijer & Parr (2010) yielded a binary fraction of only 30% ± 8% for 39 Be stars. The same authors claim that the binarity of normal B stars is 29% ± 8% and thus identical to that of Be stars. Wheelwright, Oudmaijer & Goodwin (2010) investigated the binarity of 25 Herbig Be stars with spectro-astrometry and derive a high binary fraction of 74%. Obviously, the multiplicity of B stars is still a subject that needs further investigation; the existing results, including ours, likely suffer from various biases. We might actually expect to observe more binary stars among late-type B stars compared to O stars; however, their mass ratios must then be larger than what is covered by our study.
Photometry
Currently our analysis comprises 233 (out of 249) O-type stars from the GOSC. We found variability for 56 stars, corresponding to about 24% of the sample. Lefèvre et al. (2009) argue that 17% of all high-mass stars with magnitudes V < 8 can be considered as eclipsing binaries. We could verify the orbital periods of four previously known eclipsing binaries: CPD−59 2603 (Rauw et al. 2001), HD 093206 (Walker & Merino 1972), HD 100213 (Terrell et al. 2003), and HD 152219 (Sana et al. 2006). Fig. 5 shows the light curve of CPD−41 7742, an early-type spectroscopic binary in the young open cluster NGC 6231; we obtain a period of P = 2.44 days, in agreement with the spectroscopic result by Sana et al. (2003).
The fact that most O-type stars occur in binaries suggests that the existing photometry is contaminated and thus their absolute magnitude calibration has been overestimated; in the worst case the tabulated O-star magnitudes are wrong by 0.75 mag at each wavelength. We aim at a new photometric calibration for O-type stars on the basis of our spectroscopic and photometric surveys. Fig. 4 shows a clear trend of the multiplicity to decrease with mass. Future observations will show whether part of this trend is due to our observing strategy and to the fact that the number of useful spectral lines decreases from early O to late B. One may also speculate whether the decrease of the binarity fraction towards lower masses is an evolutionary effect: an O-type star, whether on the main sequence or not, is much younger than a late B-type star on the main sequence and is likely more evolved due to its rapid hydrogen burning. If evolution "destroys" binarity, then evolved stars (i.e. luminosity classes III and I) should have lower binarity fractions than stars on the main sequence. Fig. 6 shows the distribution of luminosity classes I, III, and V for the different spectral type bins in our sample. Obviously, those spectral types with the highest binarity fraction (O3–B1) have the highest fraction of evolved stars, while late B-type stars with the lowest binarity fraction comprise only a few evolved stars. This excludes the possibility that evolution influences binarity. On the other hand, one can exclude a pure age effect, as SB2 binaries are all very tight and hard to break up. They could dynamically interact in dense cluster cores and get ejected, but breaking up such "hard" SB2s is not very likely.
Discussion
The multiplicity of high-mass stars seems to depend on their environment. It was found that the binary frequency among O stars in clusters and associations is much higher than among field stars (which have no apparent nearby cluster) and runaway stars (O stars with peculiar RVs in excess of 40 km/s or remote from the Galactic plane). The spectroscopic binary fractions in clusters and associations, in the field, and among runaways obtained in two different studies (Gies 1987; Mason et al. 1998) were 55 (61)%, 45 (50)% and 19 (26)%, respectively. For the classification of the environment we have used the designations in the GOSC (Sota et al. 2008), refined by a recent investigation of the origin of field O stars (Schilbach & Röser 2008). The fractions of SBs within the individual groups are: clusters 11% (N = 15), associations 59% (N = 82), field 6% (N = 8), and runaways 21% (N = 29); we have only counted the secure classifications (black columns in Fig. 4). Our result for clusters is based on rather low numbers. Taking the average of clusters and associations, our results are compatible with previous studies (Gies 1987; Mason et al. 1998). The same holds for our binary fraction among the runaway stars; Gies (1987) obtained 19 ± 5%, while Mason et al. (1998) found 26 ± 5%.
While the direct observation of disks around the earliest O stars will remain a fundamental challenge for testing model computations of individual high-mass systems, binary star statistics offer other constraints on hydrodynamical simulations of star-forming clusters. Early calculations of the fragmentation of an isothermally collapsing cloud have already shown that binary systems and hierarchical multiple systems are frequently obtained (Larson 1978). Although the variety of processes and their sequences are manifold (Bate, Bonnell & Bromm 2002), there is consensus that the accretion processes favor the formation of binaries with mass ratios q ∼ 1 for systems with separations below 10 AU (e.g. Clarke 2007 and references therein).
Nevertheless, the simulations almost never produce binaries with q < 0.5. This appears to be a general problem of turbulent fragmentation calculations, which seems to originate from gas accretion onto a proto-binary (Goodwin, Whitworth & Ward-Thompson 2004). Independent of how the binary system was formed, the infalling material has a high specific angular momentum compared to the binary and thus will preferentially accrete onto the secondary (Bate et al. 2002). Bate (2000) even predicts that binaries will evolve rapidly towards q ∼ 1 regardless of their initial mass ratio. For the special case of massive stars, recent theoretical calculations claim that close high-mass stellar twins can in principle be formed via fragmentation of a disk around a massive protostar and subsequent mass transfer in such close, rapidly accreting oversized proto-binaries (Krumholz & Thompson 2007). This scenario provides a natural explanation for the numerous high-mass spectroscopic binaries with mass ratios close to unity. Most likely, mass transfer will continue during the entire life of such a close system, leading to continuously changing properties of the individual components and thus to a different evolution compared to a single star.
The high binary frequency for clusters found in the present study excludes random pairing from a classical IMF as a process to describe the similar-mass companions in massive binaries. Our results make binary-binary interactions inside clusters very probable and can thus explain the high binary fraction among runaway stars. Because ejection requires a cluster origin of the binaries, it supports the model of competitive accretion within the cluster environment. Finally, the small number of field O stars (only four "certified" stars currently remain in this category) suggests that probably all O stars are born in associations or clusters.
Radiation-hydrodynamic simulations show that, during the collapse of a massive prestellar core, gravitational and Rayleigh-Taylor instabilities channel gas onto the star system through non-axisymmetric disks. Gravitational instabilities lead to a fragmentation of the disk around the primary star and form a massive companion. Radiation pressure does not limit stellar masses, but the instabilities that allow accretion to continue lead to small multiple systems (e.g. Krumholz et al. 2009).
The current study was a pure discovery project, aimed exclusively at the multiplicity statistics in the mass range 3 < M [M☉] < 80. In a next step we will study the orbital properties and the spectral types of the individual components to obtain a statistically relevant archive of high-mass binary systems.
Numerical Study Of Thermal Behavior Of Anisotropic Medium In Bidirectional Cylindrical Geometry
In this work, we investigate the transient thermal behavior of a two-dimensional cylindrical anisotropic medium subjected to prescribed temperatures at its two end sections and to a heat flux over the whole lateral surface. Due to the complexity of solving the anisotropic heat conduction equation analytically, a numerical solution has been developed. It is based on a coordinate transformation that reduces the anisotropic cylinder heat conduction problem to an equivalent isotropic one, without complicating the boundary conditions but with a more complicated geometry. The heat conduction equation for this virtual medium is solved by the alternating-directions method. The inverse transformation then makes it possible to determine the thermal behavior of the anisotropic medium as a function of the study parameters: diagonal and cross thermal conductivities, and heat flux.
Introduction
Anisotropic materials are present in various industrial applications. Such materials exist either in the natural state, like wood and quartz, or as industrial products, like fibrous materials. The thermal conductivity of these kinds of materials varies with direction. This makes the study of heat conduction delicate, due to cross-derivative terms of the temperature with respect to the space variables. When the cross-derivative term is absent, as in the case of orthotropic media, the analysis is simplified, and many studies have been reported in references [1][2][3].
The case of an anisotropic medium has been studied in the context of both steady-state [4][5][6][7][8] and transient conduction regimes [9]. The solution in the latter case is limited to infinite geometries; otherwise there is no analytical solution, hence the need for a numerical solution, the object of our study.
Problem formulation
The cylindrical anisotropic medium of length L and radius b is illustrated in Fig. 1. The left and right sections are maintained at temperatures T_L and T_R, whereas a radial heat flux is applied on the lateral surface.
Equation (1) is associated with the boundary conditions at the end sections and the lateral surface. As the resolution of Eq. (1) is tricky due to the cross-derivative term, we seek to bring it back to a form similar to that of an isotropic medium, for which the cross term is absent. This is achieved by applying the linear coordinate transformation of [6]. With this coordinate transformation, Eq. (1) can be written as an isotropic heat conduction equation in the new coordinate system (R, Z), and the heat fluxes in the new coordinate system follow accordingly.
These equations are associated with the domain of Fig. 3 in the (R, Z) plane. The non-dimensional forms of Eqs. (5), (6) and (7), together with the associated boundary conditions, are obtained by introducing the non-dimensional variables. The determination of the temperature profile in the non-dimensional isotropic space then allows us, by the inverse transformation, to deduce the evolution of the temperature in the dimensional anisotropic medium.
Numerical solution and Validation
The numerical method used to solve the above heat conduction equation is the alternating-direction implicit (ADI) method. This method is unconditionally stable and gives rise to tridiagonal systems, which are solved by the Thomas algorithm [10].
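The inner kernel of each ADI half-step is the solution of one tridiagonal system per grid line. A minimal sketch of the Thomas algorithm (forward elimination followed by back substitution) is given below; the test system is an arbitrary example, not a discretization taken from this paper:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.
    a: sub-diagonal (n-1), b: main diagonal (n), c: super-diagonal (n-1)."""
    n = len(b)
    cp = np.empty(n - 1)   # modified super-diagonal
    dp = np.empty(n)       # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# example: the 1D Laplacian stencil (-1, 2, -1) with unit boundary loads
a = np.full(4, -1.0); b = np.full(5, 2.0); c = np.full(4, -1.0)
d = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
print(thomas(a, b, c, d))   # -> [1. 1. 1. 1. 1.]
```

In an ADI sweep, one such solve is performed along every row of the grid in the first half-step (implicit in R) and along every column in the second (implicit in Z).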
To ensure the validity of the numerical analysis, the numerical results were compared with the analytical solution available for a steady-state orthotropic medium. The validation of the numerical code was made by comparing this analytical solution with the numerical solution for a low value of the cross thermal conductivity.
Results and discussion
The numerical code allows studying the transient and steady-state thermal behavior of the anisotropic medium according to the study parameters: the ratio of the diagonal thermal conductivities, the cross (anisotropic) conductivity, and the applied heat flux.
Conclusion
This study concerns the thermal behavior of an anisotropic medium of two-dimensional cylindrical shape, subjected to a lateral flux and whose end sections are held at imposed temperatures. This behavior was evaluated by examining the effect of the diagonal and cross thermal conductivities, in addition to that of the flux. The numerical results show that the anisotropy affects the thermal level as well as the shape of the isotherms.
Fig. 1 :
Fig. 1: Three-dimensional representation of the anisotropic cylinder. Due to the symmetry, the problem is reduced to the two-dimensional (r, z) domain schematized below:
Fig. 2 :
Fig. 2: Bidirectional representation of the anisotropic cylinder. The thermal behavior of the medium is governed by the conduction equation, where k_rr, k_zz and k_rz are, respectively, the thermal conductivities in the r and z directions and the cross conductivity; the heat fluxes in the r and z directions follow from Fourier's law.
Fig. 3 :
Fig. 3: Computation domain of anisotropic medium in the virtual space.By introducing the non-dimensional parameters:
. 2 and a form factor 2 G
The comparison, concerning an orthotropic medium with K_rr/K_zz = 3 and a given lateral flux, is reported in Fig. 4 and shows good agreement.
Fig. 4 :
Fig. 4: Validation in the case of an orthotropic medium.
Studies on Some Physico-Chemical and Microbiological Characteristics of Potable Water Used in Some Rural Areas of Surat District ( Gujarat )
A physico-chemical and microbiological study of the ground water of some villages of Surat district of Gujarat state (India) has been made. Physico-chemical parameters such as colour, odour, taste, temperature, pH, electrical conductivity, TS, TDS, total hardness, total alkalinity, calcium, magnesium, iron, sodium, potassium, chloride, sulphate, nitrate, fluoride and silica were determined. In the microbiological study, total coliforms, E. coli, sulfate-reducing anaerobic bacteria, Pseudomonas aeruginosa, yeast and mould were investigated. Samples were taken from ten sampling points in ten different villages, viz. Parvat (S-1), Kharvasa (S-2), Bonand (S-3), Vesu (S-4), Amroli (S-5), Kadodara (S-6), Chalthan (S-7), Variyav (S-8), Gaviyar (S-9) and Bhairav (S-10). Samples were taken four times a year, in the months of May, August, November and February, to check for seasonal effects. In the village of Gaviyar, the Gujarat Water Supply and Sewerage Board has set up a treatment plant to supply good-quality potable water to a few surrounding coastal villages. In all other cases samples were taken from borewells. The study reported here is for samples taken in May 2004 and August 2004. For colour, iron, sulphate, nitrate, fluoride and silica, instrumental methods like spectrophotometry were used. A "Hach-Odyssey spectrophotometer", which has the facility to store calibration curves and can display the value for a parameter directly, was used. In the present study, programmes of "Hach" with their reagents were used, while some programmes were prepared by us using our own reagents. This is an excellent instrument and its results are validated by USEPA. Sodium and potassium were determined using a flame photometer. It was found that some water samples have higher TDS, chlorides, total hardness and total alkalinity than the permissible limits. In no case were samples found to contain significant quantities of bacteria, and the water was palatable from this point of view.
Introduction
In continuation of earlier studies on ground water 1, here we report the physico-chemical as well as microbiological studies of potable water used in some rural areas of Surat district, Gujarat. Because of geographical isolation and remoteness, people residing in rural areas mostly do not have access to safe drinking water. In the absence of a fresh water supply, the people are forced to take water from any source that lies near their village. In most of the interior rural areas, borewell water is used for drinking and other domestic purposes. Borewell water is underground water that comes mainly from the seepage of surface water and is held in subsoil and pervious rocks. Borewell water is generally of good quality and is difficult to pollute. The use of fertilizers, pesticides and insecticides in rural areas, manure, lime, septic tanks, refuse dumps, etc. are the main sources of borewell water pollution 2. The water used may be unsafe chemically as well as microbiologically. Chemically unsafe water shows long-term and slow effects, while microbiologically unsafe water creates short-term problems such as dysentery, diarrhea, jaundice, gastrointestinal disorders, fever and amoebiasis, which may assume epidemic proportions 3. Work on microbiological pollution is still lacking. Kaushik and Prasad 4, Thapliya et al. 5, Shrivastav et al. 6, Riccharia and Mishra 7, Garoda et al. 8 and J. Hussain et al. 9 are among the few workers who have worked on the microbiological quality of water.
Experimental
Water samples were collected in the first week of May 2004 and the first week of August 2004. The villages selected were Parvat (S-1), Kharvasa (S-2), Bonand (S-3), Vesu (S-4), Amroli (S-5), Kadodara (S-6), Chalthan (S-7), Variyav (S-8), Gaviyar (S-9) and Bhairav (S-10). For physico-chemical analysis, water samples were collected in properly washed polyethylene bottles, while for microbiological analysis sterile glass bottles were used. Standard procedures were adopted for both the physico-chemical and microbiological analyses. 10 For the spectrophotometric determination of colour, fluoride, iron, nitrate, sulphate and silica, a "Hach-Odyssey Spectrophotometer (USA)" was used. This instrument has the facility to store calibration curves and can display the value for a parameter directly. In the present study, programmes of "Hach" with their reagents were used, while some programmes were prepared by us using our own reagents. This is an excellent instrument and its results are validated by USEPA. Sodium and potassium were determined with the help of a microprocessor-based flame photometer. Calcium, magnesium, total hardness, chloride and total alkalinity were estimated by titrimetric methods.
For the microbiological study, the modern Membrane Filter Technique (MFT) was used. All the culture media used were "Hi-Media" products.
Results and Discussion
All metabolic and physiological activities and life processes of aquatic organisms are generally influenced by water temperature. In the present study, temperature ranged from 27-31 ºC.
The pH of a water body indicates the degree of deterioration of water quality. In the present study pH ranged from 7.07-8.10, which lies within the range prescribed by ISI 11, i.e. 6.5-8.5. The specific conductivity (SC), which is a measure of the dissolved ion concentration, was much higher than the permissible limits; in the present study it ranged from 271-3130 µS/cm. The maximum SC was observed at Vesu (S-4) during the study period. According to WHO 12 and ISI, the total dissolved solids (TDS) value should be less than 500 mg/L for drinking water. In the present study it ranged from 110-1524 mg/L; most of the samples have higher values of TDS than the prescribed value.
Total hardness in water is mainly due to the salts of calcium and magnesium. In the present study it ranged from 90-480 mg/L; some samples have higher values than that prescribed by ISI, which is 300 mg/L. The limits for calcium and magnesium have been prescribed in the ranges 75-200 mg/L and 50-100 mg/L, respectively. In the present study, calcium and magnesium ranged from 14-100 mg/L and 7.29-97.2 mg/L, respectively. Total alkalinity of all samples ranged from 90-470 mg/L; all the samples have higher values than the prescribed limit of 200 mg/L, except S-9. The chloride content in the samples ranged from 33.75-795.50 mg/L. The highest chloride was observed in the sample from Vesu (S-4); this may be due to its location near the sea. The concentration of sulphate in all samples was within the prescribed limit of 200 mg/L, varying from 4.2-104.2 mg/L during the study period.
Nitrate is one of the major constituents of organisms, along with carbon and hydrogen, in amino acids, proteins and organic compounds in ground water. In the present study nitrate ranged from 1.99-68.66 mg/L, which lies under the prescribed limits. Fluoride limits in drinking water range from 1.0-1.5 mg/L; in the present study it ranged from 0.21-1.20 mg/L, which lies within the range. Iron is one of the most abundant elements in the earth's crust, and iron deficiency in the human body causes anaemia. In the present study it ranged from 1.0-1.24 mg/L, which lies under the limits prescribed by WHO and ISI.
Sodium and potassium ranged from 31.6-295 mg/L and 0.3-174.4 mg/L, respectively. Sodium content above 50 mg/L makes the water unsuitable for drinking purposes. The ground water of Vesu was found to have higher concentrations of sodium and potassium. Sodium is an important element that influences soil quality and plant growth, either by affecting the permeability of soil through clogging or by replacing other cations. The extent of replacement of other cations by sodium is denoted by the sodium adsorption ratio (SAR), calculated by the following equation as described by Richards 13: SAR = Na⁺ / [(Ca²⁺ + Mg²⁺)/2]^0.5, where Na⁺, Ca²⁺ and Mg²⁺ are in meq/L. SAR in the present study ranged from 1.154-7.294. The US Salinity Laboratory recommended a classification of water according to the value of SAR; in the present study SAR was found below the prescribed limit.
The concentrations of bicarbonate and carbonate also influence the suitability of water for irrigation purposes 13,14. One of these empirical approaches is based on the assumption that all Ca²⁺ and Mg²⁺ precipitates as carbonate. Considering this hypothesis, Eaton 14 proposed the concept of residual sodium carbonate (RSC) for the assessment of high-carbonate waters. RSC is calculated by the following formula: RSC = (CO₃²⁻ + HCO₃⁻) − (Ca²⁺ + Mg²⁺). Water with high RSC will have high pH and makes soil infertile by depositing black alkali on the surface. According to a classification made by the United States Salinity Laboratory, water samples are safe for irrigation purposes with an RSC value below 1.25 meq/L, while samples with an RSC value above 2.5 meq/L are unsuitable for irrigation. In our study area, RSC ranged from −1.977 to 5.807 meq/L.
Percentage sodium (PS) is another important factor in studying sodium hazard. It is calculated as the percentage of sodium and potassium against all cationic concentrations, and is also used for judging the quality of ground water for agricultural purposes. The use of high-PS waters for irrigation stunts plant growth. It is calculated by the following formula.
PS = [(Na⁺ + K⁺) / (Ca²⁺ + Mg²⁺ + Na⁺ + K⁺)] × 100. In the present study, PS ranged from 23.469-74.338%. All the samples were found within the good-to-permissible limit, except S-4.
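The three irrigation-quality indices above are simple arithmetic on the major-ion concentrations, as the sketch below shows. The sample values are hypothetical, chosen only to exercise the formulas, and are not measurements from this survey:

```python
def sar(na, ca, mg):
    """Sodium adsorption ratio (Richards); all ions in meq/L."""
    return na / ((ca + mg) / 2.0) ** 0.5

def rsc(co3, hco3, ca, mg):
    """Residual sodium carbonate in meq/L; > 2.5 is unsuitable for irrigation."""
    return (co3 + hco3) - (ca + mg)

def percent_sodium(na, k, ca, mg):
    """Percentage sodium against all cations; all ions in meq/L."""
    return 100.0 * (na + k) / (ca + mg + na + k)

# hypothetical sample (meq/L)
na, k, ca, mg, co3, hco3 = 6.0, 0.2, 3.0, 2.5, 0.5, 4.0
print(f"SAR = {sar(na, ca, mg):.2f}")                  # 3.62
print(f"RSC = {rsc(co3, hco3, ca, mg):.2f} meq/L")     # -1.00 (safe)
print(f"PS  = {percent_sodium(na, k, ca, mg):.1f} %")  # 53.0
```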
Coliforms generally occur in drinking water due to contamination by sewage water or unhygienic practices; coliforms in drinking water can cause amoebic dysentery and various other pathogenic complications. In our present study they were not observed. E. coli likewise occurs in drinking water due to contamination by sewage water or unhygienic practices. Three types of disease may be produced: (i) abscesses in internal organs, septicemia, endocarditis and meningitis; (ii) a severe and often fatal type of epidemic diarrhea in infants; and (iii) sporadic, non-epidemic summer diarrhea, which occurs in children during their second and third summer of life. As coliforms were not observed, E. coli was also absent in the present study.
Fungi are present in, and have been recovered from, diverse, remote, and extreme aquatic habitats, including lakes, ponds, rivers, streams, estuaries, marine environments, wastewaters, sludge, rural and urban stormwater runoff, well waters, acid mine drainage, asphalt refineries, jet fuel systems, and aquatic sediments. The association between fungal densities and organic loading suggests that fungi may be useful indicators of pollution. This organism often occurs in human faeces, but in lower numbers than coliforms, and indicates faecal contamination. It helps in checking the reconstitution of rehydration mixtures, baby foods and pharmaceutical preparations, as well as in the surveillance of bottled water. In the present study it was absent.
Aerobic microbial counts are used to assess the general bacterial content of water. A sudden increase in the colony count from a groundwater source may be an early sign of pollution of the aquifer, and the count is useful in evaluating the efficiency of water treatment processes (coagulation, filtration, and disinfection). In the present study it was determined at two temperatures, 20 ºC and 37 ºC. The counts were found within the limits.
Table 6: Microbiological analysis report of potable waters in August 2004.
Emotional Intelligence and Grit among Young Adults: A Correlational Study
Emotional Intelligence (EI) is the ability to regulate one's moods, prevent distress from impairing thought, control impulses, delay gratification, and empathize, which may affect Grit, the propensity to pursue long-term goals with persistence and unwavering passion. Aim: To explore the relationship between Emotional Intelligence and Grit among Young Adults. Method: The study comprised 136 young adults (76 females and 60 males) who were selected through convenience sampling. The Brief Emotional Intelligence Scale (BEIS-10) and the short Grit Scale (Grit-8) were administered to the participants. Results: The findings of the inferential statistics indicate that there was a significant positive correlation between Emotional Intelligence and Grit (r = .188; p < 0.05). A significant gender difference exists in EI between Females (M = 33.42) and Males (M = 28.95); t = 2.251, p = .026; and in Grit between Females (M = 27.21) and Males (M = 25.53); t = 2.009, p = .047. Conclusion: It was concluded that Emotional Intelligence and Grit have a significant positive relationship. There is a significant gender difference in Emotional Intelligence and Grit among Young Adults.
lack of positive feedback. It can help to "predict an individual's achievement in stimulating areas over and beyond measures of aptitude," explains a recent 2020 study. It is linked with, but different from, conscientiousness, resilience, aspiration, hardiness, self-control, interest, endurance, and motivation. It is the predisposition to pursue long-term goals with determination and continuing passion. In terms of role demonstrating and inventing, as well as inspiring, empowering and backing up supporters, grit is an important predictor of leadership behavior and projects both practical and theoretical implications (Caza & Posner, 2018).
"Perseverance of effort" and "consistency of interest" are two of the lower-order features that make up the higher-order construct known as "grit." These two characteristicsperseverance and consistency-refer to the propensity to work hard despite setbacks and the propensity to hold onto goals and interests over the long term, respectively. Both are believed to play a role in success: consistency because it typically takes many hours of deliberate practise to become proficient in a skill, and persistence because achieving mastery in a field frequently involves initial failures that the person must persevere through (Ericcson, Krampe, & Tesch-Römer, 1993). That is to imply, people who either give up when faced with challenges or frequently switch their pursuits are less likely to ever practice something deliberately enough to reach high levels of performance. Indeed, emphasized that earlier studies of highly successful people had long noted the significance of grit for achievement (Howe, 1999). came to the following conclusion after reviewing the literature of the time that: "The presence of a general trait of Grit, saturates all behavior of the individual, has not been established, while evidence both for and against such an idea has been revealed." Positive psychology has recently rekindled interest in the empirical investigation of character in general and the quality of perseverance . Matthews, and Kelly developed the concept of "grit," which they characterized as "trait-level perseverance and passion for long-term goals." They demonstrated that, in addition to measures of talent, grit predicted success in difficult domains.
Grit contrasts with the need for achievement, which McClelland (1961) defined as the need to achieve attainable goals that permit quick feedback on performance. People with higher levels of grit intentionally set very long-term goals for themselves and do not deviate from them even in the absence of adequate feedback, in contrast to people with higher levels of need for achievement, who pursue goals that are neither too simple nor too difficult. The need for achievement is by definition an unconscious need for inherently rewarding activities, and as such it cannot be measured via self-report techniques (McClelland, Koestner, & Weinberger, 1992). Grit, by contrast, can refer to commitment to either implicitly or overtly rewarded aims.
The present study can be helpful for counsellors, clinical psychologists, educational institutions, and workplaces by providing insight and awareness about this relationship. Establishing a relationship between Emotional Intelligence and Grit can pave the way towards the formulation of therapeutic interventions for individuals vulnerable on these traits during adolescence, thus leading to healthier coping mechanisms and psychosocial skills, and a better-adjusted personality during the transition from adolescence to young adulthood.
Method
The aim of the present study was to explore the relationship between Emotional Intelligence and Grit among Young Adults. The objectives of the study were to understand the relationship between Emotional Intelligence and Grit and the gender differences among young adults. Null hypotheses were formed. The sample was collected from the universities and colleges of Lucknow; 136 individuals were selected for the study (N = 136). These young adults filled in the Brief Emotional Intelligence Scale-10 (BEIS-10) and the Grit questionnaire. The data for this study were analysed using the mean, standard deviation, independent-samples t-test, and Pearson's correlation on IBM SPSS Version 20.
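The same analyses are straightforward to reproduce outside SPSS. The sketch below uses scipy with synthetic placeholder scores (random normals loosely shaped like the reported group means), so the printed statistics will not match the paper's values:

```python
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(3)

# placeholder scores standing in for the real BEIS-10 / Grit-8 data
ei_f, grit_f = rng.normal(33.4, 12.0, 76), rng.normal(27.2, 4.0, 76)  # females
ei_m, grit_m = rng.normal(29.0, 12.0, 60), rng.normal(25.5, 4.0, 60)  # males

ei = np.concatenate([ei_f, ei_m])
grit = np.concatenate([grit_f, grit_m])

r, p = pearsonr(ei, grit)               # EI-Grit association
t_ei, p_ei = ttest_ind(ei_f, ei_m)      # gender difference in EI
t_gr, p_gr = ttest_ind(grit_f, grit_m)  # gender difference in Grit
print(f"r = {r:.3f} (p = {p:.3f}); EI: t = {t_ei:.2f} (p = {p_ei:.3f}); "
      f"Grit: t = {t_gr:.2f} (p = {p_gr:.3f})")
```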
Results
Statistical analysis of the data was done with SPSS version 20. Pearson's correlation analysis was used to study the relationship between Emotional Intelligence and Grit. Independent-samples t-tests were performed, together with descriptive statistics, to examine the mean differences between males and females in Grit and in Emotional Intelligence. Represented in the tables below are the analysis of the data and its results. Figure 3 depicts the graphical representation of the mean gender difference in Grit among Young Adults.
Discussion
The aim of the present study was to explore the relationship between Emotional Intelligence and Grit among Young Adults. There was no significant mean difference in age, indicating that the sample was comparable and well matched. The average age of approximately 20 years gives us an appropriate range to assess and understand the changes that may occur in the future or may be plausible.
In the present study, the mean score of Emotional Intelligence for Females was 33.42, while for Males the mean was 28.95 ± 12.06 (Tables 1 and 2). As illustrated by Figure 2, the mean score of females on Emotional Intelligence was higher than that of males, indicating that females during young adulthood have higher emotional intelligence. Thus, their ability to understand their own and others' emotions is better than that of males, and they show better problem solving when dealing with socio-emotional problems. The findings are supported by the literature, as quoted ahead: "Female students have higher scores in their Emotional Intelligence, hence depicting higher self-control, self-awareness and social awareness" (Molaie, Asayesh, & Ghorbani, 2012; Das & Sahu, 2015; Bindu & Thomas, 2006; Fida, Ghaffar, Zaman, & Satti, 2018). Contrarily, a study by Ahmad et al. in 2009 showed contradictory results, as they found higher emotional intelligence in males than in females, indicating that in comparison to women, men exhibit more assertiveness, self-awareness, independence, and situational management. For Grit, the mean score for Females was 27.21 and for Males 25.53 (Table 1). As illustrated by Figure 3, the mean score of females on Grit was higher than that of males, indicating that females are grittier and hence show higher non-cognitive skills like resilience and persistence when it comes to achieving long-term goals and putting in consistent effort towards those goals. In support of the current findings, the literature notes that "girls seem to work harder over longer time and with more focus than boys" (Sigmundsson et al., 2017b; Sigmundsson et al., 2018; Sigmundsson, Haga, & Hermundsdottir, 2020). Previous research on grit and gender has also shown conflicting findings, with either no association between the two or somewhat higher grit ratings for females than males (Batres, 2011; Bazelais et al., 2016; Flanagan & Einarson, 2017; Sigmundsson, Guðnason, & Jóhannsdóttir, 2021). Therefore, it is not surprising that there is a significant difference in grit between males and females, as it may depend on other features like culture, nurture, and early life experiences.
Conclusion
The findings from the present study reflect that among Young Adults, Emotional Intelligence and Grit had a significant positive correlation, which indicates that individuals who score higher on the emotional intelligence scale tend to be grittier, and vice versa. This suggests that an absence of emotional awareness, or an inability to identify one's own emotions, may increase the chances of giving up and of decreased determination.
Emotional Intelligence was found to be higher in females, which reflects how society expects more empathy, emotional insight, and sensitivity from females than from males. Grit was also found to be higher in females, which again lies in line with the general understanding that females may aim higher and achieve better if they set their mind to it without giving up, whereas males tend to get distracted or give up more easily.
Limitations
The findings of the study cannot be generalized, as the sampling method employed was convenience sampling and the sample size was comparatively small. The sample was limited to a particular geographical area, Lucknow. The absence of equal numbers of males and females could have potentially affected the results of the study. More extensive tools could be employed to study each variable in detail for the purpose of generalization and a deeper understanding. The data were largely self-reported, which may increase the chances of reporting bias.
Implication and Future Directions
This study helps in understanding the inter-relationship between emotional intelligence and grit. The personality of young adults is still in a transition phase; thus an intervention at this stage can help achieve better emotional adjustment in adulthood. As the emotional intelligence of the individual is dynamic, young adulthood may be the suitable stage at which to help enhance these skills. The study gives an Indian perspective on the occurrence of these understudied variables among youth and provides researchers with a base for research on a larger scale.
A study based on the sub-dimensions can be conducted using the same data to understand the finer details and the relationships among the variables. A longitudinal study could be conducted on adolescents to help in understanding the progression of these variables and their interdependence. This would also give us a better understanding of the role of Emotional Intelligence as a protective factor.
Preparation of a superposition of squeezed coherent states of a cavity field via coupling to a superconducting charge qubit
The generation of nonclassical states of a radiation field has become increasingly important in the past years, given its various applications in quantum communication. The feasibility of generating such nonclassical states has been established in several branches of physics, such as cavity electrodynamics, trapped ions, quantum dots, atoms inside cavities, and so on. In this sense, we discuss the issue of the generation of nonclassical states in the context of a superconducting qubit in a microcavity. A way to engineer quantum states using a SQUID charge qubit inside a cavity, with a controllable interaction between the cavity field and the charge qubit, has recently been proposed. The key ingredients for engineering these quantum states are a tunable gate voltage and a classical magnetic field applied to the SQUID. Models including these ingredients, and using appropriate approximations which allow for the linearization of the interaction, have been studied, and nonclassical states of the field were generated. Since decoherence is known to affect quantum effects uninterruptedly, and the decoherence process is at work even while the quantum state is being formed, it is interesting to envisage processes through which quantum superpositions are generated as fast as possible. The decoherence effect has been studied and quantified in the context of cavity QED, where it is shown that the more quantum the superposition, the more rapidly the environmental effects occur during the process of creating the quantum state. In the latter reference, we succeeded in linearizing the Hamiltonian through the application of an appropriate unitary transformation, and for certain values of the parameters involved we showed that it is possible to obtain specific Hamiltonians. In this work we use this approach to prepare a superposition of two squeezed coherent states.
Introduction
In the past years, given its applications in quantum communication, the generation of nonclassical states of a radiation field has become more and more important. The generation of nonclassical states has become possible in various branches of physics, such as cavity electrodynamics, trapped ions, quantum dots, atoms within cavities, and so on. 1,2 In this sense, we discuss the generation of nonclassical states in the context of a superconducting qubit in a microcavity. Recently, a way was proposed to engineer quantum states using a SQUID charge qubit inside a cavity, with a controllable interaction between the cavity field and the charge qubit. The main ingredients for engineering quantum states are a tunable gate voltage and a classical magnetic field applied to the SQUID. The recent interest in the study of cavity-quantum-electrodynamics-type systems, such as a superconducting qubit, can open new ways of studying the interaction between light and solid-state quantum devices. 3,4 Various theoretical and experimental works have discussed the interaction of superconducting qubits with either quantized [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] or classical fields. [20][21][22] Recently, it has been proposed to engineer quantum states using a superconducting quantum interference device (SQUID) charge qubit inside a cavity 5,6 with a controllable interaction between the cavity field and the charge qubit. In references 5, 6 the proposed model includes these ingredients and, using some adequate approximations which allow for the linearization of the interaction, nonclassical states of the field are generated. The single-cavity scheme of references 5, 6 may be extended to generate entangled coherent states of two microwave cavity fields coupled to a SQUID-type superconducting box, as proposed in reference 8. In the literature one can also find proposals for the generation of entangled states and squeezed states using linear and nonlinear interactions between a microwave cavity field and a SQUID-type superconducting box; 9 schemes for the generation of multiqubit entangled cluster states, 10 for the deterministic generation of entangled photon pairs in a superconducting resonator array, 11 and for controlling the entanglement between two Josephson charge qubits. 12 We show that the essential contribution of the nonlinear interaction is to shorten the time necessary to build the quantum state. Since decoherence is known to affect quantum effects uninterruptedly, it is at work even while the quantum state is being formed. This has been studied and quantified in the context of cavity QED, where it is shown that the more quantum the superposition, the more rapid are the environmental effects during the process of creating the quantum state. 23 It is therefore interesting to envisage processes through which quantum superpositions are generated as fast as possible.
The model
We consider a system constituted by a SQUID type superconducting box with n c excess Cooper-pair charges connected to a superconducting loop via two identical Josephson junctions having capacitors C J and coupling energies E J , see Fig.(1a). An external control voltage V g couples to the box via a capacitor C g . We also assume that the system operates in a regime consistent with most experiments involving charge qubits, in which only Cooper pairs coherently tunnel in the superconducting junctions. Therefore, the system Hamiltonian may be written as 5,6,24 where E ch = e 2 /2(C g + 2C J ) is the single-electron charging energy, n g = C g V g /2e is the dimensionless gate charge (controlled by V g ), Φ X is the total flux through the SQUID loop and Φ 0 the quantum flux. The phase Θ is the quantum-mechanical conjugate of the number operator n c of the Cooper pairs in the box. The superconducting box is assumed to be working in the charging regime and the superconducting energy gap ∆ is considered to be the largest energy involved. Moreover, The superconducting box then becomes a two-level system with states |g (for n c = 0) and |e (for n c = 1) given that the gate voltage is near a degeneracy point (n g = 1/2) 24 and the quasi-particle excitation is completely suppressed. 25 If the circuit is placed within a single-mode microwave superconducting cavity, the qubit can be coupled to both a classical magnetic field (generates a flux Φ c ) and the quantized cavity field (generates a flux Φ q = ηa + η * a † , with a and a † the annihilation and creation operators), being the total flux through the SQUID given by Φ X = Φ c + Φ q , 7 see Fig.(1). The parameter η is related to the mode function of the cavity field. The Hamiltonian system will then read where we have defined the parameters γ = πΦ c /Φ 0 and β = πη/Φ 0 . The first term corresponds to the free cavity field with frequency ω = 4E ch / and the second one to the qubit having energy E z = −2E ch (1 − 2n g ) with σ z and σ x the Pauli matrices. The third term is the (nonlinear) photon-qubit interaction term which may be controlled by the classical flux Φ c . In general the Hamiltonian in equation (2) is linearized under some kind of assumption. In Ref., 5,6 for instance, the authors decomposed the cosine in Eq.(2) and expanded the terms sin[π(η a + H.c.)/Φ 0 ] and cos[π(η a + H.c.)/Φ 0 ] as power series in a (a † ). In this way, if the limit |β| ≪ 1 is taken, only single-photon transition terms in the expansion are kept, and a Jaynes-Cummings type Hamiltonian (JCM) is then obtained. In contrast to that, in the reference [26][27][28][29][30] it is presented a technique that obtain a JCM Hamiltonian valid for any value of |β|. This technique consists in applying a unitary transformation that linearizes the Hamiltonian of the system. After transforming the original Hamiltonian, it is possible to obtain a simpler Hamiltonian under certain resonance regimes.
In other words, it is possible linearize the superconductor/quantized field Hamiltonian without doing the usual power series expansions of the Hamiltonian.
Dynamics of the system
The central idea of the approach proposed in reference 26 which allows for the inclusion of nonlinear effects is the following: a unitary transformation is constructed in a way that diagonalizes the Hamiltonian leading it to a much simpler form. The nonlinear effects are therefore guarded in the transformation affecting directly the time evolution of the system in a tractable manner. The comparison of our proposal with other method is not simple. Normally the full hamiltonian is truncated after some kind of approximation -for instance, by taking the limit |β| << 1, a simple linearized 6 or nonlinear Hamiltonians 7 are obtained. In the method used here it is possible to obtain in a direct way, a Hamiltonian which allows an exact solution for the state vector in a specific resonance regime (as long as |β|). However, as the nonlinear effects are somehow guarded in the transformed Hamiltonian, they may give rise to a more complex dynamics, for example: in reference 28 for the preparation a Schrödinger cat (SC) of mode of cavity field interacting with a superconducting charge qubit; in reference, 29 the resulting dynamics exhibits typical behavior of a driven Jaynes-Cummings model 31 (or a trapped ion within a cavity 26 ), but without the presence of a classical driving field; in reference 27 for the preparation of SC with cold ions. We believe that the approach used here could be useful not only for establishing a direct connection to other well-known models in quantum optics, but also the exploration of different regimes in superconducting systems. Next we apply a unitary transformation to the full Hamiltonian given by (2) and make approximations afterwards. By applying the unitary transformation 26 to the Hamiltonian in equation (2), with D(α, γ) = D(α)e i γ 2 where D(α) = exp[(αa † − α * a)] is the Glauber's displacement operator, with α = iβ * /2, we obtain the following transformed Hamiltonian This result holds for any value of the parameter β. In the regime in which ω|β| = 4|β|E ch ≫ E J , that can be obtained for |β| ≥ 0.25, the Hamiltonian in Eq. (4) becomes Our Hamiltonian in Eq.(5) becomes a Jaynes-Cummings type Hamiltonian. For |β| = 0.25 the charge regime, E ch ≫ E J , is satisfied. Note that in the approach of reference, 5 the condition |β| ≪ 1 is also necessary, but for a different reason, i.e., to truncate the co-sine (sine) series. We should remark that in our scheme the Jaynes-Cummings evolution takes place in the transformed frame, differently from the model developed in. 5 The term | β 2 | 2 was not taken into account because it just represents an overall phase. The same setup and transformation given by (3) may also be employed (see reference 28 ) in a scheme for preparation of superpositions of coherent states of a single-mode cavity field (Schrödinger cats -SC) extending the approach of Ref. 5 The results is very similar to the SC obtained in Ref., 5 but in contrast to that we did not use the condition |β| ≪ 1. In our scheme, as |β| is large and the value of the amplitude of coherent states are proportional to t 2 , the time for preparing of an observable SC state is much shorter than that in other schemes.
Squeezed Coherent State
Now we show how to prepare a superposition of two squeezed coherent states. To obtain this superposition we set E z = 0 ( n g = 1/2) in (5). Here, we take advantage of the fact that the Hamiltonian given in (5) has not been approximated and, therefore, there are no restriction on the values of their parameters. By transforming the Hamiltonian (5) with the unitary operators, with ε 1 , ε 2 ≪ 1, and σ − , σ + are the Pauli matrices. Setting as where we consider β as real. Remaining up to first order in the expansion , doing a small rotation, we obtain the Hamiltonian In the Eq.(10) the first interaction term describe a squeezed state Hamiltonian and the second interaction one describe a dispersive Hamiltonian. For the regime in which ω −E J ≫ 4E J the Hamiltonian in Eq.(10) becomes a squeezed Hamiltonian Now to obtain a superposition of squeezed coherent states we will make a rotation on Eq.(11) so that σ z → σ x . This rotation is equivalent to applying the operator on (11) obtain the Hamiltonian with U R σ z U † R = σ x . If the system is initially in the coherent state |γ and the charge qubit is in the ground state |g = 1 √ 2 (|+ − |− ) where |+ (|− ) is eigenstate of the Pauli operator σ x with the eigenvalue 1(−1), we can entangle qubit states with superpositions of two different squeezed coherent states (SS) evolving in time as with the squeezed coherent states where ξ 2 = β 2 2 ω 2 4(EJ + ω) , |γ, ∓iξ 2 t/ = e −iωa † at∓iξ 2 (a 2 +a †2 )t/ |γ . Here, |γ, ∓iξ 2 t/ denote squeezed coherent states, and the degree of squeezing is determined by the time-dependent parameter ξ 2 t/ = β 2 ω 2 4(EJ + ω) t. The result is an entangled state involving qubit and a cavity field. If one measures the charge state (either in |g or |e ), the action will collapse the |Ψ R T (t) into a SS state |Φ ± . The form of Eq. (14) is very similar to the SS obtained in Ref. 5 But, in contrast to that, we did not do use the condition β ≪ 1. In our scheme, as β is large, and the value of the amplitude of coherent states are proportional to β 2 ωt = β 2 E ch t/ , ( ω = E ch ≫ E J ), the time for preparing an observable SS state is much shorter than that in other schemes.
Conclusion
In conclusion, we have presented an approach for preparing SS states of the mode of cavity field interacting with a superconducting charge qubit. In contrast to other schemes we include nonlinear effects. In general, approximations are made directly to the full Hamiltonian in equation (2) neglecting all higher orders of β. In our scheme, we first apply an unitary transformation to the Hamiltonian (2) and make the relevant approximations after performing the transformation. The result obtained holds for any value of the parameter β. In the regime in which ωβ ≫ E J , which can be obtained forβ ≥ 0.25, the Hamiltonian becomes a squeezed Hamiltonian. Based on the measurement of charge states, we show that SS states of a single-mode cavity field can be generated. Here, as |β| is large, and the amplitude of coherent states are proportional to β 2 E ch t/ , the time for preparing observable SS states is much shorter than in the linear regimes.
|
2020-03-25T01:01:14.367Z
|
2020-03-20T00:00:00.000
|
{
"year": 2020,
"sha1": "7cdaf5d10f75e10113a7c8616bb5421b15121e31",
"oa_license": "CCBYNCND",
"oa_url": "http://periodicos.uefs.br/index.php/SSCF/article/view/SCF.v16-A1/4824",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "7cdaf5d10f75e10113a7c8616bb5421b15121e31",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
268121127
|
pes2o/s2orc
|
v3-fos-license
|
Model Checking of Component Behavior Specification: A Real Life Experience 1
This paper is based on a real-life experience with behavior specification of a non-trivial component-based application. The experience is that model checking of such a specification yields very long error traces (providing counterexamples) in the order of magnitude of hundreds of states. Analyzing and interpreting such an error trace to localize and debug the actual specification is a tedious work. We present two techniques designed to address the problem: state space visualization and protocol annotation and share the positive experience with applying them, in terms of making the debugging process more efficient.
Software Component Behavior and Model Checking
Model checking is one of the formal verification methods.Checking for important properties of a system (e.g.absence of deadlocks, array element indices within limits) assumes a model describing the system behavior is available.The model defines a state space and the desired property is verified via its exhaustive traversal.In case of software model checking, a model can be obtained either from a system specification such as ADL (e.g.Wright [15], FSP [5], behavior protocols [1]) or via the source code analysis (the Bandera [10], SLAM [7] projects and Java PathFinder [11]).
Model checking faces two key inherent problems -state space explosion and error trace complexity and interpretation.An error trace is the path through the state space representing the particular computation in which the desired property is violated.The main problem regarding error traces is that a very long trace, in the order of magnitude of hundreds of states, may be very hard to analyze and interpret [21,22,23,24].
There are two widely used tactics for exhaustive traversal of the state space: Depth First Search (DFS) and Breadth First Search (BFS).Specification of a software unit (e.g.software component) usually generates huge state space.This is caused by the need of modeling large data type domains and parallelism (threads/processes).Therefore, the BFS-based tactics cannot be practically used because of their high memory requirements; instead, a DFS-based tactic has to be chosen.Unfortunately, in comparison with BFS, DFS has a drawback -the error trace it finds is not the shortest one in general.
Goals and Structure of the Paper
Behavior protocols [1] are a method of software component behavior specification.They are used for behavior specification in the SOFA [16] and the Fractal [4] component models.We employed behavior protocols in several non-trivial case studies of component behavior specification, comprising high number of components.This includes a non-trivial component-based test bed application in a project funded by France Telecom aiming at integration of behavior protocols into Fractal component model.One of the key lessons learned has been that the error trace length problem is severe and has to be addressed seriously.The goals of this paper are (i) to share with the reader the experience gained during specifying behavior of a non-trivial component-based application and show that the error trace length problem is really serious, and (ii) to describe the techniques we designed to address this problem.
These goals are reflected in the rest of the paper as follows: Sect.2.1 and 2.2 shortly describe behavior protocols and Sect.2.3 illustrates how to use them for component behavior specification and demonstrates the problem with the error trace length on a fragment of a non-trivial application that will be used as a running example.In Sect.3, as the key contribution, the proposed techniques for addressing the error trace length and interpretation problems are described.Sect. 4 contains an evaluation of the proposed techniques while Sect. 5 discusses related work.Sect.6 concludes the paper and suggests future research direction.
Behavior Protocols and Software Components
Software components are building blocks of software and communicate through interface bindings [4,15,16].A component may provide some functionality by its provides (server) interfaces and may require other functionality from its environment (other components) though its requires (client) interfaces.As an example, consider the DhcpServer component on Fig. 3.It is a composite component built of two other components -ClientManager and DhcpListener that are bound via their Listener interfaces.The DhcpServer has a provides interface (Mgmt) and two requires interfaces (PermanentDb and Callback).
A behavior protocol [1] is an expression describing the behavior of a component; the behavior means the activity on component's interfaces viewed as sequences (traces) of accepted and emitted method call events.A behavior protocol 2 is syntactically composed of event denotations (tokens), the operators (Fig. 1 and parentheses.For a method m on an interface i, there are four event token variants: Emitting an invocation: !i.m↑ Accepting an invocation: ?i.m↑ Emitting a response: !i.m↓ Accepting a response: ?i.m↓ Furthermore, three syntactic abbreviations of method calls are defined: Issuing a method call: !i.m is an abbreviation for !i.m↑;?i.m↓ Accepting a method call: ?i.m is an abbreviation for ?i.m↑;!i.m↓ Processing of a method: ?i.m {expr } stands for ?i.m↑;expr ;!i.m↓ meaning that expr defines the m's reaction to the call in terms of issuing and accepting other events.As an example consider the fragment of behavior protocol in Fig.Although a behavior protocol may define an infinite set of traces, each trace is finite -the repetition operator denotes any arbitrary finite number of its argument repetition.Each behavior protocol defines a finite automaton with transitions labeled by the protocol's events.A frame (behavior) protocol of a component describes its "black-box" behavior (only the events on provides and requires interfaces are visible), while an architecture protocol of a (composite) component describes its behavior as defined by the composition of its first-level subcomponents, i.e. the communication events of these subcomponents appear in the behavior.Using the DhcpServer composite component in Fig. 3 as an example, its frame protocol contains only the events of the Mgmt, PermanentDb and Callback interfaces; the architecture protocol of the DhcpServer component is created by a parallel composition of frame protocols of DhcpListener and ClientManager components.
Protocol Compliance and Composition
The key benefit of using behavior protocols to describe behavior of components is at the design stage of an application.The developer can check whether the components he/she composes have compatible behavior: it enables for checking the component compatibility both horizontally (e.g. between the ClientManager and DhcpListener components) and vertically (between the DhcpServer frame protocol and the architecture protocol created by parallel composition of the ClientManager and DhcpListener frame protocols) [1].
The horizontal protocol compatibility is defined via the consent operator [2], which is basically a parallel composition converting the subcomponents' communication events to internal (τ ) events.This is similar to CSP, however in addition the consent composition detects three kinds of composition errors: bad activity, no activity, and infinite activity.Bad activity occurs when a component emits a call on an interface and the component providing that interface is not able to accept (according to its behavior protocol) such a call.No activity is a deadlock and infinite activity means that there is "no agreement" in two composed repetitions on a joint exit (there is a loop that cannot be exited due to the nature of communication).The consent operator and composition errors are thoroughly described in [2].
The vertical compatibility is captured via protocol compliance [1].The protocol compliance is defined between the frame protocol of a component and its architecture protocol, i.e. the protocol created from its subcomponents' frame protocols composed via the consent operator.
Example: A Fragment of the Test Bed Application
In this section we describe a fragment of a test bed application ("Wireless Internet Access") mentioned in Sect.1.2.The application is a quite complex system allowing clients of various air-carriers to access the Internet from airport lounges via local Wi-Fi networks.The whole Wireless Internet Access application is composed of about 20 Fractal components.One of the key components is the DhcpServer composite component (Fig. 3).It communicates with system's clients at the lowest level, i.e. it is responsible for managing clients' IP addresses, monitoring overall state of the local wireless network and providing this information to the rest of the system.A simplified version is presented in this section.
DhcpServer Architecture
In principle, the DhcpServer composite component works in two functionality modes which can be swapped via the Mgmt interface: (i) DhcpServer generates IP addresses dynamically for new clients (this is the default functionality that can be also set by calling the UseTransientIPs method on the Mgmt interface).
(ii) DhcpServer assigns IP addresses statically based on mappings between clients' MAC and IP addresses in an external database accessible via the PermanentDb interface (this functionality is set by calling the UsePermanentIPs method on the Mgmt interface).
When a client disconnects from the network, the DhcpServer calls the Disconnected method on its Callback interface to notify its environment about this event.
As already mentioned, the DhcpServer functionality is implemented by its subcomponents: ClientManager and DhcpListener.The architecture of the DhcpServer and bindings between the subcomponents is shown on Fig. 3.
Fig. 4. Frame protocol of DhcpListener
The DhcpListener component is responsible for the "real" communication with network clients and the network infrastructure.Internally it uses existing system infrastructure to manage client nodes.Events that occur at the network level are unified by DhcpListener which converts them to method calls.As they can arrive at any time, the corresponding frame protocol has to express the inherent parallelism (Fig. 4).
ClientManager accepts notifications on network events from the DhcpListener and processes them either internally (RequestNew and Update) or forwards them to DhcpServer's environment (via Callback.Disconnected) as part of Return processing.
ClientManager's behavior is expressed by its frame protocol in Fig. 5.The part A of the protocol represents the "generate IP addresses dynamically" functionality of ClientManager while the part B represents the "assign IP addresses statically" functionality.The parts A.1 and B.1 express the Client-Manager's ability to process DhcpListener's notifications and also describe reactions to them.The parts A.2 and B.2 capture ClientManager's ability to detect client disconnections internally, resulting in a call of Disconnected.The ClientManager's functionality mode swapping mechanism is reflected in the parts A.3 and B.3: At any time, ClientManager can accept a method call requesting a mode change (?Mgmt.UsePermanentIPs↑ or ?Mgmt.UseTransientIPs↑), but it does not respond it immediately.Instead, it waits until the processing of all pending method calls on the Listener interface is finished and then it issues the !Mgmt.UsePermanentIPs↓ or the !Mgmt.UseTransientIPs↓ response.Then ClientManager is again ready to accept further calls on the Listener interface and respond to them according to its newly set functionality mode.
DhcpServer Frame Protocol
The frame protocol of DhcpServer is shown in Fig. 6.The interactions between DhcpServer's subcomponents are not visible in it.However, their communication can trigger interaction with the environment of DhcpServer that is therefore visible in its frame protocol.This is illustrated by the part C of the frame protocol in Fig. 6: the !Callback.Disconnected call can be invoked by the ClientManager subcomponent either as a reaction to an accepted ?Listener.Return call or due to its internal detection of client disconnection (Sect.2.3.1);however these two causes are indistinguishable in the DhcpServer frame protocol.The part D of the protocol expresses the DhpcServer's ability to swap between its two modes (Sect.2.3.1).
Approaches to Error Trace Analysis and Interpretation
In behavior protocols, an error trace's end is reflected in the state space (defined by the protocol) as a state F. It is a specific feature of behavior protocols that each trace reaching F is an error trace.Hence, F is an error state.In consequence, an error state represents a set of error traces SF. (Note that the existence of error states is not a general feature of an LTS.) Finding all elements of SF means complete traverse of the state space.Sometimes, however, the knowledge of the whole set of error traces corresponding to an error state may be very beneficial for error cause's identification.As the set of error traces may be huge (or even infinite), providing it as a list of traces would not be of much help.Therefore, additional forms of SF representation are needed.
Plain Error Trace
As demonstrated in Sect.2.3.3, an error trace identifying a compliance or composition error may be quite long and hard to interpret.Moreover, due to the DFS tactic used, the error trace may contain states not capturing "the essence" of the error.For example, the state subsequence S5, S226, S230, S231 of the error trace in Fig. 7 also forms an error trace, but the longer one was found first.In this respect, the other states are "not-important" ones.It is a challenge to filter out these "not-important" states (to find a canonical representation of the error trace set associated with an error state).One can imagine a filtering technique based on iterative re-searching the state space, which would take advantage of the knowledge of the depth at which the error was found.
State Space Visualization
One of the checking outputs we propose in order to make error interpretation easier is state space visualization.Visualization is a graphical representation of the state space associated with the protocol (Sect.2.2).For the state space related to Sect.2.3.1, this is illustrated on Fig. 8 (only a fragment of the state space is captured here for brevity).This helps find out what the problem cause is by tracking the error trace in the state space.Apparently, state space size might be a problem here -a state space having more than 1,000 states is hard to visualize.Thus, visualizing only a part of the state space becomes a practical necessity.In this perspective, capturing only the part containing the error state and its "neighborhood" is a straightforward thought.We employed this idea with a very positive experience.Such a result still provides useful information, detailed enough to identify where the essence of an error is.Technically, our visualization outputs all the transitions leading from a state on the error trace -this helps with finding correspondence with the original protocol.
Fig. 8. State space visualization -dashed lines represent longer paths omitted due to the limited space of this paper.The state S231 is the error state F.
Protocol Annotation
Another way of representing an error state are annotated protocols.Consider a composition of protocols P and Q via the consent operator.If the composition yields a composition error in an error state S, the state S is represented by marks <HERE> put into P and Q, forming the annotated protocols PS and QS.
For illustration consider Fig. 9 where a fragment of the annotated frame protocol of DhcpServer corresponding to the error trace in Sect.2.3.3 is depicted.Advantageously, there is no need to construct the entire state space, but it suffices to annotate only the protocols featuring as operands in a composition.For example, the set of error traces specified by the annotated protocol in Fig. 9, together with the annotated architecture protocol of DhcpServer internals, yields the error traces: τ Callback.Disconnected↑; τ Callback.Disconnected↓; τ Mgmt.UsePermanentIps↑; τ Mgmt.UsePermanentIps↓ and τ Mgmt.UsePermanentIps↑; τ Mgmt.UsePermanentIps↓; τ Callback.Disconnected↑; τ Callback.Disconnected↓There are two issues to be addressed with this technique: (i) Identical prefixes in alternatives.For example, consider the following frame protocol: (?i.m1; ?i.m2) + (?i.m1; ?i.m3).If an error state is to be indicated after ?i.m1, the corresponding annotated protocol takes the form: (?i.m1<HERE>; ?i.m2) + (?i.m1<HERE>; ?i.m3)Even though one of the alternatives could be eliminated, we prefer keep them both to provide more context of the error.
(ii) Transformations performed on input protocols.In the protocol checker, the protocols are modified during the parsing process (e.g.?i.m is decomposed into ?i.m↑; !i.m↓ and the formatting information is lost).Therefore, exact mapping of an error state back to the source protocols may be difficult.Fortunately, the transformations typically still yield a reasonably readable behavior protocol, which, annotated, provides useful information for specification debugging.
Evaluation
During the work on the case study mentioned in Sect.2.3, it has turned out that combining all of the three forms of checking output is the most promising approach.Even though protocol annotation (Sect.3.3) appears a very generic technique, in complex cases the other checking outputs have to be also provided, since tracking all the path alternatives in a annotated complex protocol may be error-prone.
The most complex components of the case study have behavior protocols with up to 60 events; such behavior protocols generate a state space with hundreds of thousands of states.The typical errors encountered during the development of such components then generate error traces of about 100 states in length.However there were also some error states that generated error traces with several hundreds of states.It then took the developer about an hour (often even more) to identify the actual error in case only a plain error trace was available.The checking output techniques presented in Sect. 3 have been developed to improve debugging efficiency.During the further development of our case study application, the developers used a combination of these techniques and an average time to resolve a typical error shortened down to one third or one forth of the original time.
As for the plain error trace checking output, a problem is the existence of "local loops" in behavior of a component.Typically, with respect to the other parts of the system, the actual number of local loop traversals is of no significance in terms of an error localization.These loops lengthen the error trace, making it more complex and hard to analyze.Apparently, if loops are nested, the situation is even worse.A desire is to eliminate those of "no influence" on the rest of the system.This is a challenging problem -currently, only the highest-level loops are identified and eliminated in an automated way.
Annotated protocols are very similar to the approach used in Bandera Toolset [10] and PREfast [3] since they are based on emphasizing of the positions in the input protocols where a composition error has been found.Unlike in Bandera and PREfast, in behavior protocols the positions between two operations are highlighted to denote an error state.
Related Work
In [23], the authors address the counterexample complexity and interpretation problem by proposing a method for finding "positives" and "negatives" as sets of related correct traces and error traces.An interesting approach is chosen in [21], where the authors analyze the complexity of error explanation via constructing the "closest" correct trace to a specific error trace.In [24], the authors describe an algorithm ("delta debugging") for finding a minimal test case identifying an error in a program.This idea could be used to modify an error trace in order to find a "close enough" correct one.An optimization of the checking process is described in [22] where multiple error traces are generated in a single checking run.
Static Driver Verifier (SDV) [6] is a tool used to verify correct behavior of WDM (Windows Driver Model) [8] drivers.The driver's source code in C and the model written in SLIC (a part of the SLAM project [7]) are combined into a "boolean" program that is maximally simplified and selected rules are checked.If a rule is violated, an error trace of the program is generated and mapped back to the driver's C source code.Because WDM drivers are very complex, to make checking feasible, both the Windows kernel model and the rules used in the SDV have to be simplified.Thus the error traces generated by SDV are relatively short and easy to interpret.And, since they contain also the states corresponding to traversing through the kernel model, such parts are optionally hidden in the checking output.This solution might be also applicable to our plain error traces (Sect.3.1): The events generated inside a method call could be grouped into the "background" (Sect.2.3.3).
However, because it is not easy to identify the beginning and the end of a single method call in error trace (especially when the i.m{...} shortcuts are not used), employing this idea in the behavior protocol checker is not a trivial task.
As to the classical model checker SPIN [9], in case of violating of checking property specified in LTL, Spin allows traversing the trace to the error state while watching the variable values, process communication graph, and highlighted source code.Sometimes the error trace length makes this approach very hard to use and identification of the actual problem may be quite challenging.Although the approaches to ease the interpretation of an error trace in SPIN work well in most cases, its modelling language Promela [9] is not a suitable specifying software components.Since such specification in Promela typically yields a large state space impossible to traverse in a reasonable time.
As for other tools, Java PathFinder (JPF) [11], Bogor [17], BLAST [18], SMV [12], Moped [19], and MAGIC [20] cope with counterexamples and all provide them as error traces.Specifically, JPF, Bogor, BLAST, Moped, and MAGIC print the sequence of steps leading to an error state annotated by a corresponding line of the source code, while the SMV tool provides an error trace consisting of the input file lines written in the SMV specification language.Moped is a similar to SDV in the sense that it first translates the input program (in Java) into the language of LTL in which the counterexamples are generated.They are then translated back to the input language.The MAGIC tool checks behavior of a C program against a specification described via an LTS.Besides an error trace, it can also generate control flow graphs and LTSs using the dot tool of GraphViz package [13] (also used by the behavior protocol checker).In all cases, but especially in the case of JPF, the error trace may get quite complex and not easy to interpret.
Conclusion and Future Work
During the work on the project (Sect.2.3.1) it has turned out that, besides plain error trace, additional checking outputs are needed for speeding up error detecting and debugging process.Therefore, we introduced two more approaches: (i) state space visualization, and (ii) annotated protocols.Using all the three methods in combination was found most beneficial (locating an error was then more efficient (Sect.4)).
Problems arise when checking the composition/compliance of several components described by really complex behavior protocols.The large state space generated by such a protocol causes that an error trace is typically very long and hard to interpret.Still, in our view, this is worth to pursue since we believe that the components' compatibility problem cannot be restricted to the syntactic/type compatibility of their (bounded) interfaces [1], even though this could be checked with much smaller effort and would avoid the problems discussed in this paper; in fact, we can hardly imagine putting together a non-trivial component-based application of the size mentioned in Sect.2.3.1, if the compliance checks were based only on syntactic/type compatibility of individual interfaces.
Our future work is therefore focused on improving the methods currently used by the behavior protocol checker; in particular, a method for automated removing of unnecessary "local loops" (Sect.4) would further simplify the plain error trace checking output.
As for state space visualization, an automated method for detecting the "important" part of the state space (currently done by hand) is needed to simplify the resulting graphical representation of an error trace.
Similar to Bandera [10] and PREfast [3], the possibility to dynamically indicate the correspondence between a particular position in an error trace and the associated part of the protocol would perhaps further ease and speed up the debugging process.
Fig. 6 .
Fig. 6.First version of the frame protocol of DhcpServer (Instead of +, the | operator should have been used here as demonstrated by the error trace in Sect.2.3.3)
2. According to it, the ClientManager component is able to accept RequestNew, Update and Return method calls on the interface Listener in parallel any finite number of times.If a Return method call is accepted, the component reacts by performing a Disconnected method call on its Callback interface.Furthermore, a Disconnected method call can be emitted at any time.
|
2018-01-23T22:39:21.209Z
|
2006-08-08T00:00:00.000
|
{
"year": 2005,
"sha1": "0fc0174cf8ec490d69ad08107c6515db8f827f50",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.entcs.2006.05.023",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "33432b83035a4f3dcf2736cfdfb34cf75a202950",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
1181103
|
pes2o/s2orc
|
v3-fos-license
|
Mechanisms of plasma membrane targeting of formin mDia2 through its amino terminal domains
We investigated the poorly understood mechanism of plasma membrane targeting of formin mDia2 and found that its N terminus plays important roles in this process by binding acidic phospholipids through its N-terminal basic domain and by binding small GTPase Rif through direct interaction with the GTPase binding region and the diaphanous inhibitory domain.
INTRODUCTION
The ability of cells to move is a requisite for organismal development and survival. A common mode of cell motility entails shape changes, such as protrusion of the front and retraction of the rear of the cell. These processes are powered, in large part, through forces generated by the actin cytoskeleton. To protrude the leading edge, cells exert force on the plasma membrane through elongating actin fi laments, with their barbed ends oriented toward the membrane (reviewed in Chhabra and Higgs, 2007 ). This process is regulated by a large number of structural and regulatory proteins.
Formin family proteins are key regulators of actin polymerization. They promote actin assembly by nucleating actin fi laments de novo and by enhancing their elongation at the barbed end ( Pruyne et al. , 2002 ;Higgs, 2005 ;Paul and Pollard, 2009 ;Chesarone et al. , 2010 ). Both of these functions are mediated by the evolutionarily conserved formin homology domain 2 (FH2). FH2 dimers associate processively with the barbed ends of elongating actin fi laments, simultaneously allowing for the incorporation of actin monomers and protecting the barbed end against capping proteins. In the presence of profi lin-actin complexes, the elongation-promoting activity of formins is greatly enhanced by the upstream proline-rich formin homology domain 1 (FH1), which is thought to mediate the transfer of profi lin-actin complexes onto the barbed ends of growing fi laments ( Kovar et al. , 2006 ).
Thus, multiple mechanisms converge to fi ne-tune the subcellular localization of formins, involving virtually any formin domain. Among formins, subcellular localization has been most extensively studied for mDia1, but even for this formin the mechanism of targeting is not fully understood. Even less is known about the targeting mechanism of mDia2. Despite the close similarity and some functional overlap between these two formins, their cellular functions are different. Whereas mDia1 is mostly involved in the generation of contractile actin bundles and adhesions ( Nakano et al. , 1999 ;Watanabe et al. , 1999 ;Yamana et al. , 2006 ;Carramusa et al. , 2007 ;Ryu et al. , 2009 ), mDia2 is mostly implicated in the formation of membrane protrusions ( Peng et al. , 2003 ;Pellegrin and Mellor, 2005 ;Schirenbeck et al. , 2005 ;Yang et al. , 2007 ), although it also plays a role in cytokinesis ( Watanabe et al. , 2010 ). Accordingly, activated mDia2 functions at the interface between actin fi lament barbed ends and the plasma membrane, and the interactions with both surfaces control its subcellular localization ( Yang et al. , 2007 ). Our previous data suggested that the N-terminal region of mDia2 is required for stable association with the leading edge of the cell ( Yang et al. , 2007 ). Recent analysis of mDia2 targeting to the cytokinetic ring revealed that it involves interaction of the G region with RhoA and of the DID-DD-CC region with the scaffolding protein anillin ( Watanabe et al. , 2010 ). The mechanism of mDia2 interaction with the membrane during cell motility remains unknown, however.
In this study, we performed a detailed structure-function analysis of mDia2 to defi ne which domains are involved in its localization to the plasma membrane in an actin-independent manner. Because sequences N-and C-terminal to the FH1-FH2 module appear to play a targeting role in other formins, we considered these sequences to be candidates to target mDia2 to the membrane. In addition to the traditionally recognized N-terminal domains of mDia2, we also evaluated the role of the poorly characterized 90-aa region at the extreme N terminus preceding the G region . We refer to this region as the basic domain (BD), because of its positive charge, with a predicted isoelectric point of ∼10. We hypothesized that the BD could potentially play a role in targeting mDia2 to the membrane, because other proteins have been shown to bind charged plasma membrane phospholipids through their BDs (McLaughlin and Murray, 2005 ).
mDia2 is targeted to the plasma membrane through its N-terminal domains
The formin mDia2 is a modular protein consisting of a series of semiindependent functional domains (Goode and Eck, 2007 ). To test which of the domains of mDia2 localize to the plasma membrane in an actinindependent manner, we expressed green fl uorescent protein (GFP) fusion of mDia2 domains, or their combinations, together with the cytoplasmic marker mRFP1 in HeLa cells ( Figure 1, A and B ). We excluded the region encompassing the FH1-FH2 domains (aa 529-1025) from this analysis because it can be targeted to the plasma membrane by binding to actin fi lament barbed ends. We analyzed the plasma membrane localization of GFP fusion proteins by confocal micro scopy, using midplane optical sections of expressing cells ( Figure 1B ).
We fi rst evaluated the localization of constructs fl anking the FH1-FH2 module on both sides, the N-terminus (Nt, 1-528) or the C-terminus (Ct, 1026-1171 ), using GFP and full-length GFP-mDia2 as gous to the reported plasma membrane localization of the N termini of mDia1 and FRLα ( Seth et al. , 2006 ). Treatment with the actindepolymerizing drug latrunculin B (2 μM for 30 min) did not alter the membrane localization of GFP-Nt, consistent with its localization being actin-independent ( Figure 1B ).
To quantify the degree of plasma membrane localization of various GFP-tagged constructs, we devised a parameter termed the plasma membrane localization index (PM index; see Materials and Methods ). The PM index equals zero for nonmembranetargeted constructs, whereas enrichment at the membrane results in positive values of the PM index. PM indices ≤ 0.3 were close to the detection limit of our method, and constructs having PM indexes in this range were not considered to be targeted to the membrane. An increment of one unit in the PM index corresponds to a 100% increase in the average fl uorescence intensity of a protein at the membrane as compared to that of the GFP control. The values of the PM index for different constructs ( Figure 1C ) confi rmed the conclusions drawn from the visual inspection of confocal images. Specifi cally, the PM index for GFP, GFP-mDia2, and GFP-Ct were close to zero. In contrast, the PM index for GFP-Nt was 3.5 ± 1.9, indicating strong plasma membrane localization. This value did not change signifi cantly after latrunculin B treatment (3.7 ± 1.5, Figure 1C ), suggesting that plasma membrane targeting of GFP-Nt is independent of actin fi laments.
The BD of mDia2 is important for plasma membrane targeting The N terminus of mDia2 consists of the following characterized domains: G (91-148), DID (149-397), DD (398-468), and CC (469-528). The exact boundaries of these domains have been determined based on their crystal structures of mDia1 ( Otomo et al. , 2005 ) and have been deduced for mDia2 based on sequence alignment (Supplemental Figure 1). The fi rst 90 amino acids of mDia2 are poorly characterized. We named this region the BD, because it is rich in positively charged residues (Supplemental Figure 1) and has a predicted isoelectric point of ∼10.
To determine which of the domains of the mDia2 Nt contribute to plasma membrane targeting, we generated GFP fusion constructs of BD, G, DID, and DD-CC ( Figure 1A ). We expressed these constructs in HeLa cells and tested their plasma membrane localization by confocal microscopy ( Figure 1B ). To our surprise, only GFP-BD showed signifi cant localization to the plasma membrane ( Figure 1, B and D ) with a PM index of 1.0 ± 0.5 ( Figure 1C ), lower than that of GFP-Nt, but significantly higher than that of GFP alone. GFP-BD was also enriched in the nucleus, consistent with a previous fi nding that this region harbors a nuclear localization signal (NLS) (Miki et al. , 2009 ). The lack controls ( Figure 1, A and B ). As expected, GFP and mRFP1 had similar diffuse localizations in the cytoplasm and the nucleus ( Figure 1B ). Fulllength GFP-mDia2 was also diffusely distributed in the cytoplasm, but was excluded from the nucleus, consistent with previous observations ( Miki et al. , 2009 ). Such a distribution is believed to refl ect the autoinhibited conformation of mDia2. The C terminus of mDia2 was also cytoplasmic, suggesting that it is unlikely to contribute to membrane targeting of mDia2. In contrast, GFP-Nt was strongly localized to the plasma membrane and largely depleted from the cytoplasm, analo- Plot shows average fl uorescence intensity of the GFP (green) and mRFP1 (red) signals within a 10-pixel-wide line drawn across the cell expressing GFP-BD and mRFP1 (inset). Two green peripheral peaks correspond to a plasma membrane pool of GFP-BD, and the broad central plateau corresponds to the nuclear pool of GFP-BD. Scale bars, 10 μm. unstructured peptide. Together, these data indicate that the BD is an important membrane-targeting region of mDia2 and that its association with the plasma membrane depends on electrostatic interactions and the presence of two predicted amphipathic helices.
G region, DID, and DD-CC collectively contribute to plasma membrane localization Although the plasma membrane localization of GFP-NtΔBD was drastically impaired by the absence of the BD, GFP-NtΔBD was not completely cytoplasmic based on confocal microscopy and cell fractionation results. In other words, even though individual G, DID, and DD-CC fragments did not detectably localize to the plasma membrane, collectively they did ( Figure 1C ). Therefore, we assessed the contributions of various combinations of these domains toward targeting of GFP-NtΔBD to the plasma membrane ( Figure 3 ).
We observed negligible plasma membrane enrichment of the isolated G region (PM index = 0.2 ± 0.1; Figure 1B ) even though this region, in conjunction with a part of DID, is thought to target formins to the membrane through GTPase binding . We then asked whether the two predicted amphipathic helices of the BD that lie immediately N-terminal to the G ( Figure 2D ) form a common GTPase-binding unit with the G region in mDia2. To answer this question, we extended the G construct to include the two predicted helices of the BD ( Figure 3A ). The resulting construct GFP-BD 38-91 -G (38-148) remained cytosolic, however (PM index = 0.3 ± 0.2; Figure 3B ). Only when the entire BD was added to the G did the resulting fragment GFP-BD-G (1-148) show appreciable plasma membrane localization ( Figure 3 ), most of which can be accounted for by the BD, because the contribution of the G region was still undetectable.
The crystal structure of a complex of mDia1 and RhoC demonstrated that both the G region and adjacent DID mediate the interaction with the GTPase , providing another potential explanation for the lack of plasma membrane localization of GFP-G. Therefore, we tested whether adding the DID to the G or to BD-G would improve plasma membrane localization. GFP-G-DID, however, was still largely cytosolic (PM index = 0.3 ± 0.1), similar to GFP-G ( Figure 3 ). Likewise, the localization of GFP-BD-G-DID was comparable to that of GFP-BD-G ( Figure 3 ).
Because GFP-G-DID did not signifi cantly bind the membrane, but GFP-NtΔBD did, we considered a role of DD-CC in membrane localization. Neither GFP-DD-CC (PM index = 0.1 ± 0.1; Figure 1B ) nor GFP-DID-DD-CC (PM index = 0.3 ± 0.2) localized to the plasma membrane ( Figure 3 ), however, suggesting that the membrane targeting of GFP-NtΔBD was not caused by direct binding of DD-CC to the membrane or by dimerization with endogenous mDia2. These results further suggested that the G region is necessary for membrane targeting of GFP-NtΔBD, and DD-CC appeared to potentiate the membrane-binding ability of G-DID.
Of interest, the addition of DD-CC to GFP-BD-G-DID enhanced membrane targeting as evident from an increase of the PM index from ~0.6 to ~3.5 in the resulting Nt construct ( Figure 3 ). Yet, removal of the CC domain from the mDia2 N terminus dramatically reduced membrane targeting of the resulting construct GFP-NtΔCC (PM index = 0.6 ± 0.7). Thus, the CC domain plays a critical role in targeting the N terminus of mDia2 to the membrane.
The above data raised the possibility that the G-DID had membrane-targeting capabilities that were uncovered only in the presence of the surrounding domains. To test more directly whether the G-DID participates in membrane targeting through interaction with GTPases, we substituted residue Ser-184 of the DID by glutamic acid (S184E) within construct GFP-Nt ( Figure 3 ). Indeed, structural and of membrane localization of GFP-G and GFP-DID was not due to degradation as they were expressed at a correct molecular weight (Supplemental Figure 2).
If the BD plays an important role in plasma membrane targeting, its deletion should compromise plasma membrane localization of the other N-terminal domains. Consistent with this idea, plasma membrane targeting of GFP-NtΔBD (91-528) was severely compromised, as refl ected by a decrease of the PM index to 0.7 ± 0.4 ( Figure 1, A-C ). These results strongly suggest that the BD is a key determinant for plasma membrane localization of mDia2 Nt, and most likely also contributes to the targeting of activated full-length mDia2.
To verify the data obtained by confocal microscopy, we performed subcellular fractionation of HeLa cells expressing GFP, GFP-BD, GFP-NtΔBD, and GFP-Nt ( Figure 2A ) to determine the extent to which these constructs partition with cellular membranes. The results showed no recovery of GFP in the membrane fraction, whereas 28 ± 8% of GFP-BD, 15 ± 3% of GFP-NtΔBD, and 41 ± 8% of GFP-Nt were present in the membrane fraction, suggesting that the ability of these proteins to bind the membrane decreases in the following order: Nt > BD > NtΔBD > GFP. These results are consistent with the data obtained by confocal microscopy ( Figure 1C ), and confi rm that the BD of mDia2 associates with the plasma membrane and contributes to plasma membrane targeting of mDia2 Nt.
We hypothesized that the positively charged BD could be targeted to the plasma membrane by direct interaction with negatively charged phospholipids. To test this idea, we performed a lipidprotein overlay assay with purifi ed GST-BD and found that it binds acidic lipids but not neutral lipids ( Figure 2, B and C ). The interaction with acidic phospholipids seemed to be nonspecifi c, suggesting an electrostatic mode of interaction.
The N-terminal portion of the BD (1-37) contains most of the positively charged amino acids, but lacks clusters of hydrophobic amino acids and is, therefore, predicted to be intrinsically disordered. In contrast, hydrophobic cluster analysis ( Gaboriaud et al. , 1987 ) reveals the presence of two clusters of hydrophobic amino acids within the C-terminal portion of the BD (38-90), displaying a characteristic helical pattern ( Figure 2D ). Using a helical wheel representation of this region, we found that hydrophobic amino acids and charged (or polar) amino acids are mostly clustered on two opposite sides of the helical wheel, indicative of amphipathic helices ( Figure 2D ). To test the role of the two predicted amphipathic helices of the BD in membrane localization, we prepared constructs GFP-BD 1-64 and GFP-BD 1-37 lacking one or both of the hydrophobic clusters, respectively ( Figure 2, D and E ). Confocal microscopy of HeLa cells revealed that GFP-BD 1-64 was somewhat enriched at the plasma membrane (PM index = 0.6 ± 0.3), whereas GFP-B 1-37 was mostly cytoplasmic (PM index = 0.2 ± 0.2). These data suggest that the C-terminal helical portion of the BD is important for its localization to the plasma membrane.
To further explore the roles of the basic stretch and hydrophobic clusters of the BD in plasma membrane targeting, we generated construct GFP-NtΔ1-37 (38-528), which lacks the N-terminal stretch of basic residues, and evaluated its membrane localization in cells (Figure 2, D and E). The PM index of GFP-NtΔ1-37 was 2.1 ± 1.4, which is significantly higher than that of GFP-NtΔBD (0.7 ± 0.4) and points to an important role of the two predicted amphipathic helices of the BD in plasma membrane targeting. The plasma membrane localization of GFP-NtΔ1-37 was significantly lower than that of GFP-Nt (PM index = 3.5 ± 1.9), however, suggesting that the N-terminal basic stretch of the BD also contributes in a significant way to plasma membrane binding. The isolated construct GFP-B 1-37 did not significantly localize to the membrane, possibly due to degradation of this short fragment.

FIGURE 2: The BD of mDia2 is important for plasma membrane targeting and binds acidic phospholipids in vitro. (A) Subcellular fractionation of HeLa cells expressing GFP fusion proteins. Cytoplasmic (Cyt) and membrane (Mem) fractions of cells expressing the indicated constructs (top row) were loaded on the gel in the volume equivalents shown by the numbers above the corresponding lanes. Western blotting with GFP antibody was used to detect the expressed proteins. Tubulin and IRSp53 were used as cytoplasmic and plasma membrane markers, respectively, to confirm successful fractionation. The calculated percentage of GFP fusion proteins in the membrane fraction (%) is shown at bottom. (B) Purified GST-BD (arrow) shown by Coomassie staining of an SDS-PAGE gel. (C) Protein-lipid overlay assay showing GST-BD binding to acidic, but not neutral, phospholipids (left panel). No binding is detected for GST alone (right panel). Abbreviations: LPA, lysophosphatidic acid; LPC, lysophosphocholine; PI, phosphatidylinositol; PI(3)P, phosphatidylinositol-3-phosphate; PI(4)P, phosphatidylinositol-4-phosphate; PI(5)P, phosphatidylinositol-5-phosphate; PE, phosphatidylethanolamine; PC, phosphatidylcholine; S1P, sphingosine-1-phosphate; PI(3,4)P2, phosphatidylinositol-3,4-bisphosphate; PI(3,5)P2, phosphatidylinositol-3,5-bisphosphate; PI(4,5)P2, phosphatidylinositol-4,5-bisphosphate; PI(3,4,5)P3, phosphatidylinositol-3,4,5-trisphosphate; PA, phosphatidic acid; PS, phosphatidylserine. (D) Secondary structure predictions for the basic domain of mDia2. Top: Hydrophobic cluster analysis (HCA) reveals an abundance of positively charged residues (blue) in the 1-20 region and two putative hydrophobic clusters in the 40-60 and 70-80 regions (outlined). In the HCA, stars are prolines, black diamonds are glycines, and empty and filled squares are threonines and serines, respectively. Middle: Helical wheel presentation of a putative amphipathic helix for amino acids 43-60 corresponding to the first hydrophobic cluster in the HCA plot. Nonpolar residues are clustered on the upper left side of the wheel. Bottom: Diagrams of mDia2 constructs used in panel E. (E) Confocal microscopy images of HeLa cells coexpressing the indicated GFP-mDia2 fusion constructs and mRFP1 as a cytoplasmic marker. Average PM indices with standard deviations and numbers of quantified cells in parentheses are shown to the right of the merged images. Scale bar, 10 μm.

Biochemical studies of mDia1 (Lammers et al., 2008) have shown that Ser-184 occupies a central position at the binding interface between the G-DID and the GTPase, such that the S184E mutation would be expected to abrogate GTPase binding. Consistent with this expectation, the localization of the mutant GFP-Nt-S184E was severely impaired (PM index = 0.7 ± 0.3). Any residual membrane-binding activity of this mutant might be ascribed to the BD.

Collectively, these results confirm the importance of G-DID in binding to the plasma membrane and suggest that DD-CC potentiates membrane binding of G-DID by increasing its avidity through dimerization and/or by presenting it in a conformation that is more optimal for membrane binding.

Effects of small GTPases on membrane targeting of mDia2

The inability of G-DID to target the plasma membrane might be explained by insufficient amounts of activated GTPases at the plasma membrane. To address this possibility, we investigated whether coexpression of constitutively active small GTPases would enhance plasma membrane targeting of G-containing mDia2 constructs. Several small GTPases of the Rho family, including RhoA, Cdc42, Rif, Rac1, and Rac2, have been proposed to regulate mDia2 functions in cells (Alberts et al., 1998; Peng et al., 2003; Pellegrin and Mellor, 2005; Ji et al., 2008; Lammers et al., 2008). We first tested which of these GTPases in their constitutively active form is the most effective in enhancing plasma membrane localization of coexpressed mDia2 Nt.

We found that only Rif significantly increased the PM index of GFP-Nt, whereas active RhoA, Cdc42, and Rac1 did not (Figure 4, A and B). Moreover, both active and inactive forms of Cdc42 lowered the plasma membrane localization of GFP-Nt, although the statistical significance of these differences was low (p = 0.01 and 0.03, respectively). These results are consistent with efficient targeting of Rif, but not RhoA, Cdc42, and Rac1, to the plasma membrane (Figure 4C) and do not contradict previous reports that all these GTPases interact with mDia2 (Alberts et al., 1998; Peng et al., 2003; Pellegrin and Mellor, 2005; Ji et al., 2008; Lammers et al., 2008). We did not detect strong effects of dominant-negative Rif on GFP-Nt localization (Figure 4B). Although from visual inspection it sometimes appeared that Cdc42 and Rac1 also enhanced plasma membrane targeting of GFP-Nt, this effect might be due to changes in cell morphology caused by the overexpression of these GTPases, as monomeric red fluorescent protein (mRFP1) also showed equivalent membrane enrichment (Figure 4A). Because we normalized the GFP membrane/cytoplasm intensity ratio against the mRFP1 membrane/cytoplasm ratio during calculation of the PM index, we took these changes into account to determine the actual degree of plasma membrane localization.

In contrast to the entire N terminus of mDia2, the plasma membrane localization of the shorter construct GFP-BD-G-DID was enhanced not only by constitutively active Rif, but also by Cdc42 and RhoA, although Rif still had the greatest effect (Supplemental Figure 3). These data show that Rif is the most relevant GTPase targeting the mDia2 N terminus to the membrane and suggest that the presence of DD-CC confers added specificity to mDia2 for GTPase binding.

Role of N-terminal domains of mDia2 in Rif-dependent membrane targeting

Having established that Rif causes the greatest increase in plasma membrane targeting of mDia2 Nt, we decided to further characterize the interactions of Rif with N-terminal domains of mDia2. Coexpression of constitutively active Rif with various N-terminal mDia2 constructs produced several unexpected and quite striking results, along with some expected findings (Figure 5). In contrast to the increased targeting of GFP-Nt and GFP-BD-G-DID (Figure 4), Rif unexpectedly had no effect on the localization of GFP-G-DID, which remained mostly cytoplasmic (Figure 5). These findings supported our earlier conclusion that the G-DID alone is insufficient to mediate membrane binding.

Surprisingly, the plasma membrane localization of BD-containing constructs was significantly increased by coexpression with active Rif (Figure 5). This observation is illustrated, for example, by the lack of response of G-DID to Rif (Figure 5) in contrast to a prominent response of BD-G-DID (Figure 4). Even isolated GFP-BD responded to the presence of active Rif with a slight but significant increase in membrane localization (p = 0.009). Additionally, Rif greatly enhanced membrane localization of the GFP-BD-G construct. Interestingly, active Rif also decreased the nuclear localization of GFP-BD and, even more, that of GFP-BD-G (Figure 5A), further supporting the idea that the response of these constructs to Rif was specific. The addition of the DID to BD-G, however, did not result in a further increase of membrane targeting in the presence of Rif (Figure 5B), suggesting that Rif enhances membrane localization of these constructs through an indirect mechanism. Active Rif also increased the plasma membrane localization of the GTPase-binding mutant GFP-Nt-S184E to an extent similar to that of GFP-BD, implying that Nt-S184E is likely targeted to the membrane by the BD. Together these results suggest that in cells the BD can respond to the presence of active Rif at the membrane and even confer this sensitivity to the G region.

The potentiating effect of DD-CC on membrane targeting detected in the absence of ectopically expressed Rif was even more striking in the presence of active Rif (Figure 5B). Thus, the PM index of GFP-Nt in the presence of Rif was significantly higher than that of GFP-BD-G-DID (Figure 5B). Similarly, in the absence of the BD, the membrane enrichment of GFP-NtΔBD was substantially higher as compared to GFP-G-DID in active Rif-expressing cells. This effect was likely dependent on G-region binding to the GTPase, because membrane localization of GFP-DID-DD-CC, which lacks the G region, was not significantly increased by active Rif (Figure 5B). These findings are consistent with the idea that the addition of DD-CC enhances the ability of the G-DID to respond to plasma membrane-bound Rif.

Direct interaction of Rif with mDia2 requires both G and DID, but not BD

One possibility for how Rif may enhance membrane localization of BD-containing mDia2 constructs is through direct binding to the BD. We tested this possibility by performing protein-protein binding assays with purified GST-Rif and various MBP-mDia2 constructs (Figure 6). To date, this interaction has been tested only by yeast two-hybrid screen and coimmunoprecipitation from mammalian cell lysates, which may reflect both direct and indirect binding patterns (Pellegrin and Mellor, 2005). Furthermore, the multidomain fragments of mDia2 (1-297 and 47-800) that were used for these binding assays do not allow one to separate the contributions of individual mDia2 domains to this interaction.

Our binding assays showed that MBP-Nt and MBP-NtΔBD bound GST-RifQ75L to a similar extent (Figure 6, A and B), suggesting that the BD does not enhance Rif binding in vitro. Thus, the stronger plasma membrane localization of GFP-BD-G-DID than of GFP-G-DID in Rif-expressing cells was likely mediated by another mechanism, distinct from direct binding of the BD to the GTPase. We also found a clear interaction between GST-RifQ75L and MBP-BD-G-DID (Figure 6B) but weak interaction with MBP-BD-G, detectable only at longer exposure times (unpublished data). These results suggest that the DID is necessary for strong binding of mDia2 to Rif, as previously shown for the mDia1-RhoC interaction. In addition, the fact that both NtΔBD and BD-G-DID bind Rif implies that the domains common to these constructs, namely G-DID, mediate the interaction with Rif GTPase.

DISCUSSION

Accumulating evidence in various systems converges on the idea that multiple inputs regulate protein activity and subcellular localization, a concept referred to as coincidence detection. The regulation of actin filament nucleation may also follow this scheme. For example, N-WASP, an autoinhibited regulator of the Arp2/3 complex, can be cooperatively activated and recruited to the membrane by interactions with Cdc42 through its GBD, phosphoinositides through its adjacent BD (Prehoda et al., 2000), and SH3 domain-containing proteins through its proline-rich region (Takenawa and Suetsugu, 2007; Derivery and Gautreau, 2010).

The regulation of mDia formins resembles that of N-WASP, as they are also autoinhibited proteins regulated by small GTPases cooperating with other coactivators (Seth et al., 2006; Dominguez, 2010). Protein interaction with phospholipids at the membrane frequently plays a role in coincidence detection, but its importance is poorly understood for mDia formins, and in particular for mDia2. Here, we found that interactions of mDia2 with GTPases and phospholipids contribute to its localization at the plasma membrane in an actin-independent manner and that this activity is mediated by the coordinated actions of several N-terminal domains in a previously unappreciated manner.

Our main finding is that the BD plays an essential role in plasma membrane targeting of the Nt of mDia2. Its effect appears to be specific, as the C-terminal basic stretch adjacent to the DAD (Wallar et al., 2006) is insufficient to recruit the Ct of mDia2 to the plasma membrane. Interestingly, a computer algorithm that searches for potential unstructured membrane-binding sites in protein sequences (Brzeska et al., 2010) also identifies the BD of mDia2 as a potential membrane-binding site.

Protein-lipid interactions commonly rely on electrostatic forces and hydrophobic interactions, consistent with the amphipathic nature of membrane phospholipids. Electrostatic interactions likely contribute to the association of the BD with the plasma membrane in vivo: first, because the deletion of the highly basic stretch within the first 37 amino acids of the BD significantly impairs membrane targeting of mDia2 fragments, and second, because the BD binds to acidic phospholipids in vitro. The apparent broad specificity of the BD for acidic phospholipids is reminiscent of the regulation of the WAVE complex, an activator of the Arp2/3 complex, by a range of charged acidic phospholipids (Lebensohn and Kirschner, 2009). The remainder of the BD contains two clusters of hydrophobic amino acids, predicted to form amphipathic helices. Deletion of these clusters decreases membrane targeting, suggesting that they also contribute to plasma membrane binding. These putative helices could promote membrane binding by several nonexclusive mechanisms: formation of a single folding unit with the GBD for GTPase binding; insertion into the membrane bilayer, as has been suggested for the BD of mDia1 (Ramalingam et al., 2010); clustering of basic amino acids into a common membrane-binding interface; or interaction with some membrane-associated proteins.

Although the BD is important for membrane targeting, it is not solely responsible for the strong binding of the mDia2 Nt. Generally, proteins activated by small GTPases are thought to be also recruited to the membrane by these GTPases. Indeed, RhoA-dependent recruitment of mDia2 to the cytokinetic ring has been recently demonstrated (Watanabe et al., 2010). The identity of the small GTPase targeting mDia2 to the plasma membrane in interphase remained unclear, however. Among several GTPases reported to interact with mDia2 (Alberts et al., 1998; Peng et al., 2003; Pellegrin and Mellor, 2005; Wallar et al., 2007; Ji et al., 2008; Lammers et al., 2008), only Rif potently and specifically enhanced the membrane localization of mDia2 constructs, suggesting that RhoA, Cdc42, and Rac1 may target mDia2 to other subcellular locations. Indeed, we have observed that overexpressed Rif is much more enriched at the membrane than are other GTPases, all of which are believed to interact with the membrane through prenylation of their C-terminal CAAX motifs. Thus, additional mechanisms may be involved in enhancing the membrane localization of Rif.

The structural basis for the interaction of mDia formins with GTPases is best known for the mDia1-RhoC complex, the crystal structure of which has been determined (Otomo et al., 2005; Rose et al., 2005). It showed that both the G region and the DID of mDia1 make specific contacts with the GTPase. Similar contacts were observed for a complex of an mDia2-mimicking mutant of mDia1 and Cdc42 or Rac1 (Lammers et al., 2008), although the structure of the actual mDia2 with any GTPase has not yet been determined. The biochemical analysis of the mDia2-Rif interaction (Pellegrin and Mellor, 2005) did not focus on whether both the G region and DID were required for binding. Here, we used purified proteins to demonstrate a direct interaction between active Rif and G-DID-containing constructs of mDia2, whereas the interaction of the fragment BD-G with Rif was much weaker. Thus, both G and DID of mDia2 are needed for optimal binding to Rif, which is analogous to the interactions of mDia1 with Rho family GTPases. Despite the ability of the BD to enhance the plasma membrane localization of the N-terminal mDia2 constructs in a Rif-dependent manner in cells, however, the BD is not involved in the direct interaction between the N terminus of mDia2 and Rif. These findings suggest that Rif indirectly enhances the recruitment of the BD to the plasma membrane in cells, for example, by changing the plasma membrane composition through other effectors or signaling pathways.

Although both G and DID participate in GTPase-dependent targeting of mDia2 to the plasma membrane, surprisingly, they are not sufficient, as the construct G-DID fails to localize appreciably to the plasma membrane even in Rif-expressing cells. The addition of DD-CC to G-DID, however, rescues plasma membrane targeting, possibly through dimerization, which allows for multivalent binding (increased avidity) of the BD-G-DID module. If this idea is correct, the dimerization may require both the DD and CC domains, as the removal of the CC from Nt severely decreases plasma membrane targeting. In mDia1, however, the DD is sufficient to mediate dimerization in vitro (Otomo et al., 2005), suggesting that the NtΔCC of mDia2 may also be a dimer. Another possibility is that the DD-CC-containing region has membrane-targeting capabilities of its own, as proposed for mDia1 (Copeland et al., 2007). Consistent with this idea, DID-DD-CC of mDia2 localizes to the cytokinetic ring by interacting with anillin (Watanabe et al., 2010). The N-terminal region of mDia2 containing partial DID, DD, and CC is also involved in the Abi1-dependent stabilization of mDia2 at filopodial tips (Yang et al., 2007). The inability of DD-CC or DID-DD-CC to accumulate at the plasma membrane, however, is not consistent with this possibility or with a scenario in which DD-CC-containing constructs dimerize with endogenous mDia2. Therefore, we currently favor the idea that the DD-CC module, in addition to dimerization, may cause a conformational change that allows for better binding of the GTPase by G-DID or improves binding of another target (for instance, Abi1) by N-terminal domains. Consistent with this idea, it has been found recently that the N terminus of mDia1 correctly interacts with its C terminus only when the N terminus contains the CC domain (Nezami et al., 2010; Otomo et al., 2010).

Together, our results suggest a model for a mechanism of mDia2 targeting with implications for its activation (Figure 7). We propose that the BD, which is expected to be accessible in the autoinhibited conformation of mDia2, mediates initial binding to the membrane through electrostatic, and possibly also hydrophobic, interactions. This initial binding allows mDia2 to linger at the plasma membrane until it encounters active Rif. Next, weak binding of Rif to the G region causes the displacement of the DAD from the DID, as proposed previously, which would allow the GTPase to engage the DID to form a more stable complex at the membrane. The role of the DD-CC module in mDia2 is to dimerize and optimally arrange the N-terminal mDia2 domains for efficient Rif binding and/or for engagement of additional targeting molecules.

FIGURE 7: Step 1: Phospholipid binding. The initial targeting event occurs while mDia2 is still autoinhibited, yet its BD is accessible to bind acidic phospholipids of the plasma membrane through electrostatic interactions. This transient binding allows mDia2 to linger at the plasma membrane until it encounters active GTPase Rif there. Step 2: Weak Rif binding. Active Rif binds to the G region of mDia2. Now, the mDia2 dimer is additionally attached to the plasma membrane via weak interaction with the membrane-associated active Rif. The BD-G-bound Rif begins to displace the DAD peptide of the C terminus from the DID (black arrows). Step 3: Strong Rif binding and activation. Rif, possibly in concert with additional coactivator(s), causes disruption of the DID/DAD bond. This event allows the DID to bind Rif, resulting in a more stable association of mDia2 with the membrane, and also relieves autoinhibition of mDia2 to allow the FH1-FH2 domains to nucleate and elongate actin filaments.

Thus, mDia2 targeting, and possibly activation as well, occurs through extensive cooperation of all N-terminal domains, which together serve as a coincidence detection module recognizing at least two inputs: membrane phospholipids and a small GTPase. A similar mechanism may also be used to some extent by other mDia formins. Thus, mDia1 has a similarly charged, albeit slightly shorter, BD at the N terminus (Ramalingam et al., 2010), whereas the corresponding region of mDia3 has a slightly lower predicted pI of ∼8, because it lacks the first cluster of basic amino acids. In contrast, other formins containing an N-terminal GBD (Schonichen and Geyer, 2010) are not associated with an upstream basic sequence, suggesting that the BD-G module is specific for Diaphanous-related formins.
MATERIALS AND METHODS

Confocal microscopy and image analysis
Confocal microscopy of HeLa cells coexpressing mRFP1 and GFP-tagged mDia2 proteins was performed using a Leica (Richmond, IL) LCS laser-scanning confocal microscope with a 63× oil immersion objective. Images were acquired using a 122-μm pinhole and were averaged over four frames in the 1024:1024 pixel format. Midplane optical sections were used for presentation and quantification. Image brightness was linearly adjusted in Adobe Photoshop for optimal presentation.
The PM index was calculated using the following equation:

PM index = [(GFP-mDia2)m/(GFP-mDia2)c]/(RFPm/RFPc) − 1,

where (GFP-mDia2)m and (GFP-mDia2)c are the average GFP fluorescence intensities at the plasma membrane and in the cytoplasm, respectively, and RFPm and RFPc are the average mRFP1 fluorescence intensities at the plasma membrane and in the cytoplasm, respectively. Intensities were calculated using MetaMorph 7.5 imaging software (Molecular Devices, Sunnyvale, CA). To determine the average fluorescence intensity of GFP and RFP at the plasma membrane, (GFP-mDia2)m and RFPm, respectively, a three-pixel-wide line was drawn along the entire margin of the cell, and the intensity profile was obtained along this line using the MetaMorph line scan tool. To determine the average fluorescence intensity in the cytoplasm, (GFP-mDia2)c and RFPc, we used the MetaMorph multiline region tool to draw an irregularly shaped region between the nucleus and the membrane that includes only the cytoplasmic compartment. The obtained values for (GFP-mDia2)m, (GFP-mDia2)c, RFPm, and RFPc were used to calculate the PM index as shown earlier in the text. The PM index equals zero for cytoplasmic proteins, whereas plasma membrane-targeted constructs have a positive PM index. At least ten cells per construct were analyzed, except for cotransfection of Rif with GFP-DID-DD-CC, for which seven cells were analyzed. To calculate the PM indices of expressed GTPases, the intensity of the immunofluorescence signal was used instead of GFP fluorescence. Statistical significance was determined using Student's t test in Microsoft Excel with a two-tailed heteroscedastic comparison; box-and-whisker plots were generated in SigmaPlot.
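For concreteness, here is a minimal Python sketch of this calculation, assuming the ratio-of-ratios form reconstructed above; the intensity values are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch of the PM index: the GFP membrane/cytoplasm intensity ratio
# normalized to the mRFP1 (cytoplasmic marker) ratio, minus 1. Returns ~0 for
# a purely cytoplasmic protein and positive values for membrane-targeted
# constructs. All numbers below are hypothetical.

def pm_index(gfp_mem: float, gfp_cyt: float, rfp_mem: float, rfp_cyt: float) -> float:
    """PM index from average line-scan (membrane) and region (cytoplasm) intensities."""
    return (gfp_mem / gfp_cyt) / (rfp_mem / rfp_cyt) - 1

# A membrane-enriched GFP construct with a mostly cytoplasmic mRFP1 signal:
print(pm_index(gfp_mem=300.0, gfp_cyt=100.0, rfp_mem=110.0, rfp_cyt=100.0))
# -> ~1.7; a cytoplasmic construct with matched ratios would give ~0.0
```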
Constructs
Truncation mutants of mDia2 were generated by PCR amplification from the full-length mDia2 template (Yang et al., 2007) and subcloned into pEGFP-C1 or -C2 vectors using either SacI/SalI or EcoRI/SalI restriction sites to produce GFP-tagged proteins. The BD of mDia2 (aa 1-91) was also subcloned into the pGEX-5x-3 vector (GE Healthcare, Piscataway, NJ) using BamHI/SalI restriction sites to produce a GST-tagged protein. The point mutation was introduced using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA). Myc-tagged Rif Q75L and T33N in the pcDNA3 vector were a gift from Harry Mellor (University of Bristol); Myc-tagged RhoA A14V in the pEXV vector, Myc-tagged Rac1 Q61L in the pRK5 vector, and HA-tagged Cdc42 G12V in the pcDNA vector were gifts from Margaret Chou (University of Pennsylvania); HA-Cdc42 T17N was a gift from Wei Guo (University of Pennsylvania); and mRFP1-N1 was a gift from Roger Tsien (University of California at San Diego) (Campbell et al., 2002).
Cell culture, transfection, and reagents
HeLa cells were maintained in culture medium containing 45% DMEM, 45% F-10, 10% fetal bovine serum (ThermoScientific, Waltham, MA), penicillin, and streptomycin. Cells were transfected overnight using Lipofectamine LTX or Lipofectamine 2000 (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions, replated onto laminin-coated coverslips (20 μg/ml; Sigma, St. Louis, MO), and fixed ∼3 or 24 h after replating with 4% paraformaldehyde in PBS. No significant difference in plasma membrane localization of several mDia2 constructs was found between the two conditions. Latrunculin B (Calbiochem, EMD Chemicals, Gibbstown, NJ) was added to the culture medium at 2 μM for 30 min. Immunostaining of GTPases was performed using mouse monoclonal Myc (clone 9E10; Abcam, Cambridge, MA) or HA antibody (Covance, Princeton, NJ), followed by Cy5-conjugated anti-mouse antibody (Jackson ImmunoResearch, West Grove, PA). For the Western blot analysis shown in Supplemental Figure S2, lysates were prepared from transfected HeLa cells by addition of 1% Triton X-100 in PBS. After low-speed centrifugation to remove nuclei, supernatants were mixed with 6X SDS buffer and boiled for 3 min. SDS-PAGE was performed in the NuPAGE system (Invitrogen), using Bis-Tris gels. Immobilon-P transfer membranes (Millipore, Billerica, MA) were blocked with a 5% solution of nonfat dry milk in TBS-T (Tris-buffered saline, Tween-20) and probed with rabbit GFP antibody (Abcam).
Rif binding assay
GSH-Sepharose beads (60 μl wet volume, 20 μl dry volume) bearing 100 pmol of GST-RifQ75L were loaded with 1 mM GTP or GTP-γ-S in 25 mM Tris (pH 7.5), 100 mM NaCl, 20 mM EDTA for 30 min at 30°C and were stabilized by 70 mM MgCl2. Control beads with immobilized GST were mixed with empty GSH beads to equalize inputs. Twenty microliters (dry volume) of GST or GST-RifQ75L beads were incubated with 200 pmol of MBP-mDia2 or MBP control overnight at 4°C with agitation in buffer containing 50 mM Tris (pH 7.5), 0.1 M NaCl, 5 mM MgCl2, 1 mM DTT. Subsequently, buffer with 1.0% Triton X-100 was used to wash beads five times at 4°C, 10 min each, to remove nonspecifically bound proteins. Proteins were eluted with SDS buffer, boiled for 3 min at 100°C, and analyzed by SDS-PAGE or Western blot. Two separate experiments are shown in Figure 6. For Western blot, bead samples were diluted in SDS buffer 100-fold before loading onto the SDS-PAGE gel. A pipetting error inherent to large dilutions is the likely reason why band intensities for GST-Rif in the anti-GST blot differ between binding reaction samples. MBP rabbit antibody was a gift from Mecky Pohlschroder, University of Pennsylvania. GST antibody (Amersham) was used in conjunction with secondary anti-donkey HRP-conjugated antibody (Santa Cruz Biotechnology, Santa Cruz, CA).
Protein-lipid overlay assay
PIP microstrips (Echelon Biosciences, Salt Lake City, UT) were processed according to the manufacturer's instructions with some modifications. Briefly, strips were blocked with 10% nonfat dry milk in TBS-T buffer (10 mM Tris-HCl, pH 7.4; 150 mM NaCl; 0.1% [vol/vol] Tween-20) and incubated with 50 nM GST-BD in 2% milk in TBS-T. After extensive washes with TBS-T, strips were probed with goat polyclonal GST antibody (GE Healthcare), washed, and probed with anti-goat HRP-conjugated secondary antibody (Santa Cruz Biotechnology), followed by final TBS-T washes. Secondary antibody was detected using ECL Plus reagents (GE Healthcare).
Subcellular fractionation
Membrane and cytoplasmic cell fractions were separated as described (Chandra Roy et al., 2009) with minor modifications. Briefly, HeLa cells were washed with ice-cold PBS and scraped in hypotonic buffer (10 mM Tris-HCl, pH 7.5; 1 mM EGTA; 1 mM MgCl2) supplemented with 1 mM PMSF and protease inhibitor cocktail (one tablet per 10 ml; Roche, Indianapolis, IN). Cells were disrupted by 10 passages through a 22-gauge needle, and lysates were clarified by centrifugation at 4300 × g for 10 min. Supernatants were subsequently centrifuged at 50,000 × g for 1 h to produce cytosolic and membrane fractions. The membrane fraction was washed three times with the hypotonic buffer and dissolved in 1% SDS in buffer A (50 mM Tris-HCl, pH 7.5; 140 mM NaCl; 10% glycerol; 1% Triton X-100) supplemented with 1 mM PMSF and protease inhibitor cocktail. The resulting cytoplasmic and membrane fractions were loaded on a NuPAGE 10% Bis-Tris gel (Invitrogen) in increasing volumes for the membrane fraction. GFP-mDia2 proteins were detected by Western blotting using GFP antibody (Abcam), and the efficiency of fractionation was confirmed by probing gels with α-tubulin antibody (Sigma) and IRSp53 monoclonal antibody (a gift from Giorgio Scita, IFOM-IEO [FIRC Institute of Molecular Oncology/European Institute of Oncology], Milan, Italy). Secondary HRP-tagged antibodies (GE Healthcare) were detected using the ECL Plus reagents (GE Healthcare). Protein band intensities were measured using Adobe Photoshop. The intensity of the band corresponding to the cytoplasmic fraction was compared to a linear fit for the intensities of graded membrane fractions to estimate the percentage of the GFP-fusion protein in the membrane fraction.
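One plausible reading of this quantification scheme can be sketched as follows: membrane-fraction bands loaded at graded volume equivalents are fit with a line to obtain signal per volume equivalent, and the single cytoplasmic band is expressed in the same units. The intensity values below are hypothetical, not data from this study.

```python
# Hedged sketch of estimating the membrane-associated percentage of a GFP
# fusion from band intensities, as described above. Values are hypothetical.
import numpy as np

mem_loaded = np.array([1.0, 2.0, 4.0])    # volume equivalents loaded per lane
mem_bands = np.array([12.0, 25.0, 47.0])  # membrane band intensities (a.u.)
cyt_band, cyt_loaded = 60.0, 1.0          # single cytoplasmic lane

mem_per_equiv = np.polyfit(mem_loaded, mem_bands, 1)[0]  # slope: a.u. per equivalent
cyt_per_equiv = cyt_band / cyt_loaded

pct_membrane = 100.0 * mem_per_equiv / (mem_per_equiv + cyt_per_equiv)
print(f"Estimated membrane fraction: {pct_membrane:.0f}%")
```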
Protein expression and purification
Escherichia coli BL21 star (DE3; Invitrogen) transformed with GST-BD was inoculated into 200 ml of Luria-Bertani (LB) medium and grown overnight at 37°C; then 1.2 l of LB medium was added to the culture and grown to OD600nm = 1.0. After stimulation of protein expression with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG) for 3 h at 28°C, bacteria were pelleted at 5000 × g, slowly resuspended by stirring at 4°C in 30 ml of buffer T (50 mM Tris-HCl, pH 7.7; 100 mM NaCl; 1 mM DTT; 45 mg of protease inhibitors [Sigma]; and 1 mM PMSF), ultrasonicated, and clarified by centrifugation at 15,000 × g for 15 min. GSH-Sepharose beads (0.8 ml; Amersham Pharmacia, Piscataway, NJ) were incubated with GST-BD-containing supernatant and washed five times in 40 ml of buffer T. GST-BD was eluted from beads with 20 mM glutathione (pH 8.6) and dialyzed overnight against buffer T. Concentration was determined by the Bradford assay. The E. coli BL21 RP strain (a gift from Wei Guo) was used for expression of MBP-mDia2 proteins and GST-RifQ75L. Protein expression was stimulated by adding 0.5 mM IPTG to 0.2 l of bacterial culture at OD600nm = 0.8 followed by overnight incubation at 18°C. Bacterial lysates were prepared as described earlier in this article, except that DNase I at 33 μg/ml (Sigma) was added to the cell suspension prior to sonication. Following centrifugation at 10,000 × g for 20 min, 20-ml supernatants containing MBP-mDia2 proteins were incubated with approximately 1.0 ml of washed amylose resin (New England Biolabs, Ipswich, MA), and those with GST-RifQ75L were incubated with 1.0 ml of washed GSH-Sepharose (Amersham) overnight at 4°C with agitation. Beads with bound protein were extensively washed with 0.1 M NaCl, 50 mM Tris pH 7.5, 1 mM DTT and then five times in high-salt buffer (0.5 M NaCl; 50 mM Tris, pH 7.5; 1 mM DTT). GSH beads with bound proteins were also washed with buffer containing 1% Triton X-100. MBP and MBP-mDia2 proteins were eluted with 10 mM maltose solution, and concentration was measured using the Bradford reagent (BioRad, Hercules, CA).
Dose response of a novel exogenous ketone supplement on physiological, perceptual and performance parameters
Background Interest in the health, disease, and performance impact of exogenous ketone bodies has rapidly expanded due to their multifaceted physiological and signaling properties, but our understanding is limited by isolated analyses of individual types and dose/dosing protocols. Methods Thirteen recreational male distance runners (24.8 ± 9.6 years, 72.5 ± 8.3 kg, VO2max 60.1 ± 5.4 ml/kg/min) participated in this randomized, double-blind, crossover design study. The first two sessions consisted of a 5-km running time trial familiarization and a VO2max test. During subsequent trials, subjects were randomly assigned to one (KS1: 22.1 g) or two (KS2: 44.2 g) doses of beta-hydroxybutyrate (βHB) and medium chain triglycerides (MCTs) or flavor-matched placebo (PLA). Blood R-βHB, glucose, and lactate concentrations were measured at baseline (0 min), post-supplement (30 and 60 min), and post-exercise (+ 0 min, + 15 min). Time, heart rate (HR), rating of perceived exertion (RPE), affect, respiratory exchange ratio, oxygen consumption (VO2), carbon dioxide production, and ventilation were measured during exercise. Cognitive performance was evaluated prior to and post-exercise. Results KS significantly increased R-βHB, with more potent and prolonged elevations in KS2, illustrating administration and dosing effects. R-βHB was significantly decreased in KS1 compared to KS2, illustrating a dosing and exercise interaction effect. Blood glucose was elevated post-exercise but was unchanged across groups. Blood lactate significantly increased post-exercise and was augmented by KS administration. Gaseous exchange, respiration, HR, affect, RPE, and exercise performance were unaltered by KS administration. However, clear responders and non-responders were indicated. KS2 significantly augmented cognitive function in pre-exercise conditions, while exercise increased cognitive performance for KS1 and PLA to pre-exercise KS2 levels. Conclusion The novel βHB + MCT formulation had a dosing effect on R-βHB and cognitive performance and an administration effect on blood lactate, while not influencing gaseous exchange, respiration, HR, affect, RPE, or exercise performance.
Introduction
Ketone bodies are metabolic end products of lipid metabolism traditionally produced during fasting/starvation, severe caloric restriction, or low-carbohydrate diets [1]. Endogenous ketone production is predominantly regulated via insulin and glucagon levels acting on adipose and hepatic tissue, respectively [2]. However, dietary restriction can be difficult for some to follow [1]. Consequently, exogenous ketone bodies have been developed to circumvent traditional barriers to elevating circulating ketone bodies [1,3,4]. Currently available exogenous ketone bodies include medium chain triglycerides (MCT), beta-hydroxybutyrate (βHB)-salts and/or amino acids, and esters, all commercially available and generally recognized as safe (GRAS approved) by the FDA. MCTs are six to twelve carbons in length and typically derived from natural food products [1,5]. MCTs enter hepatic portal circulation and are rapidly metabolized to acetyl CoA, resulting in subsequent hepatic ketogenesis and elevations in circulating βHB. βHB-salts or -amino acids are synthetically derived βHB molecules that are stabilized via chemical bonds with electrolytes (primarily sodium) and/or amino acids. βHB-salts and/or -amino acids directly elevate circulating levels of βHB without hepatic metabolism. Esters have a synthetic 1,3-butanediol backbone esterified to βHB, as the 1,3-Butanediol-βHB Monoester, or to AcAc, as the 1,3-Butanediol Acetoacetate Diester. Gastric esterases cleave βHB and/or AcAc from the 1,3-butanediol backbone, directly elevating βHB and/or AcAc. The 1,3-butanediol backbone enters hepatic circulation and is subsequently catabolized via alcohol and aldehyde dehydrogenases into βHB. Consequently, all exogenous ketone bodies result in subsequent elevations in circulating ketone bodies, but with diverse kinetics, tolerance, and application impact [1,3,6,7]. Sport and/or athletic practitioners have been interested in exogenous ketone bodies for almost a half-century, dating back to when MCTs were explored for their potential metabolic and performance impact in isolation or in combination with other exogenous nutrients, under the hypothesis that providing additional and diverse metabolite availability may augment sport-related performance outcomes [5]. Mixed results and acute gastrointestinal barriers attenuated interest. However, in 2016, Cox et al. seminally described that 1,3-Butanediol-βHB Monoester administration regulated systemic and tissue-specific metabolism, resulting in physical performance enhancement in athletic cyclists [8]. Concurrently, ketone bodies were demonstrated to induce multifaceted effects across physiological and cellular signaling pathways, including metabolic, inflammatory, oxidative stress, epigenetic, and immune regulation [9]. Consequently, exogenous ketone bodies are now being explored for various health, disease, and performance applications [7,10-12].
Study design
A randomized, double-blind, placebo-controlled, crossover design was employed, consisting of five separate laboratory visits. During their first visit to the laboratory, each participant underwent a familiarization session during which they were informed of all experimental procedures and familiarized with all performance measures to reduce the possibility of a learning effect. The familiarization trial was identical to the experimental trials except that participants consumed no supplement prior to exercise. During the second visit, each participant's maximal oxygen consumption (VO2max) was determined using a progressive multistage treadmill running protocol.
During subsequent visits, the three main experimental trials consisted of a 5-km running time trial (TT) with cognitive tests before (30 min) and after (+ 5 min), and were completed in a randomized (www.randomizer.org), counterbalanced sequence separated by 7 days. On experimental days, participants consumed either one (KS1: 22.1 g) or two (KS2: 44.2 g) servings of the ketone supplement (βHB + MCT) or a flavor-matched placebo (PLA) drink 60 min prior to performing a 5-km running TT on a treadmill. Capillary glucose, lactate, and ketones were measured at baseline, 30 min post supplement ingestion (pre-cognitive test battery), 60 min post supplement ingestion, immediately following the TT (+ 0 min), and 15 min following the TT (immediately following the cognitive test battery). Other variables that were measured include (a) 5-km running time, (b) RPE (RPE-Overall; RPE-Chest; RPE-Legs), (c) heart rate, (d) affect, (e) session RPE, (f) session affect, (g) 500-m split times during the 5-km TT, and (h) reaction time as well as response accuracy for the Stroop Word-Color Test and Switching Task. RPE, heart rate, and affect were taken every 500 m during the 5-km TT. In addition, during the TT, oxygen consumption (VO2), carbon dioxide production (VCO2), minute ventilation (VE), and respiratory exchange ratio (RER) were assessed and derived from indirect calorimetry (Fig. 1). Testing sessions were conducted within the Exercise Science Laboratory of Grove City College at the same time each day at a room temperature between 19-21 °C and a relative humidity of 35-40%.
Participants
Thirteen recreational male distance runners participated in this study (Table 1). Participants were recruited directly from local running clubs and by community advertisement. Participants were included if they: (1) completed a 5-km run in under 25 min within the last 3 months, (2) were running a minimum of 32 km per week, (3) were between 18 and 49 years old, (4) had > 2 years of running experience, and (5) were consuming a Standard American Diet [26]. Participants were excluded from the study if they: (1) had a history of smoking, (2) had any known metabolic (e.g., diabetes) or cardiovascular disease, (3) had orthopedic, musculoskeletal, neurological, or psychiatric disorders and/or any medical conditions that prohibit exercise, (4) used any prescription medications, or (5) were following a low-carbohydrate or ketogenic diet. Participants were prohibited from using any ergogenic aids for one month preceding the study and were asked to refrain from taking any performance-enhancing supplement(s) during the course of the study. Participants were instructed to refrain from caffeine and alcohol consumption for 48 h, from racing or training for 24 h, and from food and drink for 3 h before each exercise test. Before enrolling in the investigation, participants were fully informed of the risks and discomforts associated with the experiments prior to giving their written informed consent to participate. The experimental protocol was approved by the Institutional Review Board of Grove City College prior to implementation.
Pretrial preparation
Participants were instructed to maintain their usual training frequency during the study intervention without increasing or decreasing the training load. The participants were instructed to maintain a training log (mode, duration, and intensity of each workout) for 1 week before the first experimental trial. They were provided with a copy of their pre-trial log and instructed to have the same training routine during the intervention period.
In addition, participants were asked to record their training every week during the study (mode, duration, and intensity of each workout). Furthermore, to quantify each participant's training session intensity, participants were asked to record their session RPE (sRPE) after every training session [26] (pre-trial and within trial), using the OMNI Walk/Run 0-10 Perceived Exertion Scale [27].
Training load for each session was calculated as sRPE × session duration (minutes) [28]. The sum of each session's training load provided the quantification of weekly training load. Training load was assessed each week to measure compliance. Furthermore, participants' habitual pre-trial and within-trial dietary intake was assessed weekly using a 3-day weighed dietary record, consisting of 2 weekdays and 1 weekend day. Participants were provided with a copy of their pre-trial log and instructed to maintain the same dietary intake during the remainder of the study. During the familiarization session, participants were given precise oral and written instructions individually on how to accurately record amounts and types of food and beverages. Participants were provided with a digital portable scale (Ozeri ZK 14-S Pronto, San Diego, CA) and instructed to weigh all food items separately if possible or to estimate the amounts. Diet information was entered into a commercial nutrient analysis software (Nutritionist Pro™, Axxya Systems, Stafford, TX).
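As a worked illustration of the training-load bookkeeping described above, the following minimal Python sketch multiplies each session's sRPE by its duration and sums across the week; the session values are hypothetical, not participant data.

```python
# Minimal sketch of the session-RPE training-load calculation described
# above: load = sRPE (0-10 OMNI scale) x session duration (minutes),
# summed over the week. All numbers are illustrative, not study data.

def session_load(srpe: float, duration_min: float) -> float:
    """Training load for one session in arbitrary units (AU)."""
    return srpe * duration_min

# One hypothetical training week as (sRPE, duration in minutes) pairs
week = [(4, 45), (6, 60), (3, 30), (7, 50)]

weekly_load = sum(session_load(srpe, dur) for srpe, dur in week)
print(f"Weekly training load: {weekly_load} AU")
```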
Familiarization and anthropometric measurements
At the first laboratory visit, all the experimental procedures were explained to the participants. The participants underwent an orientation involving practice of the 5-km TT and familiarization with the cognitive test battery, the various measurement instruments, equipment, affect measures, and perceived exertion. Affect was measured using a validated 11-point Feeling Scale [29], with participants informed that their responses should reflect the affective or emotional components of the exercise and not the physical sensation of effort or strain. The OMNI Walk/Run Perceived Exertion Scale [27] was used to measure the physical perceptions of exertion for the overall body (RPE-O), legs (RPE-L), and chest (RPE-C). Following the orientation session, anthropometric measures were obtained, including height (cm), weight (kg), fat-free mass (kg), and fat mass (% and kg). Height (cm) was measured using a physician's scale (Detecto, Webb City, MO). Participants' body mass (kg) and body composition (fat and lean mass) were measured using a Tanita bioelectrical impedance analyzer (BIA) (MC-980Uplus, Tanita Corporation of America, Arlington Heights, Illinois). Finally, the participants performed a 5-km familiarization trial on a motorized treadmill (Trackmaster TMX425C treadmill, Newton, KS).
Maximal aerobic capacity
On the second laboratory visit, participants performed an incremental test to exhaustion on a motorized treadmill (Trackmaster). Oxygen consumption (VO2) and carbon dioxide production (VCO2) were measured using an automated metabolic analyzer system (TrueOne 2400, ParvoMedics, Sandy, UT) calibrated prior to each exercise test using standard calibration gases (16% O2 and 4% CO2). Participants wore a Polar heart rate monitor (H10, Polar Electro, Kempele, Finland) during exercise to measure heart rate. After a thorough explanation of the experimental procedures, each participant was instructed to walk on the treadmill for 3 min as a warm-up at a self-selected speed (0% grade). Immediately following the 3-min warm-up, the speed was increased to 5-8 mph for 3 min (0% grade) to achieve the participant's comfortable running pace. After 3 min of running at 0% grade, the grade was increased 2.5% every 2 min throughout the test protocol while speed was kept constant. The treadmill test was terminated by the subject at the point of volitional exhaustion. At the end of the test, the highest average VO2 value recorded over a 30-s period of exercise was considered the participant's VO2max.
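The "highest 30-s average" criterion described above can be sketched as a rolling-window maximum; the sketch below assumes a fixed 5-s sampling interval from the metabolic cart, and all VO2 values are hypothetical.

```python
# Sketch of the VO2max criterion: the highest 30-s rolling average of the
# VO2 samples from the metabolic cart. Assumes 5-s sampling; values are
# hypothetical, not participant data.
import numpy as np

vo2 = np.array([48.0, 51.5, 53.0, 55.2, 57.8, 59.0, 60.3, 59.7, 58.9])  # ml/kg/min
samples_per_30s = 6  # 30 s / 5-s sampling interval

rolling = np.convolve(vo2, np.ones(samples_per_30s) / samples_per_30s, mode="valid")
print(f"VO2max = {rolling.max():.1f} ml/kg/min")
```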
Experimental protocol
On the next three visits, participants were randomly assigned to ingest one of three beverages 60 min before the 5-km TT. This timing and dosing strategy is based on our own pilot experiments (unpublished data) showing that capillary R-βHB concentration peaked at 60 min after ingestion of a single bolus of the supplement. The supplement used in this study consisted of βHB-salt + Medium Chain Triglyceride (KETO//OS 2.1 Orange Dream, Pruvit, Melissa, TX, USA). Participants consumed one (KS1: 22.1 g) or two servings (KS2: 44.2 g) of the ketone supplement (βHB + MCT) powder mixed with approximately 237 ml of cold (~ 6 °C) water. One serving of the supplement contained a calculated 7 g of racemic βHB salt (50% R-βHB and 50% S-βHB) and a reported 7 g of MCT. A complete list of the ingredients and the relative dose of each ingredient is provided in Additional file 1: Table S1. When receiving the PLA, participants consumed an equal amount of water with MiO Orange Tangerine Liquid Enhancer (0 mg caffeine, 0 kcal; Kraft Foods; MiO, Northfield, IL, USA). The ketone supplement and PLA drink were similar in volume, texture, and appearance. The taste of the drinks was slightly different, and there remains the possibility that the participants were able to identify the drinks. In order to ensure a double-blinded design, each drink was presented to participants in an opaque sports bottle. To avoid the placebo effect in the experimental trials, we did not inform the participants about the names of the drinks, and we presented all drinks as having similar ergogenic properties.
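For clarity, the dose arithmetic implied by the serving details above can be sketched as follows; it uses only the per-serving masses stated in the text and the stated 50/50 racemic split, and notes that commercial meters report only the R-isomer.

```python
# Back-of-envelope dose arithmetic from the serving details above. Assumes
# the stated 7 g racemic betaHB salt per serving splits 50/50 between R-
# and S-isomers; commercial ketone meters detect only R-betaHB.

SERVING_BHB_G = 7.0   # racemic betaHB salt per serving (g), per the text
SERVING_MCT_G = 7.0   # MCT per serving (g), per the text

for name, servings in [("KS1", 1), ("KS2", 2)]:
    bhb = SERVING_BHB_G * servings
    r_bhb = bhb * 0.5  # meter-detectable R-isomer under the 50/50 assumption
    mct = SERVING_MCT_G * servings
    print(f"{name}: {bhb:.1f} g betaHB salt ({r_bhb:.1f} g R-betaHB), {mct:.1f} g MCT")
```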
Blood sampling
Fingertip (capillary) blood samples for blood ketone (R-βHB; Precision Xtra, Abbott Diabetes Care Inc., Alameda, CA) and blood glucose (Precision Xtra, Abbott Diabetes Care Inc., Alameda, CA) concentrations were measured at baseline, 30 min post supplement ingestion (30 min; pre-cognitive test battery), 60 min post supplement ingestion (immediately before the start of the TT), immediately following the TT (+ 0 min), and 15 min following the TT (immediately following the cognitive test battery). Blood lactate concentration (Lactate Plus, Nova Biomedical) was assessed at baseline, 60 min post supplement ingestion, and immediately post exercise (+ 0 min). Samples were collected using a lancet after the fingertip was cleaned with an alcohol swab and dried. The first droplet was wiped away with a cotton swab to remove any alcohol, and the subsequent droplets were used for analysis.
5-km running time trial
To determine exercise performance, participants performed a 5-km running TT on a motorized treadmill (TMX425C treadmill; Trackmaster, Newton, KS, USA). Before the start of the run, participants completed a 5-min self-paced warm-up run. Participants were instructed to finish the run as fast as possible. The gradient was set at 0.0% grade. Participants were provided with feedback on the distance covered during each TT (at regular 500-m intervals) and were not informed of their overall performance time until completion of the study. During the 5-km TT, participants were permitted to adjust their speed as and whenever they saw fit via control buttons located on the treadmill. The speed indicator and timing devices were concealed from the participant's view throughout the TT. Therefore, participants regulated their treadmill pace according to their perceived exertion associated with the intensity of the exercise and their subjective feelings of their running capabilities [30]. Heart rate (Polar Electro, Kempele, Finland), RPE (RPE-Overall; RPE-Chest; RPE-Legs), and affect (Feeling Scale) were recorded at 500-m intervals during the 5-km TT. Ratings of perceived exertion and affect for the entire exercise session (session RPE and session affect) were obtained 5 min following the TT. Metabolic gases were continuously collected during the entire TT using a metabolic cart for assessment of RER, VO2, VCO2, VE, RR, and substrate oxidation.
Cognitive test battery
Thirty minutes post supplement ingestion (30 min) and five minutes after each 5-km TT (+ 5 min), participants performed a battery of cognitive tests to assess executive cognitive function. A familiarization test was performed during the first laboratory visit to reduce the possibility of a learning effect. During this time researchers thoroughly explained the different cognitive tests to the participants, but data were not recorded. Executive function was measured before and after exercise using a computerized automated neuropsychological assessment metric (ANAM®) test (ANAM-4, Vista Life Sciences, USA). The ANAM® is a brief, self-directed, computerized neuropsychological assessment battery that assesses neuropsychological functioning. The ANAM software has been shown to have test-retest reliability [31]. Before each cognitive test battery, time-keeping devices such as watches and cell phones were removed, and during the task, participants did not receive any feedback on performance or time lapsed. Participants were seated in a comfortable chair in a sound-insulated room. Testing was performed under optimal conditions (i.e., appropriate lighting, as quiet as possible, and isolation from unnecessary stimuli). An identical test battery was administered before and after each trial. The battery took approximately 10 min to complete. Participants were instructed to complete the battery as quickly and accurately as possible. For each test, reaction time (in milliseconds) and reaction time for only correct responses (accuracy) were collected.
The test battery consisted of the following validated tests: the Stroop Word-Color Test (congruent and incongruent) and the Switching Task (manikin and mathematical processing). The Stroop Test measures cognitive flexibility, processing speed, and executive function [32]. The cognitive mechanism involved in this task is directed attention: participants must manage their attention by inhibiting one response in order to do something else. For the congruent Stroop Word-Color Test, a series of XXXX's appeared on the computer screen in one of three colors ("red," "blue," or "green"). Participants were instructed to press the corresponding key (1 for "red," 2 for "green," and 3 for "blue") on the keyboard based on the color. For example, if the series of XXXX's appeared in red font, then participants were instructed to press 1 on the keyboard. For the incongruent Stroop Word-Color Test, a series of individual words ("RED," "GREEN," or "BLUE") appeared on the computer screen in a color that did not match the name of the color depicted by the word. Participants were instructed to press the response key on the keyboard assigned to the color of the word on the screen. For example, if the word "BLUE" written in red ink appeared on the computer screen, then participants were instructed to press 1 on the keyboard.
Next, the participants performed the Switching Task, which was designed to measure divided attention, mental flexibility, and executive function [33]. The Switching Task requires users to alternate between two tasks: Manikin and Mathematical Processing. Only one type of problem (Manikin or Mathematical Processing) appears on the computer screen at a time. For mathematical processing, participants were presented with a three-digit math equation (e.g., "5 + 4 − 2"); if the sum was greater than "5," they were instructed to click the right mouse button, and if the sum was less than "5," they were instructed to click the left mouse button. For the Manikin task, participants were presented with an animated character (manikin) holding a sphere in the left or right hand. If the manikin was holding the sphere in the right hand, participants were instructed to click the right mouse button, and if the manikin was holding the sphere in the left hand, participants were instructed to click the left mouse button. For each trial, the manikin shifts positions, so that it may be facing toward the viewer, away from the viewer, or to the side. The cognitive test battery comprised n = 10 participants for the Stroop Word-Color Test (congruent and incongruent) and n = 11 participants for the Switching Task (manikin and mathematical processing) due to technical issues in n = 3 and n = 2, respectively.
Statistical analysis
Statistical analyses were performed using SPSS version 24.0 (SPSS Inc., Chicago, IL). Statistical significance was set a priori at p < 0.05. Descriptive statistics were calculated for all variables. Normality and the absence of outliers were verified using the Shapiro-Wilk test, normality plots, and residual plots. Performance, physiological, and perceptual data collected during the 5-km TT (5-km running time, mean exercise heart rate, RER, VO2, VCO2, VE, RR, carbohydrate and fat oxidation rates, affect, RPE-Chest, RPE-Legs, RPE-Overall, session RPE, and session affect) were analyzed using a one-way repeated measures analysis of variance (ANOVA). A 3 (condition, KS1 vs KS2 vs PLA) × 10 (every 500 m) repeated measures ANOVA was conducted to assess the effect of time, treatment, and the interaction between time and treatment on heart rate, affect, RPE-Chest, RPE-Legs, RPE-Overall, and time covered at each 500-m interval during the 5-km TT. A 3 (condition, KS1 vs KS2 vs PLA) × 5 (rest, 30 min post ingestion, 60 min post ingestion, immediately post exercise, and 15 min post-TT) repeated measures ANOVA was conducted to assess the effect of time, treatment, and the interaction between time and treatment on capillary glucose and ketones. A 3 (condition, KS1 vs KS2 vs PLA) × 3 (baseline, 60 min post supplement ingestion, and immediately post exercise) repeated measures ANOVA was conducted to assess the effect of time, treatment, and the interaction between time and treatment on capillary lactate. A 3 (condition, KS1 vs KS2 vs PLA) × 2 (time, pre vs post) repeated measures ANOVA was conducted to assess the effect of time, treatment, and the interaction between time and treatment on the Stroop Word-Color Test and Switching Task. A one-way repeated measures ANOVA was used to analyze differences over time for training load and nutrient intake before and during the intervention. The smallest worthwhile change (SWC) was predetermined to be 0.65%. This is midway between the smallest worthwhile change in day-to-day variability in competitive middle-distance and distance runners [34] and the estimated coefficient of variation in our laboratory [25,30,35]. Post hoc analyses of significant main and interaction effects were conducted where appropriate using the Bonferroni adjustment to determine which conditions were significantly different. The assumption of sphericity was confirmed using Mauchly's test. Greenhouse-Geisser epsilon corrections were used when the sphericity assumption was violated. Partial eta squared (η²p) was used to report effect size, with 0.01 considered small, 0.06 medium, and 0.14 large. All data are reported as Mean ± SD.
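As an illustration of how one of these models can be specified outside SPSS, the sketch below sets up the 3 (condition) × 5 (time) repeated-measures ANOVA using statsmodels' AnovaRM on a synthetic long-format table; the column names and random values are assumptions for illustration, not the study data.

```python
# Sketch of the 3 (condition) x 5 (time) repeated-measures ANOVA for
# capillary R-betaHB described above, on synthetic balanced data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

conditions = ["KS1", "KS2", "PLA"]
times = ["base", "30min", "60min", "post0", "post15"]

rows = [
    {"subject": s, "condition": c, "time": t, "r_bhb": rng.uniform(0.1, 0.9)}
    for s in range(1, 14)  # 13 participants, as in the study
    for c in conditions
    for t in times
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="r_bhb", subject="subject",
              within=["condition", "time"]).fit()
print(res)  # F and p for condition, time, and condition x time
```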
Blood βHB, glucose, and lactate
Blood R-βHB showed a significant time × treatment interaction (p < 0.001). Follow-up tests indicated that blood R-βHB for the KS2 condition was significantly higher 30 min post supplementation compared to KS1 and PLA (p = 0.031 and < 0.001, respectively). Furthermore, blood R-βHB was significantly higher (p < 0.001) 30 min after beverage ingestion in KS1 compared to PLA. Additionally, blood R-βHB 60 min post supplementation was higher for KS1 and KS2 compared to PLA (all p's < 0.001). Finally, blood R-βHB for the KS2 condition was higher both immediately post exercise (+ 0 min) (p = 0.002 and < 0.001, respectively) and post cognitive test (+ 15 min) (p = 0.002 and < 0.001) compared to KS1 and PLA. In addition, blood R-βHB for the KS1 condition was higher both immediately post exercise (+ 0 min) (p < 0.001) and post cognitive test (+ 15 min) (p < 0.001) compared to PLA (Additional file 1: Table S3; Fig. 2a).
Blood glucose in the KS1 and KS2 treatments increased significantly from baseline to 30 min post drink (p = 0.006 and 0.009, respectively), decreased significantly from 30 to 60 min post drink only in the KS2 treatment (p = 0.049), increased significantly from 60 min to immediately post exercise (+ 0 min) in all three treatments (p = 0.013, 0.017, and 0.038 for KS1, KS2, and PLA, respectively), and decreased significantly from immediately post exercise (+ 0 min) to post cognitive test (+ 15 min) in all three treatments (p = 0.013, 0.040, and 0.011 for KS1, KS2, and PLA, respectively; Fig. 2b; Additional file 1: Table S3). However, there was no significant effect of treatment (p = 0.830) or time × treatment interaction (p = 0.355).
Blood lactate increased significantly from baseline to immediately post exercise in all conditions (+ 0 min; p = 0.001; Fig. 2c; Additional file 1: Table S3). There was a significant time × treatment interaction (p = 0.020). Follow-up tests indicated that blood lactate immediately post exercise (+ 0 min) was significantly higher in KS1 (p = 0.024), and showed a nonsignificant trend toward higher values in KS2 (p = 0.056), compared to PLA (Fig. 2c).
Physiological, perception, and physical performance response
No trial order effect was observed for 5-km TT performance between visits 3, 4, and 5 (p = 0.342). The ketone supplement had no effect on 5-km run TT performance (KS1, 1289.0 ± 104.9 s; KS2, 1307.3 ± 98.8 s; PLA, 1291.1 ± 77.1 s; p = 0.386; η²p = 0.076; Fig. 3; Table 2). Overall, the KS1 and KS2 conditions resulted in nonsignificantly ~ 2 s (0.16%) faster and ~ 16 s (1.2%) slower TT performance compared to PLA, respectively. However, compared to PLA, six subjects ran faster and seven slower in KS1, while three subjects ran faster and eight slower in KS2, with differences greater than the SWC. There was no difference in VO2, VCO2, RER, VE, RR, estimated carbohydrate oxidation rate, estimated fat oxidation rate, heart rate, affect, RPE-C, RPE-L, or RPE-O between KS1, KS2, and PLA during the 5-km TT (Table 2). Running speeds for each 500-m split during the 5-km TT did not differ between trials but did increase progressively throughout the TT (main effect of time, p < 0.001).
Cognitive performance
Average response accuracy and reaction time for congruent and incongruent Stroop trials at each testing time for the KS1, KS2, and PLA conditions are presented in Fig. 4 and Additional file 1: Table S4. No significant main effects of treatment or treatment × time interaction were found for reaction time or response accuracy for congruent trials (p > 0.05; Additional file 1: Table S4). No significant main effects of treatment or treatment × time interaction were found for response time for the incongruent trial (p = 0.266 and 0.077, respectively). However, a significant time × treatment interaction was found for average response accuracy in the Stroop incongruent test (p = 0.043). Post hoc analyses revealed that the KS1 and PLA conditions showed significantly faster reaction times (p = 0.002 and 0.010, respectively) from pre-TT (30 min) to post-TT (+ 5 min), while the KS2 treatment showed a nonsignificant improvement in reaction time (p = 0.582). Additionally, response time accuracy was significantly faster in KS2 pre-TT (30 min) compared to KS1 (p = 0.018), with a nonsignificant trend toward faster responses in KS2 compared to PLA pre-TT (p = 0.051). A significant main effect of time for response accuracy and reaction time during congruent and incongruent trials was observed (p = 0.001), with improved executive function from pre-test (30 min) to post-test (+ 5 min) (p = 0.001) in all conditions. Performance time (reaction time and response accuracy) on the Switching Task decreased significantly from pre-TT (30 min) to post-TT (+ 5 min; p = 0.039 and 0.026, respectively), but there was no significant effect of treatment (p = 0.120 and 0.116, respectively) or treatment × time interaction (p = 0.154 and 0.254, respectively; Additional file 1: Table S4).
Discussion
A consistent finding across all exogenous ketone bodies is their ability to induce rapid and widespread changes in metabolism, including consistent changes in circulating concentrations of ketones, glucose, lactate, amino acids, and/or free fatty acids. Unsurprisingly, all analyses to date have demonstrated that exogenous ketones significantly elevate circulating blood ketones, although to differing extents depending on the exogenous agent [5,8,13-23]. Synthetic 1,3-BD and/or esters raise R-βHB to a greater extent than ketone salts [3,6]. However, few studies have looked at the dosing effect of exogenous ketones [6,8]. Here we demonstrate that two doses of βHB + MCT result in greater peak R-βHB concentrations (60 min post ingestion: KS2, 0.73 mM; KS1, 0.60 mM) and a significant impact on the duration of R-βHB elevation, illustrating a clear dosing effect on circulating ketone pharmacokinetics with βHB + MCT administration. Importantly, many exogenous ketone formulations come as racemic mixtures (50% R-βHB: 50% S-βHB) due to the lower production cost of utilizing racemic precursors [1]. However, all commercial ketone meters and wet-lab assays detect only R-βHB, and most do not test for acetoacetate, limiting detection of the total ketone load in our and other ketone evaluations. Nonetheless, R- and S-βHB have differential metabolic consequences, with the latter being less readily oxidizable [6,36], suggesting that R-βHB determinations are likely more relevant in assessing metabolic substrate-induced energetic shifts on performance [37]. Ketone elevations are often accompanied by significant or directional reductions in basal blood glucose or post-exercise glucose elevations, with rare exceptions [22]. While we did not observe a significant administration effect on blood glucose in the present analyses, we did see a non-significant blunting of post-exercise glucose elevations with KS2 (a 31.1% rise, compared to a 45% rise in KS1 and a 40.5% rise in PLA), suggesting a potential dosing effect of βHB + MCT on blood glucose. However, more inconsistent is the ability of exogenous ketones to blunt exercise lactate production. Ketone salts and/or βHB + MCT have not shown an ability to attenuate exercise lactate production, consistent with the present findings [13-16,25]. However, attenuated lactate production is more consistently observed with ester administration [8,18,20], although it does not always reach significance [21,22]. We found that neither administration nor dose of βHB + MCT influenced heart rate, RR, RER, VO2, VCO2, VE, or estimated metabolic oxidation rates. HR has been elevated with ketone salt administration [15,16] and reduced with ester administration [18,20], but the vast majority of findings are consistent with ours across exogenous ketone supplements [19,21-23,25]. While most exogenous ketone evaluations have not observed changes in RER, in agreement with our observations, some analyses have observed differences [8,13,14]. These differences in RER have been extrapolated into shifts in fat and carbohydrate oxidation [8,13], as were calculated in the present analyses. However, limiting all of these oxidation calculations is the observation that the RERs of βHB and AcAc are 0.89 and 1.00, respectively, directly confounding calculations of glucose and fat oxidation [38], but potentially explaining the shifts in RER observed in some evaluations [8,13,14].
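For reference, the kind of indirect-calorimetry substrate-oxidation estimate at issue here can be sketched with Frayn's stoichiometric equations (one common implementation of the calculation the text describes, ignoring protein oxidation); the example VO2/VCO2 values are hypothetical.

```python
# Sketch of Frayn's (1983) substrate-oxidation equations, which assume only
# glucose and fat are oxidized. As discussed above, circulating betaHB/AcAc
# (RER ~0.89 and 1.00) violates this assumption and biases the estimates.

def substrate_oxidation(vo2_l_min: float, vco2_l_min: float):
    """Return (carbohydrate, fat) oxidation rates in g/min from VO2/VCO2 in L/min."""
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min
    fat = 1.67 * (vo2_l_min - vco2_l_min)
    return cho, fat

# Example: VO2 = 3.0 L/min, VCO2 = 2.7 L/min (RER = 0.90)
cho, fat = substrate_oxidation(3.0, 2.7)
print(f"CHO: {cho:.2f} g/min, fat: {fat:.2f} g/min")  # ~2.66 and ~0.50 g/min
```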
Attempts to control for exogenous ketone administration in oxidation calculations should be approached with caution, as they are often difficult at present and can lead to over-estimations of calculated fat oxidation [8]. Exogenous ketone bodies have been explored in various exercise performance trials, including MCTs [5], ketone salts [13][14][15][16], esters [8,[17][18][19][20][21], and 1,3-BD [22,23]. Across seventeen separate MCT exercise performance analyses, only two and five analyses demonstrated ergogenic and ergolytic effects, respectively, the latter attributed primarily to gastrointestinal distress [5]. Four analyses (3/4 on performance) have been conducted with ketone salts, with either neutral [14] or negative [13] impacts on endurance performance and a neutral impact on sprint performance [16]. The lack of efficacy across ketone salt trials has been largely attributed to an inability to reach the sufficient circulating R-βHB concentrations hypothesized to be important for facilitating energetic needs and metabolic shifts [7,39]. Eleven separate analyses have been conducted on R/S 1,3-butanediol (1,3-BD; [22,23]), beta-hydroxybutyrate R-1,3-butanediol (monoester; [8,[17][18][19]21]), and/or R/S 1,3-butanediol acetoacetate (diester; [20]), with three [8,17,21], six [18,19,[21][22][23]], and two [20,21] showing positive, neutral, and negative impacts on exercise performance, respectively. The positive findings for 1,3-BD and the monoester were attributed to sufficiently elevated R-βHB (≥ 2 mM), reduced lactate production, and attenuated glycogen utilization [8,17,21]. The neutral and/or negative effects may be explained in part by gastrointestinal symptoms [18,19,21,37], while other hypotheses include insufficient circulating R-βHB [19][20][21][22][23], the choice of exercise, administration without carbohydrates [23], and/or the placebo comparison. Our group is the only one to have conducted analyses of βHB + MCT, which demonstrated a non-significant impact on run performance [25].
Here we repeated those findings, while also demonstrating that dosing did not affect run performance, potentially explained by an insufficient ability to reach the hypothesized ergogenic substrate threshold [39], as gastrointestinal symptoms were not indicated in this study cohort (unreported). However, consistent with prior findings [19,25], we also found divergent effects of ketone administration on run performance across subjects that were greater than the SWD, suggesting responders and non-responders. Ketone-induced perceptual and cognitive outcomes have become a growing area of interest due to ketones' multifaceted impact on the brain. We report that neither βHB + MCT administration nor dose impacted perceptual outcomes, in line with independent observations with other ketogenic agents [14, 15, 18-20, 22, 23, 25] but discrepant from others [16,21]. Only two separate analyses have evaluated cognitive performance, showing a positive impact with the monoester [18] and a neutral impact with ketone salts [13]. We demonstrate higher response accuracy in KS2 prior to exercise. Interestingly, post-exercise, all groups had cognitive function equivalent to the pre-exercise KS2 group, suggesting a dosing response on basal cognitive function equivalent to the exercise-induced effect in the PLA group. However, limiting our interpretation of these findings is the lack of pre-supplement cognitive function evaluations across groups.

Table 2 Metabolic, respiratory, heart rate, perceptual, and physical performance responses. Gas exchange, respiratory rate (RR), calculated oxidation rates, heart rate (HR), affect, rate of perceived exertion (RPE), and time to completion data during the 5-km time trial (5-km TT; n = 13). Values are mean ± SD. KS1, one dose beta-hydroxybutyrate salt and medium-chain triglycerides; KS2, double dose beta-hydroxybutyrate salt and medium-chain triglycerides; PLA, flavor-matched control; VO2, oxygen consumption; VCO2, carbon dioxide production; HR, heart rate; RER, respiratory exchange ratio; VE, ventilation; RR, respiratory rate; RPE, rate of perceived exertion; RPE-O, rate of perceived exertion for overall body; RPE-C, rate of perceived exertion for chest; RPE-L, rate of perceived exertion for legs. Italicized values were not corrected for ketone oxidation (Frayn et al. 1980).
Conclusion
The novel βHB + MCT ketone formulation demonstrated a dosing effect on blood R-βHB and cognitive function, an administration response on blood lactate, and no administration or dosing impact across physiologic, perceptual, and physical performance parameters. However, clear responders and non-responders were found. To our knowledge, this is the first analysis comparing the multicomponent dose-response of βHB + MCT across blood metabolites, gas exchange, respiratory rate, heart rate, perception, and physical and cognitive performance parameters. Further analyses should consider similar dose-response effects across other exogenous ketone administrations, as well as the divergent factors influencing responders versus non-responders, to help understand how to optimally administer these agents across health, disease, and performance applications.
|
2020-07-16T09:03:29.227Z
|
2020-07-13T00:00:00.000
|
{
"year": 2020,
"sha1": "3cbe2824df80046823e0606dbe0670623f0de3d6",
"oa_license": "CCBY",
"oa_url": "https://nutritionandmetabolism.biomedcentral.com/track/pdf/10.1186/s12986-020-00497-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfd22caa75102a438b20ad7866ee420ba3d4e794",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259080214
|
pes2o/s2orc
|
v3-fos-license
|
Precipitation variability using GPCC data and its relationship with atmospheric teleconnections in Northeast Brazil
The present study investigates the influence of different atmospheric teleconnections on the annual precipitation variability in Northeast Brazil (NEB) based on the annual precipitation data from the Global Precipitation Climatology Center (GPCC) from 1901 to 2013. The objective of this study is to analyze the influence of different atmospheric teleconnections on the total annual precipitation of NEB for the 1901–2013 period, considering the physical characteristics of four subregions, i.e., Mid-north, Backwoods, Agreste, and Forest zone. To analyze the influence of different atmospheric teleconnections, GPCC data were used, and the behavior of the teleconnections was assessed using Pearson correlation coefficient, Rainfall Anomaly Index (RAI), and cross-wavelet analysis. The Pearson correlation was used to analyze the influence on the annual precipitation for the studied region. RAI was used to calculate the frequency of atmospheric patterns and drought episodes. The cross-wavelet analysis was applied to identify similarity signals between precipitation series and atmospheric teleconnections. The results of the Pearson correlation assessed according to Student's t test and cross-wavelet analysis showed that the Atlantic Multidecadal Oscillation (AMO) exerts a more significant influence on the Backwoods region at an interannual scale. In contrast, the Pacific Decadal Oscillation (PDO) exerts greater control over the modulation of the climatic patterns in NEB. The results of the study are insightful and reveal the differential impacts of teleconnections such as the AMO, PDO, MEI, and NAO on precipitation in the four sub-regions of NEB. The Atlantic circulation patterns strongly influence the interannual and interdecadal precipitation in the Agreste, Backwoods, and Mid-north regions, possibly associated with the Intertropical Convergence Zone (ITCZ) position. Finally, this study contributes to understanding internal climatic variability in NEB and planning of water resources and agricultural activities in such a region.
Introduction
Atmospheric teleconnections serve as a bridge between different oceanic regions, facilitating the transfer of energy and influencing global climate dynamics (Zhou et al. 2022). They help restore imbalances in the climate system's energy budget, which result from the meridional distribution of solar insolation and sea surface temperature (SST) anomalies associated with the El Niño-Southern Oscillation (ENSO) and internal climate variability (Stan et al. 2017). These phenomena have motivated numerous observational studies, either on a global scale or focusing on specific regions, with some research demonstrating that teleconnections involve both oceanic and atmospheric fields (e.g., Grimm and Saboia 2015; Ndehedehe et al. 2018; Lim et al. 2018; Park and Li 2018).
A variety of studies have investigated the atmospheric patterns generated by SST variability in the Atlantic and Pacific Oceans, highlighting their important role in global climate dynamics on interannual and multidecadal temporal scales (Chiang and Friedman 2012; Lau 2015; Cabré et al. 2017). These patterns can evolve and be modulated by the interaction of different teleconnection processes on various time scales. The Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) are the primary dominant modes on decadal and multidecadal scales, while the North Atlantic Oscillation (NAO) and Multivariate ENSO Index (MEI) contribute to regional internal climatic variability on an interannual scale (Vining et al. 2022). The modulation of the PDO and the NAO involves the interaction between tropical SSTs, which helps to better understand the interannual impacts of ENSO warm and cold events in relation to the PDO and AMO phases (Han et al. 2022).
Spatiotemporal variations of extreme precipitation regimes are driven in part by climate change. Hence, understanding their characteristic scales in space and time is crucial for allocating and managing local water resources (Chang et al. 2018). Furthermore, understanding the role of climate variability in precipitation modulation is important for seasonal predictability and a better understanding of global climate fluctuations, resulting in an improved explanation of rainfall variability (Jemai et al. 2017). Climate variability and large-scale climate teleconnections strongly impact the regional climate and hydrological variability in many parts of the world (Xiao et al. 2016; Huo et al. 2016). Therefore, finding the association between meteorological elements and the oscillatory pattern of climatic teleconnections can be very helpful in improving the accuracy of hydro-meteorological predictions, in the prediction of extreme weather events such as droughts or floods, and in the management of water resources (Araghi et al. 2016; Nascimento et al. 2022).
Precipitation in Northeast Brazil (NEB) presents high spatiotemporal variability and irregular rainfall (Brito et al. 2021; Silva et al. 2022). Overall, the irregular rainfall in NEB can have significant social, economic, and environmental impacts, and it is important to develop strategies to mitigate its effects and adapt to the changing climate. The main problems caused by irregular rainfall are drought, flooding, soil erosion, reduced availability of water, impacts on biodiversity, and health impacts. Regarding the relationship between rainfall and atmospheric teleconnections, according to Brahmananda Rao and de Brito (1985), during winter, the circulation characteristics over the North Atlantic Ocean seem to be related to the circulation characteristics of other regions in the Northern Hemisphere. This suggests that NEB rainfall may have interesting teleconnections with the circulation characteristics of other regions in the Northern Hemisphere.
Therefore, it is crucial to investigate the influence of atmospheric teleconnections on the precipitation pattern that may lead to the intensification of internal climatic variability or climatic changes (Silva et al. 2020). The population in such a region is vulnerable to the impacts of climate change primarily because of the socioeconomic and political context in which they live. According to Delazeri et al. (2022), climate change can exacerbate access to basic needs, making it harder for vulnerable populations to obtain these necessities. Climate change-induced events such as floods, droughts, and hurricanes can displace communities, forcing them to migrate to other regions. These migrations can lead to increased economic and social vulnerabilities, especially for those who are compelled to leave their homes without sufficient resources, resulting in instability and poverty (Da Silva et al. 2022). Moreover, the population and production in the region rely on rainfall events to sustain agricultural production and daily water consumption, as rainwater is captured in cisterns for storage (Dantas et al. 2020).
Mainly, this is because NEB is an agriculturally based developing region facing the challenge of feeding rapidly growing populations in the coming decades (Thornton et al. 2014). The performance of different meteorological systems and the deficiency of public policies in managing water resources or issuing severe weather warnings favor the occurrence of economic losses and the loss of human lives in NEB (Silva et al. 2017). Understanding rainfall variability is essential for dealing with the scarcity of water resources due to increasing water demand, population growth, and economic development (Surendran et al. 2019).
As a major aspect of the water circulation system, the spatiotemporal distribution of regional precipitation is inevitably affected by climatic factors, which may induce a series of hydrologic disasters, including drought and flood (Verdon et al. 2004; Dufek and Ambrizzi 2008; Kundzewicz et al. 2010; Leng et al. 2015), and extreme precipitation events that are among the most disruptive of atmospheric phenomena (Zin et al. 2010). To analyze these events, well-distributed and consistent data with the ability to capture the rainfall regime in the planet's most remote and poorly instrumented regions are needed. Nowadays, the Global Precipitation Climatology Centre (GPCC) (Schamm et al. 2014) provides gridded gauge-analysis products derived from quality-controlled station data, which are adequate for analysis of the effect of atmospheric teleconnections on rainfall variability. Several studies have used the GPCC time series to investigate the evolution of atmospheric teleconnections in precipitation (e.g., Molavi-Arabshahi et al. 2015; Okumura et al. 2017; Wang et al. 2018), and the results are satisfactory.
It is important to note that many studies have used advanced statistical analyses, such as wavelets (Santos et al. 2019, 2023; Alizamir et al. 2023), to detect statistically significant interannual and interdecadal oscillations in the patterns of precipitation variability and atmospheric teleconnections (e.g., Chang et al. 2018; Santos et al. 2013, 2018; Su et al. 2017; Jemai et al. 2017), and this tool has been shown to be very useful for this purpose (Zhao et al. 2022; Fan et al. 2022).
Brahmananda Rao and de Brito (1985) analyzed teleconnections between rainfall over NEB and the winter circulation of the Northern Hemisphere. Teleconnections are noted to be stronger during the winter season, and interannual variations of rainfall over NEB are associated with variations in the Northern Hemisphere winter circulation. Recently, Costa et al. (2018) showed that global climate oscillations have a non-stationary relationship with the NEB rainy season, impacting the hydrology of the study area at different time scales. In this sense, given the spatiotemporal variability of precipitation over NEB, the objective of this study is to evaluate the influence of different atmospheric teleconnections on the total annual precipitation of NEB for the 1901-2013 period, considering the physical characteristics of four subregions, i.e., Mid-north, Backwoods, Agreste, and Forest zone.
Study area
The study region, located between the parallels 01°02′30″N and 18°20′07″S and between the meridians 34°47′30″W and 48°45′24″W (Fig. 1), encompasses the states of Alagoas, Bahia, Ceará, Maranhão, Paraíba, Piauí, Pernambuco, Rio Grande do Norte, and Sergipe. These states account for a total area of 1,561,177 km², representing 18.26% of Brazil's total area, with a population density of 30.54 inhab./km². The study area's delimitation concentrates around 89.5% of Brazil's semiarid region, covering most of the states in the region except for Maranhão (IBGE 2010). This study analyzed precipitation variability using GPCC data and its relationship with atmospheric teleconnections in NEB, employing the division proposed by Andrade (1980), which divided NEB into four physiographic mesoregions: Mid-north (Meio-Norte), Forest zone (Zona da Mata), Agreste, and Backwoods (Sertão). The criteria used to define the four sub-regions in NEB are climate, vegetation, presence of mountains, and geographical location. These factors help us understand the specific characteristics of each sub-region, such as their unique strengths and challenges. By considering these criteria, policymakers and regional planners can develop targeted strategies to promote sustainable development and improve the overall well-being of the region's inhabitants (Moreira et al. 2007). Thus, each subdivision analyzed has distinct physical characteristics and precipitation regimes.
The Forest zone is a coastal region with a humid tropical climate, featuring well-distributed rainfall throughout the year and coastal mountains and plateaus. The predominant vegetation is the Atlantic Forest, with extensive production of sugarcane. Located in the east, the Forest zone has the highest precipitation values in NEB (Brasil Neto et al. 2020). The Agreste, a transition zone between the Forest zone and the Backwoods, features mountains and plateaus and a mild climate, and its predominant vegetation is a transition between the Atlantic Forest and the Caatinga.
The Backwoods is a region with mountains and plateaus and a hot, dry climate, with irregular rainfall concentrated in a few months of the year, typical of the semiarid climate. This subregion presents irregular and scarce rainfall, as well as periods of drought, and its typical vegetation is the Caatinga and Savannah (Silva et al. 2018). The Mid-north is the northernmost region, comprising Maranhão and Piauí. It has a hot and humid climate, with the presence of Cerrado and Caatinga. In the Mid-north, precipitation varies from 1500 to 2000 mm/year.
The predominant climates in NEB are the humid equatorial climate, which covers a small part of the state of Maranhão and the border of the state of Pará (Silva Junior et al. 2020); the humid coastal climate, which covers the coast from the state of Bahia to Rio Grande do Norte; the tropical climate, which predominates in the states of Bahia, Ceará, Maranhão, and Piauí; and the semiarid tropical climate, which occurs in most of the NEB states, except for Maranhão (Safanelli et al. 2023).
NEB is also characterized by irregular rains and prolonged drought occurrences that affect the main economic activities in the agricultural and livestock sectors (Brasil Neto et al. 2022). The region's geographical position, relief, and pressure systems are among the main climatic factors determining the distribution of climatic elements in NEB and their seasonal variation (Kayano and Andreoli 2009). Among the large-scale phenomena that act over NEB, the Intertropical Convergence Zone and the El Niño-Southern Oscillation are the most important (Hastenrath and Lamb 1977; Chiang et al. 2002; Giannini et al. 2004).
Dataset used
The data series used in this research is based on monthly precipitation data from the GPCC for the 1901-2013 period (available data), with 0.5° × 0.5°, 1.0° × 1.0°, and 2.5° × 2.5° spatial resolutions (Schneider et al. 2016). In this study, we used the GPCC 7.0 product with a 0.5° × 0.5° spatial resolution. GPCC provides global monthly rainfall estimates based solely on ground observations from around 75,100 stations worldwide that feature at least 10 years of records (Basheer and Elagib 2016). These data were used to estimate the annual total precipitation on wet days (PRCPTOT).
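As an illustration of how such a gridded product can be reduced to the annual totals analyzed here, the sketch below computes PRCPTOT over an NEB bounding box with xarray. The file name, the variable name "precip", and the latitude ordering are assumptions that depend on the specific GPCC download, not details given in the paper.

```python
"""Sketch: annual precipitation totals (PRCPTOT) from a GPCC monthly grid.

Assumes a local NetCDF copy of the GPCC Full Data product at 0.5 degree
resolution with a monthly variable named "precip" (file name and variable
name are hypothetical -- check your download). Requires xarray.
"""
import xarray as xr

ds = xr.open_dataset("gpcc_v7_monthly_0.5deg.nc")   # hypothetical file name
# NEB bounding box; flip the slice bounds if latitude is stored ascending.
neb = ds["precip"].sel(lat=slice(1.5, -18.5), lon=slice(-48.8, -34.7))
# Sum monthly totals (mm/month) to annual totals per grid cell, 1901-2013.
prcptot = neb.sel(time=slice("1901", "2013")).resample(time="1YS").sum("time")
# Regional mean series, e.g., for correlation with teleconnection indices.
neb_series = prcptot.mean(dim=("lat", "lon"))
print(neb_series)
```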
Pearson correlation coefficient
The Pearson method was applied to analyze the behavior of the teleconnections and their influence on the annual precipitation in the studied region. The population correlation coefficient (parameter) ρ and its sample estimate are closely related to the bivariate normal distribution, whose probability density function is given by

$$f(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho (x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2} \right] \right\}, \tag{1}$$

where

$$\rho = \frac{\mathrm{COV}(X, Y)}{\sigma_X \sigma_Y}$$

is the population parameter, in which COV(X, Y) is the covariance between X and Y, σ_X is the standard deviation of X, and σ_Y is the standard deviation of Y. The maximum likelihood estimator is given by the expression

$$r = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (X_i - \bar{X})^2 \sum_{i=1}^{n} (Y_i - \bar{Y})^2}}, \tag{2}$$

where n is the number of observations in the sample, X̄ is the arithmetic mean of X, and Ȳ is the arithmetic mean of Y. The correlation coefficient ρ can also be interpreted in terms of ρ², which is known as the coefficient of determination. When multiplied by 100, ρ² yields the percentage of the variation in the dependent variable Y that can be explained by the variation in the independent variable X. In other words, it quantifies the proportion of the variance in Y that is predictable from X, that is, how much variation is shared by both variables. The coefficient of determination is the ratio of the variance explained by the linear regression model (Y = α + βX, where α and β are constants) to the total variance in Y.
The significance of the estimated correlation coefficient is verified through hypothesis testing. The statistic to test the hypothesis H₀: ρ = 0 against H₁: ρ ≠ 0 follows a t distribution with n − 2 degrees of freedom, that is,

$$t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}}. \tag{3}$$

In this work, the correlation analysis between the precipitation series of the 113 GPCC grid cells and the four distinct teleconnections was performed, and the values were spatialized over the entire NEB. In addition, different significance levels for the obtained correlations were tested, and results significant at the 0.01 level were highlighted.
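A minimal numerical sketch of the procedure described above, computing the sample estimate r, the coefficient of determination r², and the t statistic with n − 2 degrees of freedom, is given below with placeholder data rather than the GPCC/teleconnection series.

```python
"""Sketch: Pearson correlation with a two-sided t test, per Eqs. (2)-(3).

Mirrors t = r * sqrt(n - 2) / sqrt(1 - r**2) with n - 2 degrees of freedom.
Inputs are placeholder arrays, not GPCC or teleconnection data.
"""
import numpy as np
from scipy import stats

def pearson_with_t(x: np.ndarray, y: np.ndarray):
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]                  # sample correlation, Eq. (2)
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)   # test of H0: rho = 0, Eq. (3)
    p = 2 * stats.t.sf(abs(t), df=n - 2)         # two-sided p-value
    return r, r**2, t, p

rng = np.random.default_rng(0)
x = rng.normal(size=113)                         # e.g., 113 annual values
y = 0.5 * x + rng.normal(size=113)
r, r2, t, p = pearson_with_t(x, y)
print(f"r={r:.3f}, R^2={r2:.3f}, t={t:.2f}, p={p:.4f}")
```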
Rainfall Anomaly Index (RAI)
In this work, the RAI was used to analyze precipitation on monthly, seasonal, and annual scales and to address droughts that affect agriculture, water resources, and other sectors (Kraus 1977). This index considers the classification of precipitation values to calculate positive and negative precipitation anomalies and was chosen because of its flexibility in precipitation analysis. The index is defined by

$$\mathrm{RAI} = \begin{cases} 3\,\dfrac{N - \bar{N}}{\bar{M} - \bar{N}}, & N \geq \bar{N} \\[6pt] -3\,\dfrac{N - \bar{N}}{\bar{X} - \bar{N}}, & N < \bar{N}, \end{cases} \tag{4}$$

where N is the annual precipitation for a given year, N̄ is the average annual precipitation of the historical series, M̄ is the average of the ten highest annual precipitation totals in the historical series, and X̄ is the average of the ten lowest annual totals. In this work, RAI series were calculated based on the average rainfall of the entire NEB and of the four mesoregions. In addition, to categorize the RAI values into different severity classes, the rainfall classification defined by Van-Rooy (1965) was used, as shown in Table 2.
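The piecewise definition in Eq. (4) translates directly into code. The sketch below is a minimal implementation for an annual series; the variable names are ours, and it assumes the record contains at least ten years.

```python
"""Sketch of the Rainfall Anomaly Index (van Rooy 1965), per Eq. (4).

rain is an array of annual totals; M-bar and X-bar are the means of the
ten highest and ten lowest annual totals in the record. Placeholder only.
"""
import numpy as np

def rai(rain: np.ndarray) -> np.ndarray:
    mean = rain.mean()                    # N-bar, series mean
    m_bar = np.sort(rain)[-10:].mean()    # mean of the ten wettest years
    x_bar = np.sort(rain)[:10].mean()     # mean of the ten driest years
    return np.where(
        rain >= mean,
        3.0 * (rain - mean) / (m_bar - mean),    # positive anomalies
        -3.0 * (rain - mean) / (x_bar - mean),   # negative anomalies
    )
```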
Cross-wavelet analysis
In this study, we employed cross-wavelet analysis to examine the coherence between the selected teleconnection indices and precipitation, as represented by the RAI, across different timescales. The combination of cross-wavelet analysis and the RAI allowed us to identify the periods during which the teleconnections and precipitation were significantly related, as well as the timescales of their interactions. By analyzing these relationships, we were able to better understand the influence of atmospheric teleconnections on precipitation variability in NEB and provide a more comprehensive assessment of the underlying climatic processes. Cross-wavelet analysis shows the covariance of energy between two time series and reveals information about the relationship between their phases. As in Fourier transform analysis, the wavelet power spectrum can be extended to analyze two time series, X_n and Y_n (Grinsted et al. 2004). Considering the continuous form, it is possible to define the cross-wavelet transform of these two series as $W^{XY} = W^X W^{Y*}$, where the asterisk denotes the complex conjugate; furthermore, we define the cross-wavelet power spectrum as $|W^{XY}|$. The theoretical distribution of the cross-wavelet power of two time series with background spectra $P_k^X$ and $P_k^Y$ is defined in Torrence and Compo (1998) as

$$D\!\left(\frac{\left|W_n^X(s)\,W_n^{Y*}(s)\right|}{\sigma_X \sigma_Y} < p\right) = \frac{Z_\nu(p)}{\nu}\sqrt{P_k^X P_k^Y}. \tag{5}$$

Equation (5) gives the theoretical distribution of the cross-wavelet transform power spectrum of two time series (Torrence and Compo 1998). Thus, Z_ν(p) is the confidence level associated with probability p for a probability density function defined by the square root of the product of two χ² (chi-square) distributions. In this paper, cross-wavelet analyses were performed considering the RAI series of each of the four regions with the series of the four teleconnections. In this sense, 16 coherence analyses were developed between these signals, highlighting the robustness of the work developed.
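For illustration, the sketch below computes a cross-wavelet power spectrum |W^XY| and the relative phase for two synthetic standardized series, using SciPy's complex Morlet transform as a stand-in for the Torrence and Compo (1998) implementation. The significance test of Eq. (5) is omitted; dedicated packages (e.g., pycwt or the Grinsted et al. toolbox) implement the full machinery.

```python
"""Sketch: cross-wavelet power |W^XY| = |W^X W^Y*| of two standardized series.

Placeholder data only. Note: scipy.signal.cwt is deprecated in recent SciPy
releases; pycwt or an FFT-based CWT are alternatives for production use.
"""
import numpy as np
from scipy import signal

def cross_wavelet_power(x, y, widths):
    x = (x - x.mean()) / x.std()                  # standardize both series
    y = (y - y.mean()) / y.std()
    wx = signal.cwt(x, signal.morlet2, widths, dtype=np.complex128)
    wy = signal.cwt(y, signal.morlet2, widths, dtype=np.complex128)
    wxy = wx * np.conj(wy)                        # cross-wavelet transform
    return np.abs(wxy), np.angle(wxy)             # power and relative phase

t = np.arange(113)                                # e.g., 113 annual steps
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * t / 30) + rng.normal(scale=0.5, size=t.size)
y = np.sin(2 * np.pi * t / 30 + 0.5) + rng.normal(scale=0.5, size=t.size)
power, phase = cross_wavelet_power(x, y, widths=np.arange(2, 40))
print(power.shape)                                # (scales, time)
```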
Correlation between the PRCPTOT and atmospheric teleconnections
Figure 3a, b show the correlation between PRCPTOT and the AMO and PDO, respectively, while Fig. 3c shows the time series of the normalized PDO + AMO from 1901 to 2013. The relationship between PRCPTOT and the AMO for the 1901 to 2013 period shows significant negative correlation coefficients in the northern, central, and southern parts (Fig. 3a). The multidecadal variability exerts a stronger influence on precipitation in the Backwoods than in the rest of the region. On the other hand, the PDO shows a significant negative correlation in the northern, eastern, and western sectors of NEB, indicating an inverse pattern of precipitation with the PDO (Fig. 3b). During the warm phase of the PDO, precipitation in NEB is below average, while the cold phase results in wetter conditions. The normalized PDO + AMO shows cooling between 1901-1924 and 1949-1989 and warming between 1926 and 1946 and from 1990 onwards (Fig. 3c). These results indicate that the PDO + AMO influences the climate in NEB. The warm PDO mode is associated with more frequent El Niños, and the warm AMO mode on an annual basis correlates with generalized warming. Thus, when the PDO and AMO are in their warm modes, one can expect more warmth, while when both are in their cold modes, one expects climate cooling over the region. The NAO has two pressure systems that affect the direction of the westerly winds, i.e., the low-pressure system located over Iceland and the high-pressure system over the Azores. During the 1901-2013 period, the NAO shows significant negative correlation coefficients in the central-western sector of NEB (Fig. 4a). The positive phase of the NAO corresponds to the intensification of high-altitude westerly winds, which arrive with greater speed at subpolar latitudes and guide storms across the Atlantic between Newfoundland and Northern Europe (Wallace and Gutzler 1981). The NAO was in a predominantly positive phase at the beginning of the twentieth century, while the negative phase was more pronounced between the 1940s and 1970s (Weisheimer et al. 2017). Hurrell (1995) states that the positive and negative phases of the NAO are strongly associated with the location and intensity of the jet stream and, as a consequence, with the trajectory of depressions in the North Atlantic. Figure 4b shows the variability of precipitation during the different phases of the NAO. It can be seen that the NAO and RAI show high variability and that the NAO remains in the same phase for some years. Moreover, during the positive phase of the NAO, the climatic conditions in NEB become more conducive to the occurrence of drought episodes (Fig. 4b).
Cross-wavelet analysis of RAI and atmospheric teleconnections
The following results show the periodicity of the RAI with AMO, PDO, NAO, and MEI through cross-wavelet analysis.
Figure 6 shows that the AMO and RAI present periodicity on the 30-year scale. This result is more noticeable in the Forest zone in the 1930-1980 period, in the Agreste in 1930-1990, and
Variability of the RAI in NEB mesoregions
Figure 7 shows the variability of the RAI in the four NEB mesoregions. In the Backwoods, the years 1903, 1908, 1932, 1990, 1993, and 2012 stand out as dry years. Conversely, 1960-1980 was a period of high rainfall in NEB and high river flows in the US, while Sahel rainfall and Atlantic hurricane formation were reduced (Knight et al. 2006). Knight et al. (2006) showed that the positive phase of the AMO is associated with a northward shift of the ITCZ over the Tropical Atlantic, together with a northward cross-equatorial wind anomaly. The study further points out that in the twentieth century, periods of high precipitation in NEB coincided with the negative phase of the AMO (1900-1920 and 1960-1980), while the positive phase coincided with below-average precipitation.
Correlation between the PRCPTOT index and atmospheric teleconnections
According to Andreoli and Kayano (2007), the precipitation anomalies in NEB can be attributed primarily to the action of the anomalous Walker circulation, adjusted through the rearrangement of convection in the eastern equatorial Pacific. During El Niño years, the displacement of the Walker circulation hinders the formation of clouds and reduces rainfall during the rainy season. During El Niño episodes, positive precipitation anomalies occur in typically arid and semiarid regions, such as the west coast of South America, the southern United States, and East Africa (Anyamba 2001; Holmgren et al. 2006).
Furthermore, the El Niño phenomenon, which is characterized by anomalous warming of the surface waters in the central and eastern equatorial Pacific, is a key factor influencing the occurrence of drought in the semiarid region of Brazil. These studies help to better understand the influence of El Niño on hydroclimatic variability in the region, particularly in the semiarid Backwoods and Agreste subregions, which are particularly vulnerable to the impacts of El Niño due to their already arid climate and limited water resources. The reduced rainfall during drought periods can lead to water scarcity, crop failure, and food insecurity, affecting both rural and urban populations. The results of this study corroborate those obtained by other researchers who have analyzed historical data on ocean temperatures and precipitation patterns (Hernández Ayala 2019; King et al. 2023). These studies have shown that El Niño events are associated with a decrease in rainfall in northeastern Brazil, particularly in its semiarid region. Overall, the El Niño phenomenon plays a significant role in shaping the hydroclimate of the semiarid region of Brazil, with important impacts on the livelihoods and well-being of local communities.
However, recent studies suggest the existence of several El Niño types. For example, the eastern Pacific coast experienced stronger impacts from the 1997-1998 ENSO than those expected from the 2015-2016 ENSO, despite similar tropical SST anomalies (Capotondi et al. 2015; Jacox et al. 2016; Tovar et al. 2018). The understanding of El Niño-Southern Oscillation (ENSO) teleconnections in the extratropics is based on the paradigms of tropical and subtropical responses to thermal forcing in the tropical Pacific (Gill 1980; Sardeshmukh and Hoskins 1988) and the propagation of the resulting Rossby waves into the extratropics (King et al. 2023). Understanding the spatiotemporal variability of annual, monthly, and seasonal rainfall is crucial for determining the risk of droughts, soil erosion, and floods and for developing soil conservation plans (Musabbir et al. 2023).
Cross-wavelet analysis of RAI and atmospheric teleconnections
The NAO and the RAI present interannual periodicity at different scales, with greater predominance in the Mid-north mesoregion. In the Forest zone, the MEI and the RAI present a periodicity of 4-6 years in 1980-1990, with the MEI and RAI in opposite phases. In the Agreste, a periodicity of 4 years is observed in three distinct periods: in 1915-1920, with the RAI responding at 1.5 years of the period; in 1930-1945, with the RAI lagged 45° behind the MEI, that is, responding at 6 months of the period; and in 1980-1990, with the MEI and RAI in opposite phases. In the Backwoods, the MEI and RAI show a periodicity of 4 years in 1915-1925 and 1980-1990, with the MEI and RAI in opposite phases. In the Mid-north, a periodicity of 4-6 years is observed in 1915-1925, 1930-1940, 1970-1975, and 1985-2005, and a periodicity of 15 years in 1975-1995, with the MEI and RAI in opposite phases.
Variability of the RAI in NEB mesoregions
In the Mid-north, the years 1915, 1919, 1932, 1951, and 1983 were extremely dry and coincided with the occurrence of El Niño episodes of moderate intensity (1915 and 1951) and strong intensity (1919 and 1983). The years 1917, 1924, 1974, and 1985 were considered extremely wet, with La Niña episodes of strong (1917), moderate (1924), and weak (1974) intensity. In the Agreste, the 1901-1905 period was considered extremely dry and coincided with strong (1902-1903) and moderate (1904-1905) El Niño intensity. The years 1924, 1964, 1985, and 2000 were considerably wet, with moderate La Niña events in 1924 and 2000.
The Backwoods and Mid-north are the regions most susceptible to climatic fluctuations and drought episodes. For the 1901-2013 period and according to the RAI, 47 drought episodes were identified in the Backwoods and 45 in the Mid-north. In the Agreste, 30 drought episodes were identified, and in the Forest zone, 36. According to Lee et al. (2023), the El Niño/Southern Oscillation (ENSO) is a combined phenomenon of fluctuating sea surface temperature and atmospheric circulation over the central and eastern Pacific Ocean. It has a critical influence on climate patterns all over the world. During El Niño events, monthly rainfall anomalies are below normal.
Conclusions
In this research, the influence of different atmospheric teleconnections on the total annual precipitation of NEB for the 1901-2013 period was analyzed, considering the physical characteristics of the Mid-north, Backwoods, Agreste, and Forest zone subregions. The results obtained showed that different teleconnection indices can be used to understand the influence of global-scale teleconnections on precipitation in NEB. In addition, the statistical techniques of cross-wavelet analysis and the RAI, coupled with oceanic and atmospheric patterns, indicated changes in the climatic indices of the Pacific and Atlantic Oceans, as well as climatic variability and the influence of external forcings in the evolution of NEB climate patterns.
The results show differences in the modulation of the atmospheric teleconnections in the sub-regions of NEB. The AMO modulates precipitation in the four subregions of NEB on an interannual scale, with greater influence in the Backwoods. Still, the PDO exerts greater control over the modulation of weather patterns in NEB. In the Agreste and Forest zone, the PDO impacts precipitation on interannual and decadal scales, and in the Mid-north and Backwoods, on an interannual scale in discontinuous periods. It is also pointed out that external and anthropogenic forcings affect local convection, interfering with the Atlantic and Pacific oceanic conditions.
The MEI shows greater variability in the Mid-north mesoregion, indicating the intensification of La Niña phenomena in the northwestern sector of NEB on an interannual scale and on different time scales. The NAO impacts the RAI on interannual and decadal scales in the Mid-north and on an interannual scale in the Backwoods, Agreste, and Forest zone.
Fig. 2 Variation of annual average precipitation in the mesoregions and NEB
Figure 2 shows the variation of annual average precipitation in the mesoregions and NEB from 1901 to 2013. The highest precipitation values are concentrated in the Mid-north and Forest zone mesoregions, respectively, with rainfall exceeding 2000 and 1400 mm/year. The Backwoods and Agreste mesoregions exhibit the lowest precipitation values, with rainfall ranging between 500 and 900 mm/year.
Fig. 4 a Spatial distribution of the correlation of PRCPTOT (mm) with the NAO, and b time series of the NAO and PRCPTOT, 1901 to 2013. The dotted areas correspond to correlations showing statistical significance at the 0.1 level.
Fig. 6 Cross-wavelet transform of the standardized AMO and RAI time series (upper panel) and cross-wavelet transform of the standardized MEI and RAI time series (lower panel). The relative phase relationship is shown as arrows (with in-phase pointing right and anti-phase pointing left).

Phase transitions from one mode to the other are abrupt and occur within a year or two (Intergovernmental Panel on Climate Change, Fourth Assessment Report, AR4), which makes these oscillations related to oceanic patterns or the thermohaline circulation (D'Aleo and Taylor 2007). Cool AMO phases occurred in the 1900-1920 and 1960-1980 periods, while a warm phase occurred in the 1930-1950 period. These periods coincide with examples of anomalous regional climate: for example, 1930-1950 showed decreased rainfall in NEB, reduced river flows in the United States, and enhanced Sahel rainfall and hurricane formation.
Fig. 7 RAI variability in the four NEB mesoregions
Table 1 Indices, abbreviations, and descriptions of the teleconnections analyzed

North Atlantic Oscillation (NAO): based on the surface pressure difference between the Subtropical (Azores) High and the Subpolar Low. The positive phase of the NAO reflects below-normal heights and pressure across the high latitudes of the North Atlantic and above-normal heights and pressure over the central North Atlantic, the eastern United States, and western Europe. The negative phase reflects an opposite pattern of height and pressure anomalies over these regions (NCDC 2019).

Multivariate ENSO Index (MEI): weighted anomaly average of six meteorological variables in the tropical Pacific: sea surface temperature, sea level pressure, surface air temperature, zonal and meridional components of the surface wind, and total cloudiness fraction of the sky (NCAR 2019).

Table 1 shows the indices of the teleconnections used in this study. More details about these indices are available at https://www.esrl.noaa.gov/psd/data/climateindices/list, from the National Centers for Environmental Prediction (NCEP).
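For reproducibility, a hedged sketch of loading one such index from a downloaded PSL-style text file is shown below. The file name and the year-plus-12-months layout are assumptions, since header lines, footers, and missing-value codes vary by index.

```python
"""Sketch: loading a NOAA/PSL-style monthly teleconnection index.

Assumes a whitespace-delimited file with one row per year (year + 12
monthly values), as commonly distributed on the PSL climate indices page;
this is illustrative rather than a robust parser.
"""
import pandas as pd

cols = ["year"] + [f"m{m:02d}" for m in range(1, 13)]
amo = pd.read_csv("amo.long.data", sep=r"\s+", names=cols,
                  skiprows=1, nrows=113)           # hypothetical file/layout
amo_annual = amo.set_index("year").mean(axis=1)    # annual mean index
print(amo_annual.loc[1901:1913])
```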
Table 2 Rainfall classification according to the RAI
|
2023-06-06T15:02:42.695Z
|
2023-06-04T00:00:00.000
|
{
"year": 2023,
"sha1": "45816513daa53eb31c02790e1e0c5d0893efa862",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00382-023-06838-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "7a67054a055a8e3fcddec7206ba56d4789e7be44",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
}
|
3589847
|
pes2o/s2orc
|
v3-fos-license
|
Raised Serum Levels of Syndecan-1 (CD138), in a Case of Acute Idiopathic Systemic Capillary Leak Syndrome (SCLS) (Clarkson’s Disease)
Patient: Female, 49 Final Diagnosis: Systemic capillary leak syndrome (SCLS) Symptoms: Hypotension Medication: — Clinical Procedure: None Specialty: Allergology Objective: Rare disease Background: Systemic capillary leak syndrome (SCLS) (Clarkson’s disease) is a rare disorder of unknown etiology, characterized by transient episodes of hypotension, and the microvascular leak of fluids into the peripheral tissues, resulting in edema. Between 80–90% of patients with SCLS have a concomitant monoclonal gammopathy. Although translational in vitro studies have implicated vascular endothelial barrier dysfunction in the etiology of SCLS, the etiology and disease associations in clinical cases remain unknown. Case Report: We report a case of SCLS in a 49-year-old woman who initially presented with an upper respiratory tract infection, which was complicated by edema and compartment syndromes in the extremities that required fasciotomies. Serum levels of the cell surface heparan sulfate proteoglycan, syndecan-1 (CD138), a measure of endothelial surface glycocalyx (ESG) damage, were measured by enzyme-linked immunoassay (ELISA), peaked at up to 500 ng/mL (reference range, 50–100 ng/mL) and normalized on disease remission. Conclusions: This case report supports the view that damage to the microvascular endothelium, has a role in the pathogenesis of acute SCLS. This case also indicated that monitoring serum levels of syndecan-1 (CD138) might be used to monitor the progression and resolution of episodes of SCLS.
Background
Systemic capillary leak syndrome (SCLS) (Clarkson's disease) presents clinically with transient hemoconcentration, hypoalbuminemia, hypotension, and generalized edema, without known cause [1]. In 1960, Clarkson et al. first described an idiopathic form of this disorder characterized by repeated episodes of edema and hypotension associated with increased capillary permeability that resolved spontaneously [2]. Although triggers for acute episodes of SCLS have not yet been identified, they are frequently preceded by viral infections or intense physical exertion, suggesting a role for inflammatory mediators in acute SCLS [1].
Common complications of SCLS include hypotensive shock, acute kidney injury resulting from hypovolemia, and muscle compartment syndromes due to massive soft tissue edema, which can lead to mortality in untreated cases. Treatment for acute episodes of SCLS is supportive, including an intravenous infusion of crystalloids, colloids, and vasopressors to maintain adequate blood pressure, but with avoidance of excessive intravenous fluids that can exacerbate edema. Clinical observation and monitoring for the early signs and symptoms of an acute attack of SCLS, and treatment in a highly equipped intensive care unit (ICU) are all important measures to improve the chances of patient survival. A monoclonal gammopathy, typically IgG kappa, has been described in up to 90% of patients with acute SCLS [3,4]. From the published literature, most cases of SCLS present in middle-aged adults, and SCLS is equally reported in both sexes [5,6]. Although at least 300 cases of SCLS have been reported in the literature at this time, because the condition is rare and the diagnosis may be missed, it is likely that cases of SCLS are under-reported.
Findings from in vitro studies suggest that vascular endothelial hyperpermeability contributes to the clinical presentation of SCLS [7,8]. Also, serum samples from patients in the acute phase of SCLS, when added to normal microvascular endothelial cells in culture disrupted cell-cell contacts and induced morphological changes consistent with vascular barrier dysfunction [7,8]. Microvascular endothelial cells provide the foundation for the vasculature, and on their luminal side, the microvascular endothelial cells secrete endothelial surface glycocalyx (ESG), which is of critical importance for the stabilization of hemodynamic equilibrium [9]. The basal side of the endothelium, lined by a basement membrane, forms important contacts with the extracellular matrix. These microvasculature structures function as a barrier between the blood and the interstitial fluid. During acute episodes of SCLS, the microvascular endothelial barrier is highly permeable for fluid, plasma, and protein molecules of up to 900 kDa, which can enter into the interstitial space, causing edema [10]. The specific molecules that mediate endothelial hyperpermeability in SCLS are unknown.
Although a flu-like prodrome has been reported in up to 88% of cases of SCLS, specific infectious or other triggers for attacks can only be identified in approximately 60% of cases [3,5]. Common complications of acute attacks of SCLS include acute kidney failure (89%), muscle compartmental syndromes with rhabdomyolysis (43%), thromboses, pulmonary edema, and painful peripheral neuropathies. The five-year survival rate for SCLS has been reported to be between 73-78% [3,5].
This report is the second known case of SCLS in Norway [9]. In addition to reporting a rare condition, this report includes details of the medical history of the family members, in an attempt to identify predisposing factors for SCLS. In this case, a family history of lymphoproliferative disorders, cardiovascular disease, cancer, and diabetes was identified. A transient increase in the cell surface heparan sulfate proteoglycan, syndecan-1 (CD138) was identified during the acute presentation, which normalized during the recovery phase. These findings suggest that reduced ESG function could contribute to vascular endothelial hyperpermeability in SCLS.
Case Report
In 2009, a 49-year-old woman reported an upper respiratory tract infection with rhinorrhea, cough, and fever of between 38-39°C for two days. The patient had experienced increasing lethargy, fatigue, and loss of appetite for several days and was confined to bed. She became dehydrated and noted reduced urine output. On the fifth day of her symptoms, she experienced a brief syncopal attack with subsequent nausea and vomiting; she remained awake with normal mentation, but the episode resulted in emergency admission to hospital by ambulance, where she was given intravenous Ringer's solution.
On hospital admission, the patient had no detectable pulse or measurable blood pressure in her extremities. She experienced brief, intermittent episodes of syncope but was fully awake and mentally coherent between these episodes. She initially presented with blueish extremities but had no peripheral edema or skin rashes. Her blood pressure was 60/40 mm Hg, her pulse was regular at between 85-105 beats per min, her temperature was 35.9°C, her respiration rate was 42 breaths per min, she weighed 50 kg, and she was 1.65 m tall. Serum glucose was 9.8 mmol/L, with dipstick urinalysis showing urine protein of 3+ and erythrocytes of 2+. Electrocardiogram (ECG) and echocardiography were normal. Analytical blood results before, during, and after the initial presentation are summarized in Table 1.
Because of her hypotension and elevated white blood cell count, a diagnosis of afebrile sepsis was first suspected, and she was treated empirically with antibiotics; she was also anticoagulated with enoxaparin 40 mg because she was considered to be at increased risk for thrombosis; intravenous hydrocortisone 250 mg was given for presumed adrenal insufficiency. She received 20 liters of intravenous crystalloid to achieve hemodynamic stability, which resulted in rapid development of generalized peripheral edema.

Table 1 legend. Columns correspond to the day of illness. Results marked in red are above the upper reference limit, black within reference ranges, and blue below the reference range. Treatments are marked in green. Normal reference values are indicated in the top row. Hgb, hemoglobin (g/dL); Hct, hematocrit (%); WBC, white blood count (×10⁹/L); Neutrophils, neutrophil count (×10⁹/L); Plt, platelets (×10⁹/L); T.P./Alb, total protein (g/L)/albumin (g/L); Creatinine (µmol/L); AST, aspartate aminotransferase (U/L); ALT, alanine aminotransferase (U/L); CK, creatine phosphokinase (U/L); IVIG, intravenous immunoglobulin; SAG, saline-adenine-glucose-mannitol (SAGMAN) red cell concentrate.
Five hours after her hospital admission, she required fasciotomy in both lower extremities to treat acute compartment syndrome. The muscles of the legs appeared to be clinically normal, and a pressure gauge was inserted into the muscle to enable incision closure and wound monitoring. Although the diuretic furosemide was given, urine output remained minimal during the first 24-48 hours following hospital admission.
On the sixth day of her illness, pleural effusions and retropharyngeal edema were diagnosed radiologically. However, her leg pain and interstitial muscle pressure increased progressively, and a bilateral fasciotomy was again performed. Necrotic areas were now seen in the leg muscles.
Creatine kinase (CK) levels increased over the next two days, peaking on the ninth day of her illness (13,980 U/L). Sporadic increases in liver and muscle enzymes were detected. Following resuscitation with large amounts of intravenous fluids and plasma extravasation from the surgical fasciotomy sites, the patient developed anemia requiring transfusion on the tenth day of her illness. A slight increase in fibrinogen was noted on admission, while D-dimer was normal. Erythrocyte sedimentation rate (ESR), prothrombin time (PT), INR, and platelet counts were all within normal limits during her hospital course. On the ninth day of her illness, complement levels C3 (0.46 g/L) and C4 (0.05 g/L) were decreased. More extensive complement analysis performed one year later revealed a lack of mannose-binding lectin (MBL), as shown in Table 2.
Based on clinical signs, symptoms, and laboratory tests, the differential diagnosis included sepsis, idiopathic anaphylaxis, hereditary angioedema, polycythemia vera, and cardiac insufficiency. However, these diagnoses were excluded because the combination of signs and symptoms of severe hemoconcentration, hypoalbuminemia, protracted hypotension, and generalized edema with accompanying compartment syndromes were pathognomonic of systemic capillary leak syndrome (SCLS).
In support of the diagnosis of SCLS, C1 esterase inhibitor levels and function were normal, and blood cultures were negative. There were no triggers found for an anaphylactic reaction, and there were no typical allergic signs and symptoms of anaphylaxis, such as urticaria, stridor, or wheezing. Although small amounts of proteinuria were detected, urinary protein loss was unlikely to account for the acute drop in serum albumin. The urinary protein before the episode was equivalent to that obtained at the time of hospital admission, whereas serum albumin was initially normal on admission, and urine output was minimal during this time. Also, the urinary protein to creatinine ratio was only 0.4, which was well below the levels required for the diagnosis of nephrotic syndrome. Serum immunoelectrophoresis showed a monoclonal IgG-kappa paraprotein (1-2 g/L); IgG paraprotein is frequently (80-90%) found in the serum of patients with SCLS [1]. Free kappa and lambda light chains were within normal reference ranges, and the kappa to lambda ratio was normal. While IgG and IgA were in the lower reference range, IgM was in the high normal range. Serum levels of the cell surface heparan sulfate proteoglycan, syndecan-1 (CD138), a measure of endothelial surface glycocalyx (ESG) damage, were analyzed by enzyme-linked immunoassay (ELISA) (Diaclone, Besancon, France), peaked at up to 500 ng/mL (reference range, 50-100 ng/mL), and normalized on disease remission (Figure 1). Treatment with intravenous immunoglobulin IgG (IVIG) commenced on the seventh and eighth days of illness, based on previous reports of its efficacy in acute SCLS [12]. Clinical improvement was found within the first two hours of IVIG treatment, with resorption of generalized edema and an increase in urine output, and no signs of pulmonary edema developed in the following days. Antibiotics were withdrawn. Five weeks following hospital admission, the patient left the hospital and underwent a slow process of rehabilitation. However, due to leg pain and muscular weakness, she currently requires orthopedic support for walking, remains weak, and her quality of life is reduced. The patient also suffers from frequent upper respiratory tract infections but has had no further episodes of SCLS in the past eight years. The M-component of her monoclonal gammopathy has progressively declined to almost undetectable levels, and Hb, Hct, creatinine, and CK have remained within normal limits (Table 1). She has received prophylaxis with terbutaline and theophylline, as recommended in the literature for the treatment of SCLS. The patient has been informed of her medical condition and has been made aware of the early symptoms of SCLS, with information regarding this condition, should she require further hospital admissions.
Relevant past medical history and family history
Examination of the patient's prior medical history showed that she was born nine weeks before term, weighing 1,070 g. Pregnancy and delivery were complicated by toxemia, Rhesus incompatibility, and exchange transfusion. She had appendicitis at two years of age, which was complicated by sepsis. From early childhood, she had repeated urinary tract infections with cystitis and at least four episodes of cysto-pyelonephritis, but no permanent kidney damage. During her pregnancies, increased blood glucose was noted.
In her paternal family, there were several cases of premature death due to heart disease, cancer, and diabetes. Her mother had toxemia associated with all three of her pregnancies, died at 76 years of age, and had a weak serum IgM-lambda monoclonal gammopathy of undetermined significance (MGUS). Six of her mother's brothers and sisters died of cardiovascular disease, and one of her mother's sisters died at 72 years of age of multiple myeloma, after 12 years with a known IgA-kappa monoclonal gammopathy of undetermined significance (MGUS).
Discussion
The pathogenesis of systemic capillary leak syndrome (SCLS) (Clarkson's disease) is not understood, and the site of the vascular endothelial hyperpermeability in SCLS remains unknown. Although transient increases in circulating vascular endothelial growth factor (VEGF), angiopoietin-2 (Angpt-2), and monocyte/macrophage-associated inflammatory mediators including C-X-C chemokine motif 10 (CXCL10), tumor necrosis factor (TNF)-a, and interleukin (IL)-6 during acute episodes of SCLS suggest a role for these cytokines in the mechanism of vascular leak, the specific pathways resulting in vascular endothelial hyperpermeability in SCLS remain unknown. Preliminary study findings from our laboratory (unpublished data) suggest that individuals affected by SCLS have cutaneous hyper-responsiveness to inflammatory mediators that affect the microvasculature and that endothelial cells derived from patients with SCLS exhibit this behavior persistently in vitro.
The barrier function of the microvasculature is a complex physiological process in which adhesive cell-cell junction contacts prevent plasma and cell extravasation. Under normal homeostatic conditions, microvascular endothelial cells secrete endothelial surface glycocalyx (ESG), which fortifies the endothelial barrier via several mechanisms. ESG is a fibrous matrix rich in negatively-charged proteoglycans containing sialic and neuraminic acids, which prevent binding of circulating leukocytes and platelets. Vascular endothelial cells actively build and replenish the structural components of ESG, as well as blood group antigens and endothelial superoxide dismutase (eSOD). Components from plasma, including albumin, antithrombin, and calcium ions, are interspersed throughout the ESG meshwork, which helps to maintain intravascular osmotic pressure, coagulation, and platelet adhesion [13,14]. ESG also protects endothelial cells from shear stress induced by circulating blood, and ESG degradation due to endothelial cell damage may contribute to the severity of plasma leak in diverse clinical conditions including, dengue virus infection and heart failure [15][16][17].
Syndecan-1 (CD138) is a transmembrane proteoglycan component of ESG that binds to hyaluronan on the luminal side of the endothelial cell membrane and cytoskeletal components on the cytosolic side. In conditions associated with ESG disruption, for example, sepsis, trauma, and ischemia, syndecan-1 (CD138) is shed and circulates in the serum, reflecting both damage to the ESG and weakening of the endothelial barrier [18]. Damage to ESG promotes white blood cell and platelet adherence to the endothelial wall, initiating thrombosis. Plasma fluid leaks into the interstitial space through trans-endothelial cell pores and through widened intercellular gaps, which partially depend on the availability of syndecan-1 (CD138) [9].
In the case of the patient in this report, fluctuating levels of syndecan-1 (CD138) were found during the presentation of SCLS. Serum syndecan-1 (CD138) levels were within the normal range for the first several days of the acute presentation, peaked at five times the upper limit of normal on the sixth day, and then returned to baseline after that. An upper respiratory infection preceded the attack of SCLS in this patient, and it has previously been reported that viral membrane components, including neuraminidase, may lead to ESG disruption through digestion of neuraminic acids [19]. Therefore, viremia may have resulted in disruption of ESG in this patient. Serum syndecan-1 (CD138) levels increased to nearly ten times the upper limit of normal on days 9-12, which could reflect reperfusion injury resulting from mobilization of extravasated fluid during the recovery phase and restoration of intravascular volume, a phenomenon that has been described by previous clinical studies [20]. Syndecan-1 (CD138) levels returned to baseline during the convalescent phase, indicating the resolution of the acute endothelial dysfunction.
In this patient, several additional laboratory abnormalities were found during the acute phase in our patient that have not been previously reported in association with SCLS. Low levels of C3 and C4 were detected on the ninth day of the disease, which could reflect extravasation due to increased microvascular permeability, rather than complement consumption. Mannose-binding lectin (MBL) was low or undetectable in our patient. MBL adheres to mannose on the surface of many pathogens forming the complex, MBL-associated serine protease (MASP). MASP leads to the cleavage of C4 into C4a and C4b; C4b fragments then bind to microbes, initiating formation of C3-convertase. The subsequent complement cascade catalyzed by C3-convertase creates a membrane attack complex, which causes lysis of the pathogen as well as apoptosis and cell necrosis. MBL binds to glycoprotein in viruses and is instrumental in the first-line defense against infection [21]. MBL deficiency has been reported to be associated with recurrent infections in humans but has not previously been reported in SCLS. In the case of the patient in this report, MBL deficiency may have accounted for her history of frequent infections, including cystitis and pyelonephritis, as well as the frequent upper respiratory tract viral infections, such as the episode that triggered her attack of SCLS. Low levels of IgG and IgA, with normal levels of IgM, are findings that have previously been reported in cases of SCLS [22,23].
Finally, an important aspect of this case report was the discovery that several family members also had a history of monoclonal gammopathy of undetermined significance (MGUS). Although none of this patient's relatives had a history of episodes of SCLS, to our knowledge, such familial clustering of paraproteinemia has not been previously reported in SCLS, and the pathophysiological significance of this finding requires further study [24].
Conclusions
This case report supports the view that damage to the microvascular endothelium and its ESG have a role in the pathogenesis of acute systemic capillary leak syndrome (SCLS) (Clarkson's disease). This case also indicated that monitoring serum levels of syndecan-1 (CD138) might be used to monitor progression and resolution of episodes of SCLS. Further studies are needed to determine the mechanisms underlying the degradation of endothelial surface glycocalyx (ESG) and vascular damage in SCLS.
Statement
The patient was enrolled in a National Institutes of Health (NIH) institutional review board (IRB)-approved study protocol (I-09-0184) after written informed consent was obtained. The patient also consented to the publication of this case report.
|
2018-04-03T03:58:52.776Z
|
2018-02-16T00:00:00.000
|
{
"year": 2018,
"sha1": "bce980b6870222718b7963f599d6191aa0b74871",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc5823032?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "bce980b6870222718b7963f599d6191aa0b74871",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10285352
|
pes2o/s2orc
|
v3-fos-license
|
Experimental tests for the Babu-Zee two-loop model of Majorana neutrino masses
The smallness of the observed neutrino masses might have a radiative origin. Here we revisit a specific two-loop model of neutrino mass, independently proposed by Babu and Zee. We point out that current constraints from neutrino data can be used to derive strict lower limits on the branching ratio of flavour changing charged lepton decays, such as $\mu \to e \gamma$. Non-observation of Br($\mu \to e \gamma$) at the level of $10^{-13}$ would rule out singly charged scalar masses smaller than 590 GeV (5.04 TeV) in case of normal (inverse) neutrino mass hierarchy. Conversely, decay branching ratios of the non-standard scalars of the model can be fixed by the measured neutrino angles (and mass scale). Thus, if the scalars of the model are light enough to be produced at the LHC or ILC, measuring their decay properties would serve as a direct test of the model as the origin of neutrino masses.
From a theoretical perspective, there exist several options to explain the smallness of the observed neutrino masses. Perhaps the simplest -but certainly the most popular -possibility is the seesaw mechanism [5,6]. Many variants of the seesaw exist, see for example the recent review [7]. However, most realizations of the seesaw make use of a large scale, typically the Grand Unification Scale, to suppress neutrino masses and are, therefore, only indirectly testable.
On the other hand, many neutrino mass models exist, in which the scale of lepton number violation can be as low as the electro-weak scale or lower. Examples are supersymmetric models with violation of R-parity [8,9], models with Higgs triplets [10] or a combination of both [11], leptoquarks [12] or radiative models, both with neutrino masses at 1-loop [13,14] or at 2-loop [12,15,16] order. Radiative mechanisms might be considered especially appealing, since they generate small neutrino masses automatically, essentially due to loop factors.
In this paper we will concentrate on a model of neutrino masses, proposed independently by Zee [15] and Babu [16], in which neutrino masses arise only at 2-loop order. The model introduces two new charged scalars, h + and k ++ , both singlets under SU(2) L , which couple only to leptons. One can easily estimate, see fig. (1) and the discussion in the next section, that neutrino masses in this setup are of order O ∼ 1 eV for couplings f and h of order O(1) and scalar mass parameters, m S , of order O(100) GeV. Given that current neutrino data requires at least one neutrino to have a mass of order O(0.05) eV, one expects that the new scalars should have masses in the range O(0.1 − 1) TeV. The model is therefore potentially testable at near-future accelerators, such as the LHC or ILC.
Babu and Macesanu [17] recently re-analyzed this model in light of solar and atmospheric neutrino oscillation data. They identified the regions in parameter space, in which the model can explain the experimental neutrino data and tabulated in some detail constraints on the model parameters, which can be derived from the non-observation of various lepton flavour violating decay processes. Here, we extend upon the results presented in [17] by pointing out that (a) current neutrino data can be used to derive absolute lower limits on the branching ratios of the processes l α → l β γ. Especially important in view of future experimental sensitivities [18] is « ¬ Figure 1: Feynman diagram for the 2-loop Majorana neutrino masses in the model of [15,16].
that Br(µ → eγ) ≥ 10 −13 is guaranteed for charged scalar masses smaller than 590 GeV (5.04 TeV) in case of normal (inverse) neutrino mass hierarchy. And (b) decay branching ratios of the non-standard scalars of the model can be fixed by the measured neutrino angles (and mass scale). Thus, if the scalars of the model are light enough to be produced at the LHC or ILC, measuring their decay properties would serve as a direct test of the model as the origin of neutrino masses.
The rest of this paper is organized as follows. In the next section, we discuss the Lagrangian of the model, as well as its parameters in light of current oscillation data. In this part we will make extensive use of the results of [17]. In section 3, we calculate flavour violating charged lepton decays, l a → l b l c l d and l α → l β γ, discussing their connection with neutrino physics in some detail. Then, we consider the decays of the new scalars at future colliders, presenting ranges for various decay branching ratios as predicted by current neutrino data. We then close with a short discussion.
Neutrino masses at 2-loop
As mentioned above, the model we consider [15,16] is a simple extension of the standard model, containing two new scalars, h + and k ++ , both singlets under SU(2) L . Their coupling to standard model leptons is given by Here, L L are the standard model (left-handed) lepton doublets, e R the charged lepton singlets, α, β are generation indices and ǫ ij is the completely antisymmetric tensor. Note that f is antisymmetric, while h ′ is symmetric. Assigning L = 2 to h − and k ++ , eq. (1) conserves lepton number. Lepton number violation in the model resides only in the following term in the scalar potential Here, µ is a parameter with dimension of mass, its value is not predicted by the model. However, vacuum stability arguments can be used to derive an upper bound for this parameter [17]. For m h ∼ m k this bound reads The setup of eq. (1) and eq. (2) generates Majorana neutrino masses via the twoloop diagram shown in fig. (1). The resulting neutrino mass matrix can be expressed as with summation over x, y implied. The parameters ω xy are defined as ω xy = m x h xy m y , with m x the mass of the charged lepton l x . Following [17] we have rewritten h αα = h ′ αα and h αβ = 2h ′ αβ . I(r) finally is a dimensionless two-loop integral given by 1 For non-zero values of r, I(r) can be solved only numerically. We note that for the range of interest, say 10 −2 ≤ r ≤ 10 2 , I(r) varies quite smoothly between (roughly) 3 ≤ I(r) ≤ 0.2. Eq.(4) generates only two non-zero neutrino masses. This can easily be seen from its index structure: Det(M ν ) = Det(f αx ω xy f yβ ) = Det(f αβ ) = 0. The model therefore can not generate a degenerate neutrino spectrum. One can find the eigenvector for the massless state, it is proportional to where N = (1 + ǫ 2 + ǫ ′2 ) −1/2 is a normalization factor. Here we have introduced With M ν .v 0 = 0 one can express the parameters ǫ and ǫ ′ also in terms of the entries of the neutrino mass matrix. A straightforward calculation yields ǫ ′ = tan θ 12 sin θ 23 cos θ 13 − tan θ 13 cos θ 23 e −iδ . 1 We correct a minor misprint in eq.(7) of [17]. 2 We use the notation m ≃ ∆m 2 ⊙ and M ≃ ∆m 2 Atm , as well as m ν3 ≃ 0 for inverse hierarchy. This has the advantage that θ 12 = θ ⊙ , θ 23 = θ Atm and θ 13 = θ R for both hierarchies.
Note, that eq. (9) does not depend on neutrino masses, and that current data on neutrino angles require both ǫ and ǫ ′ to be non-zero. On the other hand, in the case of inverse hierarchy, m ν 1,2,3 ≃ (M, ±M + m, 0), eq. (8) leads to Again, both ǫ and ǫ ′ have to be different from zero. Note that δ in eq. (9) and (10) is a CP-violating phase, which reduces to a CP-sign δ = 0, π in case of real parameters.
With the equations outlined above, we are now in a position to give an estimate of the typical size of neutrino masses in the model. For an analytical understanding, the following approximation is quite helpful. Since m e ≪ m µ , m τ , ω ee , ω eµ and ω eτ are expected to be much smaller than the other ω αβ . Then, in the limit ω ee = ω eµ = ω eτ = 0, eq. (4) reduces to where From eq. (11) it is easy to estimate the typical ranges of parameters, for which the model can explain current neutrino data. In case of normal hierarchy, a large atmospheric angle requires ω µµ ≃ −ω µτ ≃ ω τ τ . Thus, we find the constraint On the other hand, a solar angle of order tan θ ⊙ ≃ 1 √ 2 requires ǫ ∼ ǫ ′ ≃ 1/2, see eq. (9). Inverse hierarchy still requires ω µµ ≃ ω µτ ≃ ω τ τ , although with a different relative sign, while ǫ and ǫ ′ have to be much larger, i.e. ǫ ∼ ǫ ′ ≃ M m , see also eq. (10).
What is the maximal neutrino mass the model can generate? Using eqs (3) and (13), this upper limit can be estimated choosing h µµ maximal. Motivated by perturbativity, we choose h µµ = 1. 3 Then, m k > ∼ 800 GeV is required (see the next section), and with m h = 100 GeV, I(r) ≃ 0.3 results. Putting finally f µτ = 0.03 we arrive at m ν 3 ≃ 0.05 eV. Since all other parameters in this estimate have been put to extreme values, f µτ ≥ 0.03 will be required in general. Obviously, even considering only neutrino data, the parameters of the model are already severely constrained. 3 One could also choose h µµ = √ 4π . However, as pointed out in [17], even h µµ = 1 at the weak scale will result in non-perturbative values of h µµ at scales just one order of magnitude larger.
Flavour violating charged lepton decays
Due to the flavour off-diagonal couplings of the k ++ and h + scalars to SM leptons, the model has sizeable non-standard flavour violating charged lepton decays. An extensive list of constraints on model parameters, derived from the observed upper limits of these decays, can be found in [17]. Here we will discuss decays of the type l α → l β γ and their connection with neutrino physics. As the experimentally most interesting case we concentrate on µ → eγ. A short comment on τ decays is given at the end of this section.
Consider the partial decay width of l α → l β γ induced by the h + scalar loop shown in fig. (2). In the limit of m β ≪ m α it is given by We will be interested in deriving a lower bound on the numerical value of eq. (14) in the following. Note, that although there is a graph similar to the one shown in fig. (2) with a k ++ in the intermediate state, there is no interference between the two contributions (in the limit where the smaller lepton mass is put to zero). Thus, in deriving the lowest possible value of Br(µ → eγ) we will put the contribution from k ++ to zero. Any finite contribution from the doubly charged scalars would lead to stronger bounds on m h than the numbers quoted below. Using eqs (7), (11) and (12) we can rewrite eq. (14) as With ǫ non-zero, constrained by eq. (9) or eq. (10) in case of normal or inverse hierarchy, Br(µ → eγ) has to be non-zero as well. Its smallest numerical value is found for the largest possible value of h µµ and I(r).
In order to calculate I(r) we need to fix r consistent with all experimental constraints. This is done in the following way. The decay width l a → l b l c l d induced by virtual exchange of k ++ , see fig. ( The most relevant constraint for the current discussion is derived from the upper bound on τ → 3µ decay, which yields, For h µτ ( mτ mµ ) = h µµ = 1, this bound implies m k > ∼ 770 GeV. For any fixed value of h µµ , we can therefore fix the minimum value of r, i.e. the maximum allowed value of I(r), which in turn fixes the lower bound on Br(µ → eγ). Fig. (3) shows the resulting lower limit on Br(µ → eγ) as a function of the charged scalar mass m h for the case of normal hierarchy. In this plot, we have assumed that the parameters µ, h µµ (and ∆m 2 Atm ) take their maximal (minimal) allowed values, thus we consider this limit conservative. We would like to stress again, that any non-zero contributions to the decay µ → eγ from k ++ can only increase Br(µ → eγ). Fig. (4) and (5) show the dependence of the limit on Br(µ → eγ) on the three neutrino angles. Both plots are for the case of normal hierarchy. Larger values of θ 12 (θ 23 ) result in larger (smaller) upper bounds. Smaller ranges of these parameters obviously lead to tighter predictions. For θ 13 , below approximately sin 2 θ 13 < ∼ 0.01 the dependence of Br(µ → eγ) is rather weak. Fig. (6) shows the calculated lower limit on Br(µ → eγ) for the case of inverted hierarchy, both, versus the reactor angle and versus m h . Due to the fact that ǫ must be larger than ǫ ≃ sin θ 23 / tan θ 13 , the expected values for Br(µ → eγ) turn out to be much bigger than for the case of normal hierarchy. Even Br(µ → eγ) < ∼ 10 −11 requires already TeV-ish masses for m h .
The most conservative limits for m h are always found for δ = π, sin 2 θ 12 = (sin 2 θ ⊙ ) M in , sin 2 θ 23 = (sin 2 θ Atm ) M ax and sin 2 θ 13 = (sin 2 θ R ) M ax . For the current bound of Br(µ → eγ) ≤ 1.2 × 10 −11 , we find m h ≥ 160 GeV (m h = 825 GeV) for Finally, we would like to mention that the decays τ → µγ and τ → eγ can be constrained in a similar way. However, the resulting lower limits, also of order O(10 −13 ), are far below the near-future experimental sensitivities and thus less interesting.
Accelerator tests of the model
In this section we will briefly discuss some possible accelerator signals of the model. With the couplings of h + and k ++ tightly constrained by neutrino physics and flavour violating lepton decays, it turns out that various decay branching ratios can be predicted. Observing the corresponding final states could serve as a definite test of the model as the origin of neutrino masses.
In [17] it has been estimated that at the LHC discovery of k ++ might be possible up to masses of m k ≤ 1 TeV approximately. In the following we will therefore always assume that m k ≤ 1 TeV and, in addition, m h ≤ 0.5 TeV. Given the discussion of the previous section, this range of masses implies that µ → eγ should be seen at the MEG experiment. The h + will decay to leptons with a partial decay width of, in the limit m α = 0, We can re-express eq. (19) in terms of the parameters ǫ and ǫ ′ as , .
in this plot is motivated by the expected sensitivity of the next generation of reactor experiments [19,20]. The width of the band of points in these plots indicates the uncertainty with which the corresponding ratios can be predicted.
In case of normal (inverse) hierarchy, assuming best fit parameters for the neutrino angles, eq. (20) indicates that the branching ratios for e, µ and τ final states of h + decays should scale as 2/12 : 5/12 : 5/12 (1/2 : 1/4 : 1/4). Inserting the current 3 σ ranges of the angles, following eqs. (9) and (10) for normal (inverse) hierarchy. The different predicted branching ratios for final states with electrons should make it nearly straightforward to distinguish normal and inverse hierarchy. Measuring any branching ratio outside the range given in eq. (22) would rule out the model as possible origin of neutrino masses. The doubly charged scalar of the model decays either to two same-sign leptons or to two h + final states. The partial width to leptons is, for m α , m β = 0, whereas the decay width to two h + can be calculated to be Here, β(x 2 ) = √ 1 − 4x 2 is a kinematical factor. The couplings h αβ in eq.(23) are constrained by neutrino physics, see eq.(13), and by lepton flavour violating decays of the type l a → l b l c l d . For m k ≤ 1 TeV the couplings h ee , h eµ and h eτ are constrained to be smaller than 0.4, 4 · 10 −3 and 7 · 10 −2 [17]. Thus, the leptonic final states of k ++ decays are mainly like-sign muon pairs (and possibly electrons).
An interesting situation arises, if m k ≥ 2m h . In this case, one can measure the lepton number violating parameter µ of eq.(2) by measuring the branching ratio of k ++ → h + h + . Combining eq. (23) and eq. (24) we can write Here, h ee ≪ h µµ has been assumed. (For non-zero h ee replace simply h µµ → h µµ +h ee in eq. (25).) Plots of constant Br(k ++ → h + h + ) in the plane (m k , m h ) are shown in fig. (8). Here, µ = f m h , with f = (6π 2 ) 1/4 has been used. Fig. (8) shows the resulting branching ratios for 2 values of h µµ , fixing in both cases the couplings f αβ such that the atmospheric neutrino mass is correctly reproduced. For h µµ < ∼ 0.2 the current limit on Br(µ → eγ) rules out all m h < ∼ 0.5 TeV, thus this measurement is possible only for h µµ > ∼ 0.2. Note that smaller values of µ lead to smaller neutrino masses, thus upper bounds on the branching ratio for Br hh k can be interpreted as upper limit on the neutrino mass in this model.
Conclusion
The observed smallness of neutrino masses could be understood if it has a radiative origin. In this paper, we have studied some phenomenological aspects of one wellknown incarnation of this idea [15,16], in which neutrino masses arise only at 2-loop order.
|
2014-10-01T00:00:00.000Z
|
2006-09-29T00:00:00.000
|
{
"year": 2006,
"sha1": "6d25dd8816b230bff0a6ef14a8b9d7d3f8ca4b44",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0609307",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c9ac7b4daa272acdebafc83c18ac5bf157a7fe43",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
230530813
|
pes2o/s2orc
|
v3-fos-license
|
The Effectiveness of Module and Webinar on the Management of Dental Health Emergency in Children with Down Syndrome
Background: In this COVID-19 pandemic, dental care is one of the affected health treatments due to its high risk of exposure to the virus. This demotivates many parents to participate in regular dental visit, particularly in children with Down Syndrome, due to the high risk of exposure. Therefore, health education is needed for the parents to provide appropriate first aid in the event of dental health emergency in children with down syndrome. Purpose: To increase the knowledge of parents with Down Syndrome children towards Dental Emergency. Methods: This was a semi-research study using descriptive methods and comparison of the pre-test and post-test. This research involved 241 persons. The intervention carried out using modules and emergency dentistry procedures for children with Down Syndrome, packaged in an interactive webinar designed for parents and caregivers. We used paired t-test to determine the significance of the improvement in understanding of the subject matter. Results: There was a significant increase in the understanding of the subject matter from a mean of 5.9 to 9.5 (P <0.05). Conclusion: The program was effective in improving the understanding on the management of dental health emergency in children with down syndrome.
INTRODUCTION
In this COVID-19 pandemic, dental care is one of the affected health treatments due to its high risk of exposure to the virus. The Ministry of Health of the Republic of Indonesia issued a policy regarding restrictions on dental care through Presidential Decree No. 11 of 2020. 1 As of November 2020, there were 448,118 confirmed cases of COVID-19 in Indonesia. 2 With the high number of cases of COVID-19 infection in Indonesia, dental services will be reduced to a minimum. Several health care centres in Indonesia limit dental care except for emergency cases. 3 The stigma of transmitting COVID-19 infection in dental practice also affects patients' psychology. Many patients delay non-emergency treatment to see a dentist. Of the number of patients who come, most of them came because of unbearable pain, broken teeth, and swelling. 4 If patients feel that their teeth are tolerable, they would not to go to the dentist. 5 Children with Down Syndrome are one of the vulnerable populations who need special care in dental care. 6 Down syndrome is an anomaly that occurs in genetic autosomes and chromosomes, especially chromosome 21. This case occurs in 1 in 700-800 births in the world. 7 This chromosome anomaly affects the physiology and characteristics of people with Down Syndrome, including their oral cavity. 8 Management of oral health in children with Down Syndrome certainly requires the help of experts such as paediatric dentists.
In the COVID-19 pandemic, it is not recommended to bring a child with Down Syndrome to a health service centre for dental treatment due to unconducive situations and the children condition. Many conditions affect systemically, for example, heart defects, which is one of the comorbidity of the In fact, children with Down Syndrome are also inseparable from dental and oral health problems such as pulpitis, gingivitis, even some emergencies in dental health. Ghaith et al in 2019 reported that 88.7% of respondents with Down Syndrome experienced dentomaxillary trauma. 9 Dentomaxillary trauma is currently the highest incidence dental health problem in children with down syndrome, Indonesian Journal of Dental Medicine Volume 3 Issue 2 2020; 22-24 https://e-journal.unair.ac.id/IJDM especially when gathering with other children during community activities or at home. There is no specific activity for that. Therefore, parents or caregivers need to be prepared. The risk of falling while playing or getting hit is very high. It also carries the risk of dentomaxillary trauma or trauma to the tooth and its supporting tissue. Common occurrences include bleeding gums as a result of trauma, and tooth loss as a result of falling. In this regard, administrators, caregivers, and parents do not have sufficient knowledge in such emergency cases in dental health. This makes improper management for dentomaxillary trauma. Coupled with the reluctance of parents to have regular dental visit in this COVID-19 Pandemic.
The program was aimed to increase the knowledge of parents or caregivers of children with Down Syndrome in providing first aid management of dental health emergency using modules and SOP provided by the researchers.
MATERIALS AND METHODS
This was a semi-research study using descriptive methods and comparison of the pre-test and post-test. This research involved 241 persons, and they are members of Parents of Children with Down Syndrome Community in Rungkut District, Surabaya city. The program implementation consisted of modules and emergency dentistry procedures for children with Down Syndrome, packaged in an interactive webinar designed for parents and caregivers.
The module contains definitions of dental emergencies, what conditions are considered an emergency, what parents can do as first responder, what tools and materials must be provided at home, to arrange home and school conditions to be safe and reduce the risk of dentoalveolar trauma.
Before participating in the program, the participants were asked to take pre-test. After that, a paediatric dentist delivered the materials followed by question-and-answer session. The program took two hours. Once the program completed, the participants were asked to take post-test.
RESULTS
The program was conducted in August 2020, involving a total of 241 respondents or participants. Included in it are 50 members from the Parents of Children with Down Syndrome Community in Surabaya. It was a webinar, and it was open to public. Therefore, many of participants were from outside of Surabaya.
We found that 75.9% of respondents who have children with Down Syndrome come from outside of Surabaya. 58.1% of respondents continue to do activities outside the home as usual, even though they actually feel anxious (52.3%) about the COVID-19 virus. However, 75.1% of respondents chose to visit the dentist in emergency cases only. Table 2 shows the pre-test and post-test results on the respondent's understanding of the subject matter. The paired t-test analysis was carried out to see the significance of improvement in participants' understanding. Table 2 shows a significant increase between the pre-test and post-test results. It increased from 5.9917 to 9.5975.
DISCUSSION
These results show that the participants found no problem going out, but they were reluctant to visit dentist during the pandemic. The participants believed that dental medical procedures expose them to a greater risk of infection via aerosol, splatter, and droplets. 10 This shows a good response in terms of awareness of the parents of children with Down Syndrome, considering that one of the conditions affected by this chromosomal anomaly is a heart disorder problem which is a comorbid from Considering the high incidence of dentomaxillary trauma in children with Down Syndrome and limited access to leave the house, the parents and caregivers are expected to be able to provide proper first-aid management in the event of trauma before visiting a dentist. The researchers made the module about the management of dental health emergency in children with Down Syndrome at home in collaboration with the Department of Paediatric Dentistry, Faculty of Dental Medicine, Universitas Airlangga. The goals were to make prevent the parents from panicking, to make the parents understand of the situation, and to make the parents able to take necessary first-aid measures.
The module contains interesting visual content that is easy for parents and caregivers to remember. Previous study reported that visual media is an effective education approach, not only provide information in writing, but also in illustrated figures for ease of understanding. 12 In addition, the module was also presented by a Paediatric Dentist experienced in children with Down Syndrome. It attracted parents and caregivers to pay more attention to the material that we had a dynamic and fruitful discussion. 13 This is evidenced by the increase in the post-test result. Initially, respondents had an average of 5.9. This means that the respondents had fairly poor understanding on the management of dental health emergency in children with Down Syndrome. The post-test result was 9.5. This suggest that the respondents had a significant improvement in the understanding of the subject matter. The paired t-test result showed a significance between the pre-test and post-test results (P < 0.05). Therefore, the module and the webinar on the subject matter has successfully improve the understanding of the participants.
|
2021-01-06T04:17:31.740Z
|
2020-12-07T00:00:00.000
|
{
"year": 2020,
"sha1": "5c1e030a446d3fe926e54638ce2f26c19dfbd7d5",
"oa_license": "CCBY",
"oa_url": "https://e-journal.unair.ac.id/IJDM/article/download/23676/15297",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ea85db939693646e34ca889ee10b99169b03e8ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
201636061
|
pes2o/s2orc
|
v3-fos-license
|
Decision-making, Risk, and Gist Machine Translation in the Work of Patent Professionals
This is the first study on how patent professionals use gist machine translation (MT) in their work. Inductive, qualitative research methods were adopted to explore the role of gist MT specifically in decision-making. Results show that certain decisions by patent professionals rely on gist MT, that the decision to involve human translation is often based on a risk assessment, and that certain factors in the patent environment give affordances for the use of gist MT. The study contributes to the body of knowledge on patent MT users and on gist MT users in general.
Introduction
Machine translation (MT) for patents has been developed for a few decades and a broad body of research is devoted to the technologies and techniques for producing patent MT. The professionals who work with patents -patent attorneys, counsels, examiners, etc. -use this MT in its raw, unedited form to obtain a basic understanding, or gist, of patent documents that they need but that are in languages they do not understand. Although their use of this raw MT (termed gist MT in this article) has been widespread for approximately a decade, very little research has been conducted on these MT users. In fact, while the number of studies on one group of professionals who use MT in their work, translators, has increased in recent years, research on other professional groups who use the technology remains scarce. The main objective of this article is to provide the first study focused specifically on the users of patent MT. The article presents the results of a qualitative, exploratory study based on interviews with a small group of patent professionals who use MT in their daily work. Three themes were investigated for the article: the types of decisions patent professionals make based on machine-translated information, the risk assessment they use when deciding between relying on gist MT or opting for human translation, and finally, the environmental factors that appear to give affordances for the use of gist MT in this context. Two important aspects of patent MT are not in the scope of this study. First, the article does not focus on the issue of quality of MT output, as that has already been studied in numerous other articles. Instead, I wanted to concentrate on exploring other factors that influence gist MT use. Second, another key application of patent MT is its use by professional post-editors to enhance their translation process. These users are not included in the scope of this study.
The article will help to inform research and solution development in the patent MT field. It will also contribute to studies of different professions' use of gist MT and to a general understanding of gist MT users. Better knowledge of how MT is used in different contexts and what contributes to successful use will help us to define what makes a potential use case good, or conversely poor, for gist MT use. In addition, research on experienced users of this form of artificial intelligence can give us insights into the needs of users of other AI technologies.
The structure of the article is as follows: the next section contains a review of related work. This is followed by a description of the data and methods used in this study. Section 4 discusses the types of decisions that patent professionals report making based on gist MT. Section 5 describes the risk assessments that informants appeared to undertake when deciding on ordering human translation. Section 6 focuses on the factors in the work environment that appear to support the use of gist MT. Final conclusions and suggestions for further studies are presented in Section 7.
Related work
To the best of my knowledge, thus far no studies have been conducted on how patent professionals use gist MT in their everyday work. A few experimental studies have been done. Larroyed (2018) and Tinsley et al. (2012) describe evaluation experiments in which one evaluation is performed by real patent professionals. A number of studies describe technical solutions for patent MT, and some of those include discussions of some aspects of MT in patent professionals' work, for example, Tinsley (2017), Rossi andWiggins (2013), andList (2012). In addition to these, a few studies that focus on patent searchers also allude briefly to MT in patent search, including Joho et al. (2010) and McDonald-Maier (2009).
To date there is only a small body of research on professional areas where gist MT is used. Professional translation has been studied to some extent, though in that industry MT is predominantly used for dissemination and not for gisting. Industries with reported use of gist MT include customer support, academia, medicine and the legal field. Customer support groups began to offer multilingual access to knowledge base articles through gist MT in the early 2000s. However, although several articles describe these solutions (e.g. Stewart et al., 2010;Dillinger and Gerber, 2009), very little user experience research has been undertaken, as stated in one of the few studies on actual users (Burgett, 2015: 30). A growing body of research focuses on the use of MT in academia. Much of this focuses on the effects of MT on education and students, but some of the studies also cover educators' viewpoints, such as Bowker and Eghoetz's (2007) study on the acceptability of MT in a university setting and Bowker and Buitrago's (2019) book on using MT in research. Health care is another field where gist MT is beginning to be researched. Liu and Watts (2019) give a good overview of current studies on mobile MT use in health care. Most recently, John Tinsley describes the emergence of new use cases for gist MT in two different industries: legal and life sciences (Beninatto and Stevens, 2019). Both cases are similar to the patent case in that MT is mainly used to sift through large numbers of documents to categorize and then locate the ones that need further scrutiny and possibly human translation.
Work in the area of risk and translation has examined risk assessment and management strategies either as part of the individual translator's work (Pym, 2015;Pym and Matsushita, 2018), or from the perspective of the translation process and service provider (Canfora and Ottmann, 2018). Canfora and Ottmann (2016) present a model for risk management for internal translation processes, including a risk matrix combining the probability of risk and the potential consequences. A recent paper by Nitzke, Hansen-Schirra and Canfora (2019) introduces a model for assessing the risk associated with using post-edited or gist MT. Nonetheless, the focus of that study is primarily the post-editing context, while scenarios involving unedited MT remain mostly unexplored.
Methods
The main data for this study was gathered through interviews with nine patent professionals working in Scandinavia. The term patent professional in this study refers to professionals in the intellectual property rights (IPR) field who use their expertise in patents to assist and guide others (internal or external groups) in their IPR processes. These professionals hold a variety of titles, such as Patent Counsel, Patent Attorney, and Patent Examiner. The informants for the study are presented in Table 1.
Type of informant N
Patent professionals working in companies that are active in filing and prosecuting patent applications 5 Patent professionals working in an IPR service provider 2 Patent professionals working in a government patent office 2 Total 9 Table 1. Informants interviewed for the study I included informants from the key areas where patent professionals work: private companies, IPR service providers, and governmental organizations. The largest group consisted of professionals working in companies that file patent applications. This is somewhat reflective of the 2010 survey by Joho et al., in which 88% of respondents reported working predominately with internal clients (Joho et al., 2010: 16), which indicates that they worked in patent-filing and prosecuting companies. In addition to the interviews, I gathered background information through talking to people involved in creating and maintaining patent MT solutions.
The average age of the informants was 47 and the average length of experience in the IP field was considerable, 17 years. The group was highly educated; all had at least a master's level education and four of the nine held a PhD degree. This is similar to the educational levels reported in Joho et al. (2010).
The interviews were all semi-structured discussions that occurred either at informants' workplaces or through Skype audio calls. They were conducted in the time frame of April 2018 to February 2019. The two themes of context and transparency were explored in the interviews. I used a variety of sources in the development of the questions. ISO standard 9241-210:2010 (ISO, 2010) defines context of use through the broad categories of users, goals and tasks of users, and environment, and this was a good starting point. I relied on descriptions of patent processes in official documents (PRH, 2018;EPO, 2018) and other sources (Alberts et al., 2017;Oesch et al., 2014;Joho et al., 2010) to identify the touchpoints users might have with MT and to develop questions around those touchpoints. The questions also developed somewhat over the course of the datagathering phase.
Most of the interviews were recorded, transcribed with the aid of automatic transcription tools, and then post-edited. One interview was not recorded due to technical difficulties, so the data from that interview consisted of my notes taken during the interview. A total of 12 hours of interviewing was conducted, and 229 pages of transcription and note data compiled for analysis.
The data was analyzed by closely following the thematic analysis method outlined by Braun andClarke (2013, 2006, n.d.) with additional guidance from Merriam and Tisdell (2016). The data was approached from a semantic perspective, wherein "coding and theme development reflect the explicit content of the data" (Braun and Clarke, n.d.) rather than searching for underlying meanings in the data. One reason for this was that the topic of the use of technology at work was fairly straightforward. Also, the focus was on the context, as described by informants, instead of each informant's personal experiences.
At a point later in the analysis process, a summary of findings was compiled and a member check performed by three of the informants. They were asked to compare the results against their own experiences and to comment on any incongruences they may detect. These comments were then reviewed and incorporated into the analysis.
A qualitative method was chosen for this study for specific reasons. First, it was necessary because this is the first study on how this group uses gist MT, and research at such an early stage often requires inductive, exploratory methods. When designing the study, there simply was not enough knowledge on these users to allow for the crafting of a quantitative study such as a survey. A second reason was that the small body of research on gist MT users in general tends to rely on surveys and laboratory experiments. I believed there was a need for in-depth studies that would give us a more nuanced view of the use of gist MT. I selected interviewing for data-gathering because it proved difficult to persuade exceedingly busy patent professionals to participate in a study using more time-consuming qualitative methods such as diaries. The interviews required a commitment of only 1.5 hours, which seemed to be more tenable. Rossi and Wiggins (2013: 116) argue that "In the patent field, MT is used as a support tool for performing novelty, validity, infringement or state-of-the-art searches, and to provide a first understanding of the content of retrieved publications." However, is gaining a "first understanding" really the only way patent professionals use MT, or do they actually make decisions based on gist MT? For example, Henisz-Dostert's study of scientists' use of MT to understand Russian scientific articles reported that, contrary to predictions by early scholars that MT would be used only for scanning, scientists used MT "more as a tool of information than as a tool for the selection of information." (Henisz-Dostert, 1979: 180). One goal of this study was to explore the ways in which patent professionals use gist MT and the decisions they make with its help.
Relevance
One of the primary uses for patent MT, as defined by Tinsley (2017: 411) is "[to] provide an ondemand 'gist' translation of foreign patents for information purposes to determine relevance." The primacy of using gist MT for this purpose was Proceedings of The 8th Workshop on Patent and Scientific Literature Translation also found in this study. Informants described how they used gist MT when searching for 'prior art' (patent documents that show that an invention is not new and therefore present an obstacle to patenting it). For each patent document (either a patent or a patent application) found in their search, they need to decide whether it is relevant to the IPR process they are working on or not. Informants discussed four main IPR processes in which they use machine-translated information to determine relevance: (1) the patenting process (Does this invention show enough novelty that it could be patented?), (2) freedom to operate (Can we launch our products in this market or are there patents that we would be infringing on if we launched?), (3) monitoring (Is this patent application sufficiently relevant to our work that I should monitor its progress?), and (4) infringement (Is this patent infringing on one of our patents or are we in danger of infringing on someone else's patent?).
The results of this study reveal that the decision on relevance is very often made without the help of human translation. Therefore, the first decision made is not whether or not a patent document should be sent for human translation, but whether or not it appears to be relevant, and that is determined largely on the basis of gist MT: I would say it's [the use of MT] successful in 90 percent of the time because the conclusion is, this is not relevant…So rejecting things from further analysis I think is done 9 out of 10 reviews of the machine translated documents. (PP4) 1 It is important to note that the decision on relevance is not as minor a decision as it may seem. The consequences of mistakenly discarding a patent document that seems irrelevant can be considerable, as was reported by informants: .
Monitoring
A second type of decision that is very often made based solely on gist MT is the decision to tag a patent application for monitoring. If an application is deemed relevant, patent professionals may decide to follow its progress throughout its prosecution. Besides being used to determine enough relevance for monitoring, Gist MT is also used to understand communications on the application's prosecution or to review changes in the application. Tagging an application for monitoring also often means that the decision on human translation is postponed, because the application will most likely change before it is granted:
Patenting and opposition
A third area in which informants reported using gist MT in decision-making was during the patenting process. Within the European context, the role of MT in the examination process is explained in official guidelines: In order to overcome the language barrier constituted by a document in an unfamiliar non-official language, it might be appropriate for the examiner to rely on a machine translation of said document…A translation has to serve the purpose of rendering the meaning of the text in a familiar language…Therefore mere grammatical or syntactical errors which have no impact on the possibility of understanding the content do not hinder its qualification as a translation.
(EPO, 2018, Part G, Chap. IV-4) Patent examiners typically share the results of their patentability search with patent applicants, and any relevant patent document that is in another language is provided in machinetranslated form. Unless the applicant decides it is so important that they will provide a human translation, prosecution proceeds based on the machine translation. MT is occasionally used in opposition proceedings as well: PP9
Deciding on human translation: an exercise in risk assessment
As far as translation is concerned, the most important decision patent professionals or patent applicants make is whether to rely on gist MT for understanding a relevant document or to have it translated by a human. Nitzke et al. (2019) proposes that this type of decision involves risk and that an assessment of those risks should be part of the process of decision-making. Evidence of such a risk assessment emerged in this study, with patent professionals weighing various factors before deciding on gist MT or human translation. The factors that supported human translation of a patent document included the riskiness of the IPR process in which the document would be used, the assumed relevance and importance of the otherlanguage document, and the potential consequences if a misunderstanding would occur due to an error in the gist MT. The factors supporting the use of gist MT were lower costs, quicker access to information, and trust that the patent document is adequately understood. This assessment was summarized by some informants: Each side of this assessment is examined more closely below.
Arguments for human translation
One of the top considerations for triggering human translation was the IPR process the otherlanguage relevant document would be used in, with some processes being seen as more high-risk than others. Cases that involved infringement or freedom to operate might involve considerable costs and legal involvement, and these were consequently cited as cases in which human translation is often needed:
Arguments for relying on gist MT
The main arguments for using gist MT are clearly that translation is very quick and does not generate extra costs. MT is provided at no cost by various national and international patent offices such as the Japan Patent Office and the EPO, and it is commonly included by default in commercial patent search tools. Its use is also made easy through tight integration to patent search tools and processes. A complete understanding of the arguments for relying on MT in the risk assessment, however, requires consideration of another important element: how strong is the patent professional's trust that they have a sufficient understanding of a patent document? Much of this depends on the quality of the machine translation, of course. However, past studies have shown that other factors can enhance users' abilities to understand MT, and those were reported as helpful in this study as well. Two factors appeared to contribute to trust in understanding in this case study: the fact that patent professionals rely on other resources than the gist MT, and the background knowledge that patent professionals possess. These are discussed further in the following subsections.
Understanding does not depend on MT alone
The understanding of the machine translation of a patent document can be seen as a process of trying different alternatives until a sufficient level of understanding is achieved. The first alternative is to combine the gist MT with other resources, such as drawings and chemical formulas in the original-language patent documents, to enhance understanding. This combining of MT and auxiliary, often multimodal, information to obtain an understanding of other-language texts has been reported in other studies on MT users (Nitzke et al., 2019;Pituxcoosuvarn et al., 2018;Suzuki and Hishiyama, 2016;Way, 2013;Gaspari, 2004;Henisz-Dostert, 1979 (PP6) The next alternative professionals can turn to are the other patent professionals they collaborate with. Instead of ordering a human translation of a text that is not sufficiently understood, they can ask the patent professionals who work more closely with the inventors (for example, the patent professionals in the country which the patent originates from) to clarify unclear passages for them.
Background knowledge aids understanding
As mentioned previously, the informants of this study were both highly experienced in the IPR field and well educated. Their contextual knowledge and competences in languages appeared to be important factors in helping them understand and use machine-translated information effectively.
The importance of MT users' knowledge of context in helping them understand machinetranslated texts has been reported in a number of studies. Henisz-Dostert (1979) found that a user's familiarity with the subject matter was seen as the main factor in determining the understandability of machine-translated texts. Other studies that have highlighted the importance of contextual knowledge include Bowker and Buitrago (2019), Yasouka and Björn (2011), Yamashita et al. (2009) and Smith (2003.
In the patent context, contextual knowledge is often divided between the patent professionals, who know the patent genre, and the inventors or researchers behind the patents, who know the subject matter better. These competences, their role in helping to understand machine-translated patents, and the division of expertise between patent professionals and inventors were a common theme in the interviews.
And when you understand…if we're talking about patent publications there's a certain structure and there's a certain format that they're in, then it's in a way easy easier to follow. (PP2) Several previous studies have examined the role of users' language competence in gist MT scenarios (Nurminen and Papula, 2018;Nurminen, 2016;Henisz-Dostert, 1979). In the current study, this background competence also appeared to be a factor in successful use of MT. Although none of the informants spoke English as their native language, all used English daily in their work. Their MT use was mainly from other languages into English, not into their native languages. Besides English, all informants had varying levels of competence in other languages, with German being the most often mentioned, followed by French, Spanish, Swedish, Italian, Dutch, and Japanese. Several informants indicated that competence in the source language helped them to understand texts that were machine-translated from those languages: And quite often I actually combine a machine translation and original reading…the complementarity of understanding the structure of the language better than the machine, and the machine understanding more words than I do, is a good complementarity. (PP4) However, the reality is that the major languages patent documents are translated from are Chinese, Japanese, and Korean because these countries are significant producers of patents. China became the world's largest patent producer in 2011. By 2017, China had filed 1.3 million patent applications, more than double the number filed by the second country, the U.S. (WIPO, 2018: 40). The predominance of China was mentioned in all interviews. We can assume from this that competence in the three major patent languages of Chinese, Japanese, and Korean would be particularly useful for patent professionals.
Affordances in the patent context
Thus far this article has presented a scenario in which patent professionals can and do use gist MT to make certain decisions. The article has also discussed the factors involved in their decisions to rely on gist MT or to order human translation. However, in the analysis of this study's data, certain contextual factors emerged which appeared to make affordances for the use of gist MT. These affordances must be considered when discussing this specific case because they appear to play an important role in making the use of gist MT tenable, and an understanding of this ecosystem's use of gist MT is incomplete without them. The following sections explore two factors of affordance, risk tolerance and legitimacy.
Risk-tolerant environment
In the book titled Translation Quality Assessment, Andy Way states that MT systems need to be evaluated with the knowledge of what the system would be used for. Way also notes that " [o]f course, some objectives could be more tolerant of MT errors than others" (Way, 2018: 170). Certain features of the patent environment, while perhaps not fully error-tolerant, appeared to make affordances for the risks and potential errors tied to the use of gist MT.
The patenting process is long and iterative, with multiple parties often reviewing the same or similar texts. Different stakeholders may have different interpretations of a patent application's scope and claims. To address these issues, the process contains space for discussion and mechanisms for stakeholders to examine and challenge each other's work. One of these is explained in the Finnish Patent and Registration's Patent Guide: Even though inventions must show absolute novelty, it is not possible for patent authorities to clarify all public information when examining an application. For this reason, the examination process is augmented by the third-party observation and opposition processes, in which third parties, for example competitors, can bring to the attention of the authorities issues that did not emerge during the examination of the patent application. (PRH, 2018: 19) The nature of this process means that there are also multiple stages where errors in the understanding of gist MT can be detected and corrected. This was described by one informant: Well Besides the risk tolerance present in processes, the informants in this study displayed a tendency to accept the risk involved with using MT and making decisions based on it. One reason for this might be that the informants were vastly experienced. Another reason might be that the IPR field contains other risks besides the use of gist MT, so the organizations they work in might have a higher willingness to take risks, or "risk appetite" as defined by Nitzke et al. (2019). Finally, the acceptance of risk might be an acknowledgement that the risk is simply necessary due to the impossibility of relying on human translation for the large volumes of documentation they regularly encounter, as voiced by one informant: Yes, there is always risk involved. But we have so many patents to go through. Hundreds and hundreds at a time. It would be impossible if all of those had to be translated by a human. Always a risk though. (PP3)
Legitimacy of MT
One aspect of the use of MT in the patent environment that I did not expect when I began my research was the legitimacy that it enjoys. One of Merriam-Webster's definitions for legitimate best reflects what it means in this context: "conforming to recognized principles or accepted [emphasis by author] rules and standards." 3 Three different themes in this study illustrated this legitimacy: MT use was transparent, the boundaries of its legitimacy were documented and generally agreed upon by users, and its users had a relatively high level of 'MT literacy.'
Transparency
Transparency in gist MT use has been addressed in a few reports, most recently in a 2019 Globally Speaking Radio podcast in which John Tinsley reported that the legal profession is beginning to use MT for e-discovery, and that its use is fully transparent in that context: "So you go into the court and say to the judge, 'We are taking this position on the basis of a machine translation of this document into English,' and that's legally defensible" (Beninatto and Stevens, 2019). At least in the European context of this study, the first sign of transparency was the inclusion of MT in EPO guidelines. Second, descriptions by study informants depicted an environment in which the role of MT is transparent to all. They also reported that MT is transparent to secondary users of patent MT, the internal and external clients the patent professionals work with. The results of searches these clients receive from patent professionals often include documents that are machine-translated. These are clearly marked as machine translations and they often also include the date and MT tool that produced the text. Patent professionals discuss MT with the secondary users, as in this example:
MT literacy
In 1993, Church and Hovy defined six "desiderata" for a good use case for MT. Among the six were: "it should set reasonable expectations" and "it should be clear to the users what the system can and cannot do" (Church and Hovy, 1993: 257). The inflated expectations of MT capabilities reported in other contexts (e.g., Toral et al., 2018) do not seem to be occurring in patent MT. In the present study, MT was considered to be one tool among others, and people were aware of its uses and limitations. I heard no reports of overreaching claims on MT capabilities.
Conclusions
The main objective of this study was to explore the types of decisions patent professionals make based on machine-translated information, the risk assessment they use when deciding between relying on gist MT or using human translation, and the environmental factors that appear to support the use of gist MT in this context. The results revealed that patent professionals routinely make decisions on relevance and monitoring based on gist MT, and that the patenting process also relies on it. In the key decision of initiating human translation, patent professionals tend to weigh the riskiness of the IPR process in which the translated patent document would be used, the assumed relevance and importance of the document, and the potential consequences of misunderstanding against the lower costs, quicker access to information, and trust in a good enough understanding of the patent document. That understanding is often based not only on the gist MT, but also on other factors, such as auxiliary information sources and patent professionals' contextual and linguistic knowledge. The environmental factors of risk tolerance and legitimacy for gist MT also support the use of MT.
The study contributes to our knowledge of how people, and specifically professional groups, use gist MT. It explores factors that can enhance the use of gist MT, and this understanding will help us to define the characteristics of good, as well as poor, contexts for gist MT use. In addition, this analysis contributes to the growing body of research on users of various types of artificial intelligence.
This study had certain limitations. The group of informants was small and somewhat homogeneous, and this influenced the results. Data were gathered through a single method, interviewing. The results also reflect patent work in one geographical area at one specific point in time and cannot be considered representative of the larger population of patent professionals. Nevertheless, as the first exploratory study of this very experienced group of MT users, it fulfilled one of the main purposes of inductive research in that it uncovered new themes and hypotheses on how a specific group uses gist MT and on the contextual factors that contribute to their use of it.
Further studies on this gist MT user group could target an expanded group of informants, including more diverse participants, other patent MT user groups, and less experienced patent MT users. Studies incorporating other methods such as contextual inquiry, diaries, or quantitative methods such as surveys could verify some of the findings of this study and might reveal further insights on this user group. In addition, it is hoped that we will see a growth in the research, and number of researchers, devoted to studying all types of users of gist MT.
Silver Linings Around the Increased Use of Telehealth After the Emergence of COVID-19: Perspectives From Primary Care Physicians
Introduction/Objectives: With the emergence of COVID-19, the transition from in-person care to widespread use of telehealth raised many well-described challenges for primary care providers (PCP). The purpose of this study was to improve understanding of how this increased use of telehealth impacted PCPs in positive ways, and specifically focus on any “silver linings” of using telehealth. Methods: We interviewed PCPs working at a large Midwestern academic medical center between June and July 2020 and asked for perspectives about the use of telehealth during the pandemic. Verbatim transcripts were coded and analyzed using deductive dominant thematic analysis that allowed for categorization of data and identification of emergent themes. Results: PCPs noted 3 main benefits of using telehealth: (1) demonstrated remote care was feasible, (2) patients expressed gratitude; and (3) payers fully reimbursed for telehealth visits. PCPs also described “silver linings” they perceived for patients: (1) easier access to care, (2) more convenient follow-up care, and (3) ability to get quick specialty referrals. Conclusions: Study participants offered encouraging feedback regarding the potential for telehealth to offer a convenient and patient-centric alternative to in-person care. As a healthcare delivery mode, telehealth can remove personal and social barriers to care for many patients, but reimbursement parity and more evidence are needed to inform best practices for ongoing telehealth use in primary care. With the continuing use of telehealth, it will be important to monitor health outcomes as well as consider how these modalities may need to be adapted to mitigate potential care disparities.
Introduction
The use of telehealth expanded rapidly in primary care as providers pivoted their work to address patient care needs due to the emergence of Coronavirus Disease 2019 (COVID-19).1,2 Defined as "the use of electronic information and communication technologies to provide and support healthcare,"3 telehealth has enabled the continued delivery of important clinical services while staying aligned with public health guidance to minimize disease transmission of the highly contagious COVID-19. Of note, the surge in telehealth visits that started at the beginning of the pandemic has persisted, with current usage in primary care still 35 times higher than before the start of the pandemic, and as much as 38 times greater in other medical specialties.4 Although the increased availability of telehealth visits has expanded access to care for many patients and has helped improve patient-provider communication,5,6 this rapid transition from face-to-face to virtual care delivery has also introduced major challenges for primary care providers (PCPs) such as workflow disruptions, staffing changes, and difficulties adapting to the use of new technologies.7 At the same time, these challenges have been linked to lower work engagement, increased risks for physician burnout, and compromised patient care.8,9 While many studies have examined the increased use of telehealth since the emergence of COVID-19, much of this prior work has focused on the challenges and negative impacts of this change. In this analysis, we considered PCPs' perspectives about the positive aspects, or "silver linings," of this transition.
Study Sample and Data Collection
As previously described, 10 the study sample included 20 PCPs from the Department of Family and Community Medicine and the Division of General Internal Medicine and Geriatrics who were affiliated with a large Midwestern academic medical center (AMC). Between June and July 2020, we conducted one-on-one video interviews with PCPs. Following a semi-structured interview guide, we asked PCPs about the early impacts of COVID-19 on their work, including what they perceived were benefits of delivering primary care via telehealth. Interviews lasted an average of 35 min and were audio-recorded, transcribed verbatim, and de-identified prior to analysis. All interviewees provided verbal informed consent prior to their participation. The study was approved by the authors' Institutional Review Board.
Data Analysis
We analyzed interview transcripts using a deductive dominant thematic approach to allow for data to be categorized based on general themes derived from the interview guide, as well as to allow for identification of emergent themes. This approach also enabled us to compare themes across interviews and to characterize the ways in which PCPs described any "silver linings" that accompanied their transitions to using telehealth for primary care provision.
Using this approach, 2 members of the research team first reviewed 2 of the transcripts and developed an initial coding dictionary based on the questions asked in the semi-structured interview guide and additional topics that emerged during interviews. The codebook was then refined, and the remaining transcripts were coded. Frequent meetings of the research team ensured consistency of coding across transcripts. For the results we present here, we focused on the "benefits of telehealth" code. All data analysis processes were supported by the ATLAS.ti (version 8.4.4) qualitative data analysis software.
Study Participants
Characteristics of the 20 PCPs who participated in our study are presented in Table 1. On average, interviewees had 16 years of primary care experience (range = 3-41 years).
Perceived "Silver Linings" for Primary Care Physicians
Across interviews, PCPs' comments about the rapid increase in use of telehealth because of the emergence of COVID-19 were largely positive, and they specifically mentioned 3 "silver linings" for themselves that they associated with this experience: (1) remote care delivery was feasible; (2) patients expressed gratitude; and (3) payers adapted insurance coverage to fully reimburse for telehealth. We describe these themes in further detail next, and present additional verbatim quotations in Table 2.
First, interviewees emphasized that their experience during the pandemic convinced them and others that delivering medical care remotely was feasible. A physician noted, "As much as patients demand and want to be seen in person [...]"

Second, PCPs commented about how it was a "silver lining" to hear and see how grateful their patients were to receive virtual care during the pandemic. One physician reflected, "I think patients generally have been very thankful that we're able to offer any kind of care in any modality through this whole thing. And, I feel like a lot of our patients worry a lot about our safety and making sure that we're healthy. Some of them have expressed a lot of relief that I can still see them." Another physician echoed this sentiment, noting, "Everyone's been grateful, I'm saying, and some had actual needs that they weren't addressing or didn't think to address."

Third, interviewees reported being pleased that the rapid increase in the use of telehealth in response to the COVID-19 pandemic had resulted in improved insurance coverage for virtual visits. As 1 physician explained: "I think that actually forcing payers to come up with solutions that kind of adapt to our real world and our technologies probably wouldn't have happened without the pandemic. And so as a result like we have these novel tools and we are more able to use them. I think that the telehealth visits are our silver linings because we have had all these tools available to us but we didn't use them because there wasn't an impetus and then we didn't use them because there was no billing strategy." Another PCP similarly commented, "One thing that's good is like at least temporarily, Medicare and other insurers are covering video visits better now. So, my hope is that they'll just continue to do that and recognize that it's really a valuable tool for patients and providers."
Perceived "Silver Linings" for Patients
PCPs also provided examples of the "silver linings" they perceived patients experienced that resulted from the delivery of care via telehealth: (1) it was easier for patients with certain medical conditions to receive care; (2) it provided patients with a convenient follow-up care modality; and (3) it enabled patients to get quick specialty referrals and visits. Below we describe these themes in greater detail, and additional verbatim quotations are presented in Table 3.
Physicians' responses suggested they perceived a benefit for patients with certain medical conditions, who could receive care via telehealth instead of needing to come in person to the office. As 1 physician explained: "I take care of a large population of patients with autism, and many of them really struggle to come into the office due to environmental challenges or changes to structure in the normal day or whatever it may be. And I've actually found many like, I've had many people say, well this is so great, can we do this all the time? You know, just because it takes away that extra burden." Others noted that patients may be more comfortable receiving sensitive care from home: "They may be more comfortable when they're sitting at home versus in your clinic and talking to you about their mental disorders. You know, they're in a more comfortable environment." One PCP who provides substantial mental health care added: "I do a lot of mental health and so those visits, the fact that those visits increased during this time was I think helpful for a lot of people because they're like hey, you know, I need to talk to somebody about this."

Finally, PCPs suggested that telehealth provided a "silver lining" for patients when it enabled quick referrals for specialty care, timely consults, and less time to get appointments. One physician explained: "Patients love it [fast e-consults]. So, I'm talking to a patient and I would say hey, we need an appointment for this. They don't have to wait for six months for a dermatologist to see them or a nephrologist. I'll just say e-consult this case within two days. We are probably going to get an answer in 24 hours, not even two days." Another physician reflected: "Doing a quick telemedicine visit is now more accepted from everybody. If I need to squeeze someone in, adding a video visit is easier."
Discussion
Our study of the experiences of PCPs using telehealth during the first 4 months of the COVID-19 pandemic (i.e., when restrictions on in-person visits were in effect and then lifted) suggested that physicians perceived multiple "silver linings" related to the increased use of telehealth. Specifically, they highlighted benefits for their own practices such as learning that telehealth was feasible, hearing appreciation from their patients for telehealth visits, and receiving appropriate reimbursement for those visits. They also perceived benefits for their patients and reported that telehealth appeared to enhance care access for patients with chronic conditions and disabilities, made it easier for patients to have virtual follow-up visits with care team members, and enabled expedited patient referrals to specialists. With respect to the broader literature on telehealth expansion during the pandemic,11 our findings document the unforeseen advantages of utilizing telehealth in routine primary care. Further, our study answers the call for more research examining the contextual factors surrounding telehealth applications.12 Specifically, our qualitative findings demonstrate how the rapid transition to telehealth enabled PCPs to engage with their patients to address their clinical and non-clinical needs during the pandemic. However, despite these benefits, adaptations to existing workflows are likely necessary to sustain telehealth as an equitable primary care modality.13,14 Given the time frame of our study, telehealth usage was rapidly increasing, and during the declared public health emergency, government agencies and payers allowed video and audio-only visits to be covered at equal parity to in-person visits.15,16 PCPs in our study indicated that reimbursement parity was important in promoting telehealth use, similar to what has been reported in previous research,17 and also noted that sustaining this parity could allow for the continued use of telehealth modalities. PCPs commented that they were hopeful that payers would continue with this reimbursement approach so that telehealth options could remain available to patients who preferred to receive care remotely, while also supporting ongoing efforts to reduce the spread of COVID-19. Furthermore, establishing reimbursement mechanisms that allow for video and audio-only visit parity both now and in the future can help to address certain aspects of the "digital divide"13,17,18 that exists between patients who do and do not have access to broadband internet.
Our findings also suggest that telehealth provides a convenient alternative to in-person visits, consistent with the results of other studies.19 Previously identified clinical20 and non-clinical factors that can prevent patients from adhering to in-person visits, such as not having transportation, childcare, or the ability to take time off from work,21-23 were identified by our study participants as having been reduced by the increased availability of telehealth after the emergence of COVID-19. To the extent that telehealth may be able to address some of these personal and social barriers to care, PCPs can reasonably expect to see decreases in no-show rates and in patient cancelations of scheduled appointments.24,25 Moreover, PCPs in our study indicated they were accepting of telehealth and would consider using virtual modalities to provide ongoing follow-up care for patients who either preferred to use telehealth or had conditions that limited their ability to have an in-person visit. This is consistent with previous studies that found physicians and patients had favorable attitudes toward telehealth,26 and that patients were satisfied with the quality of telehealth they received.27 Our research may also have important implications for designing telehealth options in primary care that better support patient-centered care. By removing some of the burdens to accessing in-person care (e.g., transportation), telehealth reportedly enabled PCPs in our study to simplify follow-up care for patients who did not require in-person visits. As the use of telehealth continues, more research will be necessary to understand how PCPs and staff members can effectively integrate telehealth with in-person visits and further characterize the workflows and processes that can facilitate a safe and patient-centered care experience that involves this care modality.28 For instance, patients may need to better understand how sensitive information can be securely shared with their providers, as well as any potential risks of using video platforms to receive primary care. It will also be important to understand explicitly how telehealth can expedite e-consults and specialty referrals without compromising patient safety. This information may shed light on the specific needs of those PCPs who will continue to use telehealth to conduct e-consults as well as refer patients needing follow-up care.
These study findings may have important practical implications for PCPs who are currently using or planning to use telehealth both during this COVID-19 pandemic and into the future. Given that the physicians in our study reported positive impacts of their use of telehealth, primary care practices may benefit from conducting hybrid video and in-person visits to reduce barriers to patient follow-up and to accommodate patients' preferences for specific visit modalities. However, future protocols for telehealth use will also need to take into consideration the technology access and capabilities of patients, as not all individuals have the option to have a video visit with their doctor.29,30 Providers and practices may be able to implement simple screening questions to ensure patients have access to the technology and connectivity that they need and reduce the inequities that could result. As the pandemic continues, it will clearly be important to continue to monitor and study the use and impacts of telehealth as we build evidence about this care modality and its integration into primary care practice.
Limitations
Our study has several limitations. First, this study was conducted at a single healthcare system in which physicians had access to a shared electronic medical record (EMR). It is possible that some of the benefits noted in this study, including feasibly scheduling follow-up visits and specialty care, may not generalize to free-standing primary care clinics that do not share a common EMR with neighboring healthcare organizations nor have existing partnerships with clinical specialties (e.g., oncology). Nonetheless, given our saturation with respect to the themes we present and the consistency of our findings in the context of previous research, we are confident that the results we report can be applied to help facilitate the use of telehealth for both patients and providers. Second, given the challenges of the COVID-19 pandemic, we were unable to conduct interviews with patients to understand their experiences with telehealth and to see if their perspectives corroborated those reported by the PCPs who participated. Of note, a recent nationally representative survey study of U.S. adults (n = 2080) found that two-thirds of respondents preferred at least some video visits in the future,31 suggesting our findings may indeed align with patient perspectives. Third, all of the PCPs we interviewed reported very little experience with telehealth prior to the emergence of COVID-19. It is possible that the perspectives of providers with previous experience may have contributed additional information and insights to this study. Finally, in the context of the ongoing pandemic, it is likely that physicians' perspectives about telehealth will evolve, highlighting the importance of ongoing research in this area.
Conclusions
We found that in spite of the rapidity with which physicians had to embrace and increase the use of telehealth with the emergence of COVID-19, the impacts of this experience also included "silver linings" for the providers who participated in our study. These PCPs noted benefits to both themselves and their patients that included convenience, responsiveness, and new options for follow-up care. From a policy standpoint, ongoing reimbursement parity for telehealth will be essential to support primary care's capacity to deliver preventive and follow-up services via telehealth and ensure equitable access to this care.
Contribution of GT-Mus to the 6.7 keV Emission Line from the Galactic Ridge
We performed a spectral analysis of Suzaku data on GT-Mus, which was observed to emit the 6.7 keV line during its flaring period. GT-Mus was observed by the Suzaku team on December 12, 2007, with observation identity (OBSID) 402095010, for 96 kiloseconds. We downloaded the GT-Mus data from the high energy astrophysics Suzaku archive. Our data reduction and analysis were done using XSELECT (HEAsoft version 6.9) and XSPEC version 12.8. Errors reported in this work were obtained using the XSPEC error command. We generated the spectrum and deduced a strong He-like (6.70 keV) emission from the source with an equivalent width of 282 ± 0.02 eV. This observed 6.7 keV emission line has an equivalent width which compares favorably with the equivalent width of the 6.7 keV emission line from the galactic ridge (300-980 eV), depending on the Galactic position. From our analysis, we generated the light curve of the source, which showed strong evidence of a stellar flare. We therefore conclude that this observed stellar flare might be responsible for the observed 6.7 keV emission line. We suggest that GT-Mus (HD101379) and other RS CVn stars that emit in the 6.7 keV line could contribute to the 6.7 keV emission line from the galactic ridge during their flaring periods, since they exhibit the same level of chromospheric activity.
Introduction
Most red dwarf stars of late spectral type that increase in brightness within a few minutes are flare stars. Some researchers accept that flaring in stars occurs due to magnetic activity. The brightening can be seen across the electromagnetic spectrum, from radio to X-ray wavelengths. Examples of flare stars are found among cataclysmic variables (CVs), coronally active binaries (ABs), RS CVn stars, etc., though some brown dwarf stars have been reported to flare as well. The population of these red dwarf stars and their high flare activity suggest that these objects can make an essential contribution to the galactic background radiation [1-4]. This background radiation in the galactic plane and galactic center is referred to as galactic X-ray emission. Prominent features of this emission are hard X-ray lines at 6.4 keV, 6.7 keV and 6.9 keV. This hard galactic ridge X-ray emission has until now remained a major puzzle in galactic X-ray astronomy [5].
RS CVn systems are close detached binaries which are tidally locked and rotate rapidly around each other at an inclination of ~81°, with orbital periods of about 2.87 days [6]. The rapid rotation of the component stars, combined with deep convection envelopes, produces a variety of magnetic activities, and magnetic reconnection in the corona leads to the release of energetic, long-duration, sporadic stellar flares [7]. Hall classified an RS CVn system as one in which the hotter star is of spectral type F-G, IV-V, the orbital period of the system is between one and fourteen days, and the cooler star shows strong Ca II H and K emission in its spectrum [8]. A prominent feature of RS CVn systems is their intense activity at radio, ultraviolet and X-ray wavelengths. X-ray luminosities on the order of 10^24 W have been observed [9-13]. It was also proposed that the coronal model for this X-ray emission differs from that of the Sun only in scale [13]. Further research also demonstrated a strong correlation between rotational period and X-ray emission [14,15]. VLA observations show that the average radio emission from RS CVn systems is about 10^20 W, as compared with 10^13 W for the Sun [14]. Drake et al., in 1986, studied the radio emission of short-period RS CVn systems and found that they had a slightly lower mean radio luminosity (observed fluxes in the range 0.3 to 5.0 mJy) than active binaries of longer orbital period.
GT-Mus, usually called a quadruple system, consists of an RS CVn type binary and an eclipsing binary companion. This system had been mentioned by Mitrou et al. as a type without any proven EUVE detection in the Al/Ti/C (160-240 Å) and Lex/B (50-180 Å) bands [15]. The system was reported by Dempsey et al. [16] as a ROSAT position-sensitive proportional counter (PSPC) X-ray source in the 0.1-2.4 keV band, with a count rate of about 2.85±0.18 counts/s. The X-ray luminosity was deduced to be Lx = 39.43 × 10^23 W. Gurzadyan & Cholakyan noted GT Mus for its unique stellar separation and the strength of emission in the Mg II 2800 Å ultraviolet doublet [17]. They concluded that it belonged to a class of close (a ~ 2.7 R0) binaries with a common chromosphere. This chromosphere surrounds both stars such that the source of chromospheric emissions may be a highly excited region between the stars.
Murdoch et al. provided an orbital solution for the single-lined binary system HD 101379, derived from radial velocities [18]. This research is focused on the RS CVn type binary source HD 101379. The source under consideration is a detached binary driven by a high level of chromospheric activity [19]. Due to the active nature of the system's chromosphere, large stellar spots are detected. A powerful magnetic field is generated in the stellar spots by the flow of gas and plasma across the surfaces of these stars. Being a detached binary, its center of mass lies within the binary separation; hence, as the stars rotate, the magnetic field will move. A moving magnetic field can create an electric field, and a moving electric field can in turn create a magnetic field; since they can create each other, they will oscillate. This oscillation creates electromagnetic radiation, which is likely what we observed as stellar flares emitting in the 6.7 keV line.
The region in the galactic plane which emits in the X-ray band is called the Galactic ridge X-ray emission (GRXE). A hard continuum X-ray spectrum associated with a 6.7 keV emission line from helium-like iron is among the prominent features of the GRXE. If this emission is thermal in origin, it implies a detectable plasma at a temperature of about 5-10 keV [20]. The 6.7 keV emission line is, moreover, very bright in the Galactic Centre [21]. Muno et al., in 2003 and 2004, studied the galactic ridge and reported that binaries of a white dwarf and a late-type dwarf star, i.e., cataclysmic variables (CVs), are the major contributors to the 6.7 keV emission line, considering their population and thermal plasma with line emission [22,23]. In 2006, Revnivtsev et al. studied the contribution of X-ray sources from active binaries and demonstrated that active binaries (ABs) together with cataclysmic variables (CVs) produce the bulk of the galactic ridge X-ray emission [24]. In this study, we have carried out a spectral analysis of GT-Mus (HD101379) and resolved a strong 6.7 keV emission line with an equivalent width of 282 eV. Our result compares favorably with the equivalent width of the 6.7 keV emission line from the galactic ridge (300-980 eV), depending on the Galactic position [25-27]. Since GT-Mus (HD101379) is a member of the RS CVn stars, we suggest that other RS CVn stars located at the galactic center which emit in 6.7 keV during their flaring periods could also contribute to the 6.7 keV emission from the galactic ridge, due to their similar chromospheric activities.
Data Analysis and Results
We downloaded the GT-Mus data from the high energy astrophysics Suzaku archive. GT-Mus was observed by the Suzaku team on December 12, 2007, with OBSID 402095010, for 96 kiloseconds. The data reduction and analysis were done using the High Energy Astrophysics Software (HEAsoft) version 6.9 and XSPEC version 12.8. The desired extraction regions (source and background spectra) were defined using XSELECT. We extracted the XIS background from a circular region of 200 arcsec radius to create the background spectra. We made sure that no light was captured from any apparent source offset from either the source or the corner calibration sources. On the X-ray Imaging Spectrometer, we extracted all events within a 250 arcsec radius of GT-Mus for each of the XIS detectors to create the source spectra. The 250 arcsec radius suited the extraction well; hence there was no need for adjustment. The light curve was generated, and we extracted only the portion where the source showed a stellar flare (see Figure 1). The Response Matrix Files and Ancillary Response Files (RMFs and ARFs) were generated using the XIS response generators xisrmfgen and xissimarfgen. The object under consideration is a point source; hence we centered our extraction region on the source to separate source light from the instrumental background.
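The selection of the flaring portion of the light curve described above can be illustrated with a simple threshold rule; the sketch below applies a median-plus-k·MAD cut to synthetic count rates, and both the data and the 5σ criterion are placeholders of our own rather than the selection actually used.

```python
import numpy as np

# Toy illustration of selecting a flaring interval from a light curve:
# flag bins whose count rate exceeds the median by k robust sigmas.
# The light curve below is synthetic, not the Suzaku data.
rng = np.random.default_rng(0)
t = np.arange(0, 96000, 16)                    # 96 ks exposure, 16 s bins
rate = rng.normal(1.0, 0.1, t.size)            # quiescent placeholder rate
rate[:1200] += 1.5 * np.exp(-t[:1200] / 8000)  # synthetic decaying flare

med = np.median(rate)
mad = np.median(np.abs(rate - med))
flaring = rate > med + 5 * 1.4826 * mad        # 5-sigma robust threshold
print(f"flare bins: {flaring.sum()} of {t.size}")
```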
We combined data from X-ray imaging spectrometers XIS0, XIS1 and XIS3 using XSPEC.
We also extracted the background and source spectra for each observation using the XSELECT filter time file routine. Spectral analysis of our data was done using XSPEC version 12.8. All reported errors were obtained with the XSPEC error command embedded in the software. XSPEC is available via the High Energy Astrophysics Science Archive Research Center (HEASARC) online service, provided by NASA/GSFC. We modeled the spectrum using a thermal bremsstrahlung model with a Gaussian line at 6.7 keV. Since we are primarily interested in the 6.7 keV emission line, we concentrated our spectral fits on the 4-8 keV energy band. We resolved a strong He-like (6.70 keV) emission line from the source with an equivalent width of 282 ± 0.02 eV, which compares with the equivalent width (300-980 eV) of the 6.7 keV emission line from the Galactic ridge, depending on the Galactic position [25-27]. The light curve is shown in Figure 1 and the spectrum of GT-Mus in Figure 2. In Figure 2, the upper panel shows the spectrum of GT-Mus, with the crosses and solid lines representing the data and the model, respectively. The peak of the spectrum is the 6.7 keV line, as indicated by the dotted lines from the energy axis (black: front-illuminated XIS; red: back-illuminated XIS). Table 1 lists our spectral parameters, where C is the covering fraction; kT is the energy of the continuum (k the Boltzmann constant, T the temperature); Fcount is the continuum photon count; E6.7 is the 6.7 keV emission line energy; EW6.7 is the equivalent width of the 6.7 keV emission line; R is the reduced chi-squared value; and d.o.f. is the degrees of freedom.
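Concretely, the equivalent width quoted above is the total line flux divided by the continuum flux density at the line energy. The short Python sketch below evaluates a simplified bremsstrahlung-plus-Gaussian model of the kind described and computes an equivalent width from it; the continuum shape ignores the Gaunt factor, and the temperature, normalizations, and line width are illustrative placeholders rather than our fitted values.

```python
import numpy as np

# Minimal sketch of the spectral model described in the text: a thermal
# bremsstrahlung continuum plus a Gaussian line at 6.7 keV, evaluated over
# the 4-8 keV band used for the fits. kT, the normalizations, and the line
# width are placeholders, not the fitted parameters.

def brems(E_keV, kT_keV, norm):
    """Simplified optically thin bremsstrahlung shape (Gaunt factor ignored)."""
    return norm * np.exp(-E_keV / kT_keV) / E_keV

def gauss_line(E_keV, E0_keV, sigma_keV, total_flux):
    """Gaussian emission line carrying a given total flux."""
    return total_flux * np.exp(-0.5 * ((E_keV - E0_keV) / sigma_keV) ** 2) \
        / (sigma_keV * np.sqrt(2.0 * np.pi))

E = np.linspace(4.0, 8.0, 801)                     # energy grid, keV
dE = E[1] - E[0]
line = gauss_line(E, 6.7, 0.05, total_flux=0.02)   # placeholder 6.7 keV line

# Equivalent width: total line flux over the continuum flux density at 6.7 keV.
ew_keV = np.sum(line) * dE / brems(6.7, kT_keV=5.0, norm=1.0)
print(f"EW ~ {ew_keV * 1e3:.0f} eV")
```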
Comparison of the 6.7 keV Line of GT-Mus with Other Flare Stars from the Galactic Ridge
Previous studies reveal that the equivalent width (EW) of the 6.7 keV emission line from the galactic ridge is in the range of 300-980 eV, depending on the galactic position [25-27]. This compares with the equivalent width we obtained in this work, as shown in Table 1. The emission measure distribution shape (X-ray light curve) of Algol during quiescent and flare phases is similar to that of some observed flare stars and X-ray binaries that exhibit sporadic flare activity. The 6.7 keV line emission (with an equivalent width of 510 eV) resolved during the flare epoch suggests that Algol is a Galactic X-ray emitter, and Algol could be among the probable sources of the 6.7 keV line in the galactic ridge [7].
Contribution of Stars to the 6.7 keV Emission Line from the Galactic Ridge
The bulk of the galactic ridge emission is believed to be composed of X-ray sources from active binaries (ABs), cataclysmic variables (CVs), binaries containing white dwarfs, and symbiotic stars [24-27]. However, the contributions of these sources to the 6.7 keV line from the galactic ridge have not been completely addressed. Stellar flares are known to produce strong 6.7 keV lines, and it is generally believed that this line may disappear when the star goes into quiescence [28]. The observed light curve shows flaring activity resulting in an increase in counts/sec (brightness), as shown on the left-hand side of Figure 1 (the region enclosed by the red dotted lines).
Conclusion
We carried out a spectral analysis of GT-Mus (HD 101379) and resolved a strong 6.7 keV emission line, which was emitted as a result of flaring activity from the source. We generated the light curve and deduced clearly that the source was flaring. This stellar flare appears to be responsible for the observed 6.7 keV emission line. It is possible that as GT-Mus (HD101379) returns to quiescence, the emission of this line may disappear. In other words, GT-Mus (HD 101379), which emits in 6.7 keV, could contribute to the 6.7 keV emission in the Galactic Ridge only during its flaring activity. Thus it can be argued that other RS CVn stars located at the galactic center which emit in the 6.7 keV line could contribute to the 6.7 keV emission, since they exhibit the same level of chromospheric activity.
We conclude that the star could contribute to the 6.7 keV emission line from the galactic ridge based on the fact that both have comparable equivalent widths. We are of the view that a collection of other RS CVn stars which undergo flaring activity and emit the 6.7 keV line in the galactic center could contribute to the 6.7 keV emission from the galactic ridge.
Measurement of Uterus Sizes of Multiparous Women Using Ultrasound
The human uterus is a pear-shaped fibromuscular organ. The measurements of a typical uterus are 7.6 × 4.5 × 3 cm. The uterus grows slowly during fetal life until the end of the first trimester, when it grows at a higher rate due to increased maternal oestrogen production. As a result of the withdrawal of this maternal oestrogen, the uterus shrinks immediately after delivery. Objective: To evaluate uterus size in multiparous women using ultrasound. Methods: It was a cross-sectional study carried out at a private sector hospital of Gujrat over a 4-month period from December 2021 to March 2022. The sample size was 41, calculated via a convenient sampling approach from previously published studies. Multiparous women undergoing ultrasound examination during the study period were included after giving informed consent. The patients' demographic statistics were collected on a specially designed data collection sheet. The data were analyzed using SPSS V20. Results: The average length was 7.9±1.15 cm, width 4.3±0.77 cm, and thickness 3.5±0.66 cm. There was no significant correlation between uterine size (length, width, thickness) and number of parities, because the "Sig. (2-tailed)" values were 0.607, 0.640, and 0.983, respectively, all greater than 0.05. Conclusion: The current study found no correlation between the number of parities and the length, width, and thickness of the uterus.
which increases the risk to 0.5 percent [12]. Medical reasons force about a quarter of women who have had a previous cesarean delivery to deliver early. Labor induction during TOLAC (trial of labor after cesarean delivery) raises the risk of uterine rupture even more; the danger is considered to be between 1.4 and 2.3 percent [13]. The uterus grows slowly during fetal life until the end of the first trimester, when it grows at a higher rate due to increased maternal estrogen production. As a result of the withdrawal of this maternal oestrogen, the uterus shrinks immediately after delivery. Uterine length is less than 35 mm between the ages of 2 and 8, with an anteroposterior diameter of 10 mm [14,10]. Subjects are scanned in a supine position in both longitudinal and transverse planes in US examination [15]. The uterine assessment, such as pelvic ultrasound, should be part of the first evaluation of women who have lost several pregnancies [16]. The post-cesarean uterus is frequently anteflexed, and myometrial loss of about 50% is common [17]. The uterine flexion angle can change to a more retroflexed position after a caesarean delivery [18]. Giant polyps are most common in multiparous women in their 50s. At the time of presentation, these giant cervical polyps are usually misdiagnosed as malignant neoplasms. In multiparous women presenting with something coming out per vagina, a huge polyp of the anterior lip of the cervix may be found [19]. Curettage between the 2nd and 4th weeks after delivery is more likely than any other endometrial trauma to produce adhesions. Infertility, recurrent abortion, or menstrual irregularity following any uterine trauma should alert the doctor to the possibility of intrauterine adhesions. Uterine myomas are the most frequent benign solid pelvic tumors in women, affecting 20-25% of reproductive-age women. Dysmenorrhea, repeated pregnancy loss, and premature birth are all symptoms of submucosal myomas [20]. Uterus didelphys, arising from the müllerian ducts, is a rare congenital abnormality of the uterus [21].
Uterine fibroids are one of the most common uterine disorders, affecting roughly 12% to 25% of women of reproductive age. Menorrhagia, frequent urination, and dysmenorrhea are all indications of this benign neoplasm [22]. Over 10% of all pregnancies are complicated by preeclampsia (PE) and fetal growth restriction (FGR), which contribute considerably to fetal and maternal morbidity and mortality [23]. A tangle of aberrant arteriovenous connections in or around the uterus is known as a uterine vascular malformation (UVM) [24]. On the 10th day, the endometrial cavity was substantially bigger in multiparous women, and the uterine cavity was mostly echo-negative [25]. The current study was intended to measure uterus dimensions in multiparous women using ultrasound and to correlate the measurements of the uterus with the number of parities.
METHODS

A cross-sectional study was conducted in the department of Radiology of a private sector hospital in Gujrat, Pakistan. Subjects for this study were only females from 20 to 50 years of age who had undergone ultrasound examination; the study was conducted over 4 months, from December 2021 to March 2022. A total of 41 patients were selected using a convenient sampling approach. An informed written consent form was also signed by the patients. The ultrasound was done using a 3.5 MHz probe. The patients' demographic statistics were collected on a specially designed data collection sheet. The data were analyzed using SPSS V20.0.
RESULTS
The current study was conducted among 41 females for the measurement of uterus sizes in multiparous women. The study covered different age groups ranging from 20 to 50 years. Table 1 shows the number of parities among the female patients; the most common parity group comprised 15 (36.6%) women and the least common 2 (4.9%). Table 2 shows the uterus length, width, and thickness, with an average length of 7.9±1.15 cm, width of 4.3±0.77 cm, and thickness of 3.5±0.67 cm. Table 3 shows the correlation between number of parities and uterus length; there is no significant relationship between them because the "Sig. (2-tailed)" value is 0.607, which is more than 0.05.
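As context for the SPSS output quoted in these results, the sketch below shows how the same Pearson test and its two-tailed p-value (the quantity SPSS labels "Sig. (2-tailed)") can be computed in Python; the parity and length arrays are made-up placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data: parity counts and uterine lengths (cm)
# for ten hypothetical patients -- not the values measured in this study.
parity = np.array([2, 3, 3, 4, 5, 2, 6, 3, 4, 5])
length_cm = np.array([7.1, 8.2, 7.8, 8.0, 7.5, 9.1, 7.9, 8.4, 6.9, 8.3])

# Pearson correlation; scipy returns the two-tailed p-value, the quantity
# SPSS reports as "Sig. (2-tailed)".
r, p_two_tailed = stats.pearsonr(parity, length_cm)
print(f"r = {r:.3f}, Sig. (2-tailed) = {p_two_tailed:.3f}")

# As in the paper, a p-value above 0.05 is read as no significant
# correlation between parity and this uterine dimension.
```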
DISCUSSION
In conclusion, the average uterus length was 7.9±1.15 cm, width 4.3±0.77 cm, and thickness 3.5±0.66 cm. The current study found no correlation between the number of parities and the length, width, and thickness of the uterus. Furthermore, the study found that uterus length, width, and thickness had no significant link with patient age, weight, or height.
Table 4 shows the correlation between number of parities and uterus width; there is no significant relationship between them because the "Sig. (2-tailed)" value is 0.640, which is more than 0.05. Table 5 shows the correlation between number of parities and uterus thickness; there is no significant relationship between them because the "Sig. (2-tailed)" value is 0.983, which is more than 0.05.
Star-forming Galactic Contrails at z=3.2 as a Source of Metal Enrichment and Ionizing Radiation
A spectroscopically detected Lyman alpha emitting halo at redshift 3.216 in the GOODS-N field is found to reside at the convergence of several Lyman alpha filaments. HST images show that some of the filaments are inhabited by galaxies. Several of the galaxies in the field have pronounced head-tail structures, which are partly aligned with each other. The blue colors of most tails suggest the presence of young stars, with the emission from at least one of the galaxies apparently dominated by high equivalent width Lyman alpha. Faint, more diffuse, and similarly elongated, apparently stellar features can be seen over an area with a linear extent of at least 90 kpc. The region within several arcseconds of the brightest galaxy exhibits spatially extended emission in He II, N V and various lower ionization metal lines. The gas-dynamical features present are strongly reminiscent of ram-pressure stripped galaxies, including evidence for recent star formation in the stripped contrails. Spatial gradients in the appearance of several galaxies may represent a stream of galaxies passing from a colder to a hotter intergalactic medium. The stripping of gas from the in-falling galaxies, in conjunction with the occurrence of star formation and stellar feedback in the galactic contrails, suggests a mechanism for the metal enrichment of the high redshift intergalactic medium that does not depend on long-range galactic winds, at the same time opening a path for the escape of ionizing radiation from galaxies.
INTRODUCTION
Long slit, spectroscopic blind surveys targeting the HI Lyα emission line have the potential to deliver detailed and otherwise unavailable insights into the gas dynamics and, in conjunction with deep, space-based imaging, the star-gas interactions in proto-galactic halos and the intergalactic medium. Several surveys of this kind (Rauch et al. 2008; Rauch et al. 2011; Rauch et al. 2013a, paper II; and Rauch et al. 2013b, paper III) have discovered a distinct subpopulation of extended, asymmetric Lyα emitters at z ∼ 3, with a comoving space density on the order of 10^−3 Mpc^−3 and typical observed line fluxes of a few times 10^−17 erg cm^−2 s^−1.

⋆ The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
With a large number of processes capable of producing Lyα radiation, one may expect the emitters to be drawn from a highly inhomogeneous group of objects. However, selection by Lyα emission is likely to favor galaxies in certain phases of their formation, when the stellar populations and gas dynamics are particularly conducive to the production and escape of Lyα photons. The peculiar spatial distribution and clustering behaviour of Lyα emitters suggest that environmental effects and interactions may play an important role in determining whether a galaxy appears as a Lyα emitter (e.g., Hamana et al. 2004; Hayashino et al. 2004; Kovac et al. 2007; Cooke et al. 2010, 2013; Zheng et al. 2011; Matsuda et al. 2012). Indeed, all four extended emitters described so far in the papers in this series exhibit signs of interactions.
The duration of the processes leading to the production of ionizing radiation (e.g., the lifetimes of massive stars, AGN activity), as well as the astrophysical timescales relevant for the emission of Lyα in high redshift gaseous halos (recombination and resonance-line radiative-transfer timescales), tend to be short compared to the dynamical timescales and lifetimes of the general stellar population. Thus the spectroscopic detection of such a halo amounts to a "snapshot" of a particularly interactive phase in their formation, illuminated by a "flash" of Lyα emission.
Among those extended Lyα emitters published to date, the first one showed diffuse stellar features, in addition to a clear detection of the infall of cold gas onto an ordinary high redshift galaxy (paper I). Paper II described what may be a Milky Way-sized halo with multiple galaxies hosting disturbed, partly young stellar populations. A thin filament apparently dominated by high equivalent width Lyα emission may reflect recent intra-halo star formation in a tidal tail or in the wake of a satellite galaxy. The third object, revealing the only case in this sample clearly related to non-stellar processes, is an AGN illuminating a satellite galaxy, possibly triggering the formation of very young stars in its halo (paper III). The object examined in the present paper is a large halo surrounded by Lyα filaments illuminated by a group of distorted, mostly blue galaxies. As we shall argue below, the interaction in this case appears to be between the galaxies and a gaseous medium through which they move, and which appears to strip off part of their gas and induce star formation in their wake.
The observations are described in the next section, followed by a description of the galaxies coinciding with the gaseous halo. The presence of spatially extended metal emission, the nature of the Lyα filaments and the energetics of the emission are then discussed. The paper concludes with a discussion of the likely nature of the phenomenon and its significance for the metal enrichment of the intergalactic medium and for the escape of photons responsible for its ionization.
OBSERVATIONS
The observations consist of a long slit, spectroscopic blind survey, with the slit positioned in a precise N-S orientation on the object J123647.05+621237.2 in the Hubble Deep Field North (HDFN). Data were taken with the LRIS (Oke et al. 1995; McCarthy et al. 1998; Steidel et al. 2004) B and R arms and the D560 dichroic, using the 600/4000 grism in 2x2 binning (blue side) and the 600/7500 grating in 1x1 binning (red side), through a custom long slit built from two slit segments with a combined size of 2" × 430". Total exposure times of 35.8 hours in the blue and 35 hours in the red arm of LRIS were obtained in March 2008, May 2008, and April 2009. The resulting 1-σ surface brightness detection limit, measured for a 1 square-arcsecond aperture, is approximately 1.1 × 10^−19 erg cm^−2 s^−1 arcsec^−2.
The spectrum shows a strong Lyα emission complex near 5125.0 Å (fig. 1), with the peak of the line corresponding to a redshift of 3.2158. The identification as Lyα is supported by the large spatial extent of the emission and the drop in the continuum blueward of the emission line, caused by the Lyα forest (fig. 2). The Lyα line is broad (FWHM ∼ 910 km s^−1) and shows a red shoulder, with part of the width coming from the superposition of multiple sources (see below). After subtraction of the continuum trace, the total Lyα flux that passed through the slit is (2.26 ± 0.13) × 10^−17 erg cm^−2 s^−1.

Figure 3. Two-dimensional Keck LRIS spectrum of the Lyα emission line region (left two panels) and ACS B-band (F435W) image (right two panels; smoothed with a 3x3 pixel boxcar filter). In the spectrum, the dispersion runs from left (blue) to right (red), and the N-S direction from top to bottom, with a spatial extent of 16". For clarity, the bottom row of panels repeats the top row, but with annotations. The numbered white closed lines are the extraction windows used to obtain the flux measurements given in table 1. The numbers refer to galaxies with broad band detections listed in table 1, the greek letters to features in the Lyα spectrum. The two black vertical lines at the top of the images indicate the approximate position of the slit, which was determined from a scheme minimizing the variance between the image collapsed perpendicular to the slit position and the spectrum. The spectrum on the left is shown over a velocity stretch of 4880 km s^−1 and shows apparent Lyα filaments extending south from the continuum position out to at least 27 kpc proper (from the center of the emitter to position δ) in projection along the slit, and further faint protuberances appear to be sticking out to the north. The broad band smudges seen in the image extend over at least about 90 kpc in the N-S and E-W directions. Several of the emission patches in the spectrum have counterparts in the B-band image (black dashed lines).

Figure 4. Broad band images of several distressed galaxies to the SW of the position of the Lyα halo. The size of each image is 10.2" × 6.7", with N up and E to the left. The image is centered at 12:36:46.48 +62:16:06.16 (J2000). The panels from left to right are: the ACS F606W (V band) image, the F435W (B band) image, and a version of the latter smoothed with a 3x3 boxcar filter. Object 1, less than an arcsec to the W of the center of the slit, is the source of most of the continuum emission and possibly most of the Lyα in the spectrum. Brackets show the multiple apparent head and tail structures for several objects. For objects 1, 2, 3 and 4 the head appears to the SW of the tail. Object 10 is a point source, whereas 7 has a less distinct shape, with a more E-W orientation and the peak flux occurring to the E. Note the multiple bright cores in the two tadpoles 1 and 2, seen in the rightmost panel. The V-band flux of object 4 may be dominated by Lyα emission. There is no prior redshift information for any of the other faint galaxies, except for a few foreground objects in the N-W corner of the field.
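As a quick arithmetic check on the identification above, the snippet below computes the redshift implied if the 5125.0 Å peak is Lyα and converts the quoted velocity FWHM into a wavelength width; both calculations use only values stated in this section.

```python
# Check of the line identification: redshift implied if the 5125.0 A peak
# is H I Lyman alpha, and wavelength width from the quoted velocity FWHM.
C_KM_S = 299792.458           # speed of light, km/s
LYA_REST_A = 1215.67          # rest wavelength of Lyman alpha, Angstroms

lam_obs = 5125.0              # observed peak wavelength, Angstroms
z = lam_obs / LYA_REST_A - 1.0
print(f"z = {z:.4f}")         # ~3.2158, as reported in the text

# The quoted FWHM of ~910 km/s corresponds to a wavelength width of
dlam = 910.0 / C_KM_S * lam_obs
print(f"FWHM ~ {dlam:.1f} A at 5125 A")   # ~15.6 A
```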
The Lyα emission line is remarkable in that several filaments extrude to distances of at least 27 proper kpc (in projection) away from the continuum position. At least one of these structures corresponds to multiple features detected in the HST ACS F435W ("B band") image (RHS panels of fig. 3). In an area extending to about 5 arcsec on either side of the slit, two highly distorted, tadpole-shaped galaxies (1 and 2, with object 1 producing the continuum in the spectrum) can be seen. Both have substructure, with multiple cores of blue light in their heads (figs. 4, 9). Several more extended low surface brightness features can be seen further afield (3, 4, 5, 6, 7), partly also with (less distinctive) head and tail structures. Among these are blue sources with (objects 5 and 8) and without (object 10) tails, and at least one object (number 3) with an apparent linear size of more than 40 kpc.
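To relate these angular scales to the quoted proper distances, a standard cosmology calculator suffices; the sketch below uses astropy's Planck 2018 parameters, which are our assumption rather than the cosmology adopted in the original analysis, so the scale is approximate.

```python
from astropy.cosmology import Planck18

# Proper transverse scale at the redshift of the halo; the exact value
# depends on the adopted cosmology (Planck18 here is our assumption).
z = 3.216
kpc_per_arcsec = Planck18.kpc_proper_per_arcmin(z).value / 60.0
print(f"{kpc_per_arcsec:.2f} proper kpc per arcsec at z = {z}")

# At roughly 7.6 kpc/arcsec, the 27 kpc projected filament length
# corresponds to about 3.5 arcsec along the slit.
print(f"27 kpc ~ {27.0 / kpc_per_arcsec:.1f} arcsec")
```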
Of the features in the Lyα line, the central emission "α" corresponds to tadpole 1 in the image, "ζ" to the smudge near number 9, "δ" to 7, and "γ" to 10. The latter two also have very faint continua recognizable in some regions of the spectrum that line up very well with the Lyα emission, showing that the γ−δ filament is subtended by at least two galaxies. The other significant Lyα smudges with greek letters, β and ǫ, plus the vaguely visible elongated features on either side of β, may also have counterparts in the broad band, with ǫ possibly related to object 4 (see below). The various white contours in the bottom right panel of fig. 3 mark the extraction windows used to determine the fluxes listed in table 1. Even though the low surface brightness features are statistically significant detections (objects 3, 8, and 7 are 5.0, 4.4, and 8.1 σ excursions in the B-band), some of them could be observational artifacts. Moreover, even if they are real objects, they may not all be at the same redshift as the Lyα. However, the spectroscopic identifications of objects 1, 7, and 10, and the simultaneous presence of several highly distorted and/or blue (B − V ∼ 0) objects, suggest that most of the features are likely to be real and related to the Lyα halo. Further distorted objects in the B band occur to the N-W of the present region (not shown). If these belong to the same large scale structure (which, due to a lack of redshifts, is uncertain), the total maximum extent could be as large as 210 kpc proper. With the field being located near the edge of the GOODS-N imaging coverage, we cannot follow the structure further to the north.
GALAXY PROPERTIES
The various objects shown in the broad band figure are listed with their identifying numbers in column 1 of table 1, together with their name in the GOODS-N catalog (where detected) in column 2. The GOODS-N broad band B magnitudes and B−V colors (columns 3 and 4) are followed in columns 5-8 by the same quantities determined directly in extraction windows placed on the head and tail regions of the various objects shown in fig. 3. Note that no corrections for extinction or intergalactic absorption have been applied. From the work by Meiksin (2006), we expect that a color correction of about δ(B − V) ∼ −0.4 (for young star-forming galaxies at z = 3.2) needs to be added to the colors in the table.
The magnitudes of the "heads" correlate well with the GOODS-N "magbest" magnitudes where available for the same object. Because of the irregular apertures used, the absolute magnitudes are not precisely comparable among objects; the useful information mainly lies in the B−V colors and their gradients between heads and tails. Objects 2, 4, 5, 6, 7, 9, and 10 are generally blue (overall B−V < 0.12 in the GOODS-N catalog, or, where uncatalogued, in our apertures). We caution against reading too much into the formally very blue colors, as the sometimes very large formal errors speak for themselves. However, there is a clear trend: all the cases with a distinct head-tail shape (objects 1, 2, 3, 5, and 8), with the exception of object 4, have bluer tails than heads. The tendency for the tails to be bluer than the heads is different from the prevailing pattern for tadpole galaxies (e.g., Elmegreen & Elmegreen 2010).
Object 4 is an interesting case in that the tail is only visible in the V band (fig. 5), with a total flux of (5.8 ± 1.2) × 10^-17 erg cm^-2 s^-1, corresponding to a 4.8σ detection. The situation here may be similar to the one presented in paper II, a narrow filament of emission visible in only one broad band that may be glowing in Lyα, except that in the current situation the Lyα line, due to its higher redshift, would be situated in the V band and not in B. Because of the low flux, other broad band images do not provide useful constraints (e.g., V−I = −1.3 ± 1.0). We can, however, put limits on the equivalent width from the B and V band fluxes, assuming that the B band measures the continuum, and the V band the continuum plus line flux, with the same assumptions as used in paper II. As the equivalent width is directly dependent on the ratio of V-band flux to B-band flux, we estimate an equivalent width lower limit by setting the V-band flux to its −nσ excursion, and the B-band flux (which is formally measured to be zero) to its positive, +nσ value. The resulting rest frame equivalent width lower limit for n = 1 is EWr > 1130 Å. For n = 2, the value is already a rather moderate 57 Å, and the "3σ" result is compatible with both the V and B band containing just pure continuum emission from a flat spectrum source. However, if the observed V band flux were mostly Lyα emission, the (5.8 ± 1.2) × 10^-17 erg cm^-2 s^-1 should be easily seen in a spectrum, as it is more than twice the Lyα flux of the main halo. In this case, the fact that the western edge of the slit is about 3.9" away from the tail of 4 is likely preventing us from detecting a strong Lyα emission line. It is intriguing that the Lyα feature ǫ, though stronger than either δ or γ, which both correspond to clearly detected galaxies, does not have a corresponding broad band object near enough to the slit to explain the relatively strong signal. Thus ǫ may just be a small fraction of the Lyα emission from the tail of object 4, spilling over into the slit.
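The one-sided equivalent width limit described above is easy to sketch numerically. The following Python fragment is a minimal illustration of the procedure, not a reproduction of the paper's measurement: the B-band effective width (dlam_B) and the B-band flux error used in the example call are assumed values, so the printed number will not reproduce the quoted EWr > 1130 Å exactly.

```python
import numpy as np

def ew_rest_lower_limit(FV, sigV, fB, sigB, dlam_B, z, n=1):
    """Crude rest-frame equivalent width lower limit.

    FV, sigV : V-band flux (continuum + line) and its 1-sigma error
               [erg cm^-2 s^-1]
    fB, sigB : B-band flux (assumed pure continuum) and its error
    dlam_B   : effective B bandpass width [Angstrom], used to turn the
               band flux into a continuum flux density (assumed value)
    z        : redshift of the line
    n        : use the -n sigma excursion of V and the +n sigma of B
    """
    F_line = FV - n * sigV                # pessimistic line flux
    f_cont = (fB + n * sigB) / dlam_B     # optimistic continuum flux density
    if F_line <= 0 or f_cont <= 0:
        return 0.0                        # no meaningful limit
    return (F_line / f_cont) / (1.0 + z)  # observed EW -> rest frame

# Fluxes from the text where available; sigB and dlam_B are illustrative.
print(ew_rest_lower_limit(5.8e-17, 1.2e-17, 0.0, 1.0e-18, 900.0, 3.2158, n=1))
```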
HeII 1640
Aside from the Lyα emission, several other extended emission features can be seen in the spectrum. HeII 1640.4 Å is noisy but clearly present near an observed wavelength of 6914 Å. In the spatial direction it ranges over several arcsecs (fig. 6). The presence of multiple residuals from the subtraction of sky lines and a strong spatial gradient in the sky background make a flux measurement difficult, but we estimate that the flux is about (4.5 ± 1.4) × 10^-18 erg cm^-2 s^-1 (statistical errors only), within a distance of 4.3" on either side of the continuum trace. Formally, this is approximately 19% of the HI Lyα flux. This comparison with HI should not be taken too seriously, as HeII 1640 is likely to be less optically thick than HI Lyα, so its spatial width may largely reflect the spatial extent of the emitting gas, whereas a substantial fraction of HI Lyα may have been scattered further outside of the slit. Thus, the observed HeII 1640 / HI Lyα flux ratio may be a considerable overestimate. If we approximate the spatial profile of the Lyα halo by a circular Gaussian …

[Figure: spatial profiles of HI Lyα and HeII 1640]
Metal transitions and extended emission
Several features belonging to metal line transitions can be seen in the one-dimensional spectrum (fig. 2), including an absorption/emission complex, with the peak emission occurring near 5234.6 Å (FWHM ∼ 500 km s^-1), probably related to the NV 1238/1242 Å doublet, and absorption troughs at 5302.5 Å and 5418.8 Å. We identify the lower redshift one with the SiII 1260 Å resonance line. A further, broader trough near 5480 Å almost certainly corresponds to the OI/SiII 1302-1304 Å complex often seen in Lyman break galaxies (e.g., Shapley et al. 2003). In several wavelength regions below and above Lyα, the spectral trace seems to be broadened, suggesting spatially extended emission line regions. We show some of these regions in fig. 7, indicating the spatial correspondence between spectrum and image by dashed lines. To give an idea of the flux levels and significance of the excess emission along the slit, fig. 8 shows some of the spatial profiles after collapsing the regions in the previous figure along the dispersion direction. Note that an average continuum spectrum has been subtracted in fig. 8, and the fluxes represent just the excess emission.
Because of the low signal-to-noise ratio, the presence of Lyα forest absorption, and the unknown mechanisms populating the atomic levels, we found it difficult to assign the extended emission to definite transitions. The few identifications suggested here are tentative and may change with better data. The first region (top panel in fig. 7) comprises relatively prominent emission extended to both the north and south of tadpole 1. The spectral traces of two other objects, 10 and 7 in fig. 3, are visible as well, just below the trace of the main galaxy, for a wavelength stretch of about 50 Å in the rest frame, of which we show the central part in this panel. Having enhanced emission at the same wavelength as the main tadpole strengthens our proposed identification of these objects with galaxies sharing the same redshift, responsible for the Lyα emission in the filaments γ and δ. Among the transitions in this wavelength range that could produce the band-like spectral character, possible candidates are the numerous FeII and FeII] lines in the vicinity of 1104 Å, with several transitions from the ground state and accompanying low-lying excited states.
The second panel from the top shows an asymmetric, extended emission region (1157.3-1161.8 Å) redward of an absorption trough. The nature of this feature is unclear.
In the third panel of fig. 7 we find deep absorption troughs just blueward of extended emission near 1288.7-1294.0 Å and 1302.9-1304.1 Å. The identification of the lower redshift system is uncertain, but the spectrally narrow, spatially very extended emission region at 1302.9-1304.1 Å most likely corresponds to the OI 1302.17, 1304.86, 1306.02 Å triplet and the SiII 1304.37, 1309.28 Å doublet (with possible contributions by other ions), which would also account for the absorption trough immediately blueward. SiII may be expected to be the dominant contributor under a wider range of possible physical scenarios, but is perhaps somewhat too far to the red of the optimal position. The spatial extent of the emission appears to exceed that of the other emission regions discussed earlier, in both directions along the slit. If the emission is due to OI, it could be enhanced by Bowen fluorescence (Bowen 1947), i.e., pumping of OI by HI Lyβ. A situation like this could arise in the contact zone between partly neutral gas (OI) embedded in more highly ionized gas, where recombinations produce Lyman series photons.
The fourth panel shows the NV 1238, 1242 Å doublet region. NV appears to be present in the form of an absorption trough, as commonly seen in high redshift galaxies, plus NV emission (marked as "1241.3 Å"), which is extended to the south. At a flux of (1.8 ± 1.2) × 10^-18 erg cm^-2 s^-1, a comparison with other metals commonly seen in ionized gas is difficult. The flux at the CIV 1548, 1550 Å doublet position (not shown), which is commonly stronger than NV, at (0.83 ± 1.5) × 10^-18 erg cm^-2 s^-1, is not significant.
In the bottom panel, a wavelength region from the red side of the spectrum contains a comb-shaped, multiple emission line pattern extending about 2.7" (21 kpc proper) to the north, unfortunately marred by two sky background residuals. We tentatively identify the lines between 1658.8-1665.0 and 1666.6-1674.7 Å with OIII] 1660, 1666 Å (which most likely would be collisionally excited) and AlII 1671 Å (which could be pumped by continuum radiation).
ORIGIN OF THE LYα EMISSION
The uncertain spatial extent of the Lyα emission beyond the slit prevents a measurement of the escape fraction of Lyα, but we can establish whether there is a sufficient source of Lyα to explain the observed flux. If due to photoionization, the observed flux of Lyα photons requires an ionizing photon rate of about 1.98 × 10^53 photons s^-1. Under the assumptions made in papers I and II, this can be achieved by a star-forming galaxy with rest frame luminosity L(1500 Å) = 2.34 × 10^28 erg s^-1 Hz^-1, or V band (AB) magnitude 26.4. With the brightest galaxy, number 1, having a magnitude of 24.9, there nominally are about four times as many ionizing photons available as required to explain the observed Lyα, not counting any of the other sources in the field. Even if the slit losses approached a factor ∼ 4, as discussed above, stellar photoionization would remain a viable explanation. Based on the existing data we cannot rule out the presence of an AGN, but there currently is no positive evidence for such an object either.
The star formation rate associated with galaxy 1, estimated with the usual relation (Madau, Pozzetti & Dickinson 1998), is

SFR = 2.9 M⊙ yr^-1 × [L(1500) / (2.3 × 10^28 erg s^-1 Hz^-1)].    (1)

For some of the more clearly circumscribed emission regions in the individual filaments the photon budget can be studied independently. For example, the tail of region 7, with a V band magnitude of 31.12, coincides in projection with the Lyα spot δ in one of the southern filaments. The observed Lyα flux of that spot requires V = 30.9, which is very close to the value required for a situation where virtually all ionizing radiation is trapped and converted into Lyα photons, with all of those photons escaping.
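The photon budget above can be checked with a short script. This is a sketch under assumed inputs: the cosmology (H0 = 70, Ωm = 0.3), the case-B conversion factor of 0.68 Lyα photons per ionizing photon, and the use of astropy are choices made here, not specifications taken from the paper.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)        # assumed cosmology
z = 3.2158
d_L = cosmo.luminosity_distance(z).to(u.cm).value

F_lya = 2.26e-17                              # slit flux [erg cm^-2 s^-1]
L_lya = 4 * np.pi * d_L**2 * F_lya            # Lya luminosity [erg s^-1]

E_lya = 6.626e-27 * 2.998e10 / 1215.67e-8     # energy of one Lya photon [erg]
Q_H = L_lya / (0.68 * E_lya)                  # required ionizing rate [s^-1]
print(f"Q_H ~ {Q_H:.2e} photons/s")           # ~2e53, as quoted in the text

# Eq. (1): Madau, Pozzetti & Dickinson (1998)-style SFR calibration
L_1500 = 2.34e28                              # erg s^-1 Hz^-1 (from the text)
print(f"SFR ~ {2.9 * L_1500 / 2.3e28:.1f} Msun/yr")
```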
The Lyα filamentary pattern is at least partly related to point sources that appear connected to the central emission peak by bridges of Lyα emitting gas. In principle, these bridges could consist of ionized gas, in which case the velocity position of the corresponding Lyα emission should be close to the systemic velocity. We cannot test the ionization state of the gas for the individual filaments, but for the main galaxy we have additional evidence from the absorption troughs present, which are all blueshifted by |v_abs| ∼ 500-600 km s^-1 with respect to the Lyα emission peak. In the case of Lyα emitters associated with Lyman break galaxies, the redshifted velocity of the Lyα emission line with respect to the systemic redshift of the galaxy is about 2-3 times the absolute value of the (blueshifted) outflow velocity |v_abs|. This puts the redshift of the Lyα emission relative to the systemic redshift at between 250 and 600 km s^-1, which is consistent with the range observed for Lyman break galaxies (e.g., Rakic et al. 2011), and would mean that at least the central Lyα peak is highly optically thick. Taken together with the presence of an absorption trough in the 1-d spectrum (fig. 2) near 5100 Å, the evidence suggests that the Lyα emission is indeed the red peak of a double humped profile, with the blue peak strongly suppressed.
INTERPRETATION
As for the nature of the galaxies in the field, most of the objects with a head-tail structure have their heads to the south of the tails (1, 2, 3, 4 and 5), making it less likely that the features were produced mainly by tidal interactions. Rather, the evidence appears consistent with large-scale ram-pressure stripping (Gunn & Gott 1972; Nulsen 1982) of gas, and recent star formation in the downstream ablated tails.
Groups of galaxies falling into clusters seem to produce Hα morphologies similar to the galaxy contrails observed here (e.g., Cortese et al. 2006; Owen et al. 2006; Sun et al. 2007; Yoshida et al. 2012). In the present case, a number of observational details closely resemble hydrodynamic features predicted by simulations of supersonic motions in galaxy clusters (e.g., Roediger, Brüggen, & Hoeft 2006; Zavala et al. 2012): the main tadpole 1 shows a shape reminiscent of a bow shock, with a projected tangential angle of close to 45 degrees (indicating, at face value, motion with a moderate Mach number ∼ 1.5), and what looks like a turbulent wake consisting of a vortex street with a longitudinal extent of about 12 kpc and a maximum width of about 5 kpc (fig. 9). A faint, linear structure vaguely resembling a "helix" extends almost perpendicular to the axis of symmetry (i.e., the head-tail direction) on both sides of the galaxy, with a maximum traceable extent of ∼ 20 kpc to the south and somewhat less to the north. The nature of this feature is not clear. Some of the extended emission seen in fig. 7 appears to arise from that region, suggesting that it may be associated with both low and highly ionized gas. The pattern with the "helical" appearance could perhaps arise through hydrodynamic instabilities. A broadening tail of vortices (e.g., Roediger, Brüggen, & Hoeft 2006, their fig. 4) may look similar if the tadpole were viewed almost frontally, moving toward the observer, with the "helical" structure on the far side of the tadpole. Similar curly structures also occur in the wakes of interacting galaxies undergoing stripping (e.g., Kapferer et al. 2008, their fig. 9). Alternatively, the apparent spiral pattern may suggest a passing, rotating small galaxy with an asymmetric outflow of gas that traces out a helical pattern. If the structure were an outflow from the tadpole galaxy 1, e.g., the helical jet of an AGN, a large outflow velocity faster than the relative velocity between galaxy and ambient gas could explain why the "helices" are not swept back at a sharper angle.
Tadpole 2 does not show a discernible bow shock and has a narrower wake. Object 4, which we associated above with high equivalent width Lyα emission, exhibits a linear, thin tail (figs. 4, 5), not unlike the expected outcome of Bondi-Hoyle accretion (Sakelliou 2000).
The proximity of the galaxies to each other, the presence of a bow shock in one but not in the others, and the various degrees of turbulence in their tails may imply that the flow of galaxies is encountering inhomogeneous conditions. Specifically, the spatial sequence from tadpole 4 to 2 to 1 (fig. 4) could be interpreted as indicating passage through a zone with a significant temperature gradient (perhaps an accretion shock) from a colder to a hotter gaseous medium: tadpole 4, which is most advanced in the direction of the flow, has a linear, apparently non-turbulent tail, suggesting that it may be experiencing the higher viscosity and thermal pressure of a hotter environment (e.g., Roediger & Brüggen 2008). Tadpole 2 is more turbulent, but its tail is still narrow, and it does not have a bow shock either. Tadpole 1 has a bow shock, presumably because it is on the colder side of the interface between hot and cold gas, and its velocity relative to the intergalactic medium exceeds the sound speed of the colder gas. This condition would easily be satisfied when passing through the general filamentary IGM with even moderate velocity, as the typical temperature at z ∼ 3 is only a few times 10^4 K (Rauch et al. 1996), corresponding to sound speeds of a few tens of km s^-1. The tail of tadpole 1 is flaring up and shows a turbulent vortex pattern, consistent with the lower viscosity expected in a colder medium.
A particularly interesting consequence of ram-pressure stripping is the formation of stars in the stripped gas, which has been the subject of recent observational (e.g., Yoshida et al. 2012) and theoretical work (e.g., Kapferer et al. 2009; Tonnesen & Bryan 2012; see also the Lyα emitting filament in paper II, which may have a similar origin). Several strands of evidence suggest that the current situation is indeed an instance of star formation in galactic wakes: the tails of most of the objects have blue colors, indicative of very young stars; the finding of extended metal line emission far out from the galaxies suggests the presence of a stripped, or re-created, interstellar medium that is being excited by the newly forming stars; and the high Lyα equivalent width suggested by the tail of galaxy 4 may be another sign of hot, young stars, as discussed in paper II.
The usual condition for ram-pressure stripping to take place is that the ram pressure on the gas in a galaxy, experienced when passing through the intergalactic medium, needs to exceed the gravitational binding force per surface area,

ρ v^2 > (π/2) G M(<R) ρ_gal(R) / R

(e.g., McCarthy et al. 2008). Here ρ is the ambient gas density, v the relative velocity of galactic gas and ambient intergalactic medium, and G, R, M(<R) and ρ_gal are the gravitational constant, the distance of a given gas volume element from the center of the galaxy, the total gravitating mass internal to that radius, and the galactic gas density, respectively. It has often been assumed that ram pressure stripping is most relevant for low redshift, massive clusters. However, the hierarchical nature of structure formation, leading to more compact gravitational potential wells, higher gas densities, and higher interaction rates at z ∼ 3 (when compared to the local universe), suggests that one should expect miniature versions of the ram-pressure stripping seen in low redshift clusters to occur among satellites in individual high-z galactic halos. With the higher density at high redshift favoring a higher pressure for a given velocity, the in-falling satellites themselves collapse from a denser background as well, so for the effect of ram-pressure stripping one would have to look to lower mass, dwarf galaxies, perhaps aided by processes that lower the binding energy of the gas further. The higher merger rate at high redshift may also work to increase the amount of ram-pressure stripped gas (e.g., Domainko et al. 2006; Kapferer et al. 2008), as may stellar or galactic outflows, as long as they can offset a significant part of the gravitational binding energy. In addition, new cosmological hydro-simulation techniques suggest more efficient stripping (e.g., Hess & Springel 2012) and the presence of puffed up, high-angular-momentum gas (e.g., Keres et al. 2012), "ready to go".
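For concreteness, the stripping criterion can be evaluated numerically. Every input in the example call below (ambient density, velocity, enclosed mass, galactic gas density, radius) is an assumption chosen to represent a generic dwarf satellite, not a value derived from the observations.

```python
import numpy as np

G = 6.674e-8                          # gravitational constant [cgs]
M_sun, kpc, m_p = 1.989e33, 3.086e21, 1.673e-24

def ram_pressure_strips(n_amb, v_kms, M_enc_Msun, n_gal, R_kpc):
    """McCarthy et al. (2008)-style test: rho v^2 > (pi/2) G M(<R) rho_gal / R.
    n_amb, n_gal are number densities in cm^-3 (hydrogen only, for simplicity)."""
    ram = n_amb * m_p * (v_kms * 1e5) ** 2
    bind = (np.pi / 2) * G * M_enc_Msun * M_sun * n_gal * m_p / (R_kpc * kpc)
    return ram > bind, ram, bind

# A dwarf satellite moving at 300 km/s through moderately dense halo gas:
stripped, ram, bind = ram_pressure_strips(1e-3, 300.0, 1e9, 0.1, 2.0)
print(f"ram = {ram:.2e}, binding = {bind:.2e}, stripped = {stripped}")
```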
Recently it has been argued (Benitez-Llambay et al 2012) that, in particular, dwarf galaxies do not even require the encounter with fully formed massive halos but can lose gas to ram-pressure stripping in large-scale structure filaments at high redshift when entering terminal nodes like the (future) Local Group pancake. In this case, our spectrograph slit may have intersected a filament of the cosmic web, lit up by the star-formation in the ablated contrails of a swarm of coherently moving galaxies. To attain the high relative velocities required for stripping, the galaxies would have to move highly supersonically with respect to the gas they are plunging into. While we appear to be seeing one galaxy with a bow shock, it is not clear if the velocities of the other objects are high enough for this to work. However, as argued above, the presence of an accretion shock in the terminal node with hot gas on one side may make this scenario consistent with the observations. A variant of this picture may explain the stripping and the apparent gradient in the properties of the tadpoles as a group of galaxies being "hosed down" when obliquely passing an accreting stream of gas.
Metal enrichment and escape of ionizing radiation from star formation in stripped gas
The existence of such extended structures at high redshift, the relatively large number density of galaxies with tidal or ram-pressure related features (see also the disturbed halos described in papers I and II), and the presence of multiple sites of star formation in a common gaseous halo or large scale filament suggest that the stripping of gas from galaxies in interactions could be an important contributor to the metal enrichment of the intergalactic medium, analogous to the lower redshift process leading to the enrichment of the gas in galaxy clusters. To explain the finding of metal enrichment in the IGM at large distances from the nearest bright galaxy, galactic winds from Lyman break galaxies have been invoked to drive metal-enriched gas far into intergalactic space (e.g., Pettini et al. 2001; Steidel et al. 2010). Among the persistent uncertainties with this scenario is that the actually observed ranges of galactic winds invariably fall short of accounting for the metals seen in QSO absorption systems at large distances from such galaxies. However, if, as we have argued above, ram pressure stripping of in-falling dwarf galaxies and star formation in the stripped wake operate at high redshift, there may be less need to invoke long-range winds from the central galaxy of a halo. In this alternative picture, the ram pressure that led to the ablation of gas and subsequent star formation may also act to dispel newly formed metal-enriched gas from the tails, aided by stellar winds and supernova explosions that would find it much easier to escape from the weakly bound (and dark-matter free) star forming regions of galactic wakes. In any case, differential motion between the lost gas and the parent star forming galaxies will distribute the gas spatially over time, and the assumption that this process mostly occurs in in-falling dwarf satellites implies that the gas automatically reaches distances from any brightest halo galaxy as large as commonly observed (in metal absorption lines; e.g., Chen 2012). Tidal interactions between satellites may lead to a similar result: metals expelled into the gaseous halo of a brighter galaxy, coming from the shredded interstellar medium of its satellites or from outflows in tidal dwarfs; both processes may be at work among the extended, asymmetric Lyα emitters in our study. There may be differences between the metallicities of the gas ejected from tidal star-forming regions and the gas lost by star forming regions in galactic contrails. Stars in the former arise from the relatively metal-rich ISM of the parent galaxy, as would stars in gas ablated by ram-pressure or viscous stripping. Stars forming in turbulent wakes behind the galaxies may feed on the lower metallicity gas in the halo or intergalactic medium as well, which may contribute to the signatures of hot, young stars described in paper II. As argued earlier in paper II, extragalactic star formation in the wakes of stripped galaxies and in tidal tails may facilitate the production and escape of ionizing photons and may have brought about the reionization of the universe at high redshift. Star formation outside of the dense galactic HI cocoons would lead to lines of sight with reduced optical depth for ionizing photons. The presence of young, massive stars in the wakes, with the weak gravitational binding force enabling easy removal of neutral gas by even moderate amounts of stellar winds or supernova outflows, would all tend to enhance the escape of ionizing radiation.
Recently, Bergvall et al (2013), examining selection effects in the search for local galaxies leaking ionizing radiation, have come to similar conclusions as to the likely conditions required, including the importance of stripping.
CONCLUSIONS
We have detected a Lyα emitting halo with several faint filaments stretching over tens of kpc. The filaments correlate with star-forming regions in the form of mostly blue, faint galaxies, several of which have a distinct tadpole shape and blue, partly turbulent tails, with one object showing what appears to be a bow shock. The GOODS-N ACS F435W image reveals many such features criss-crossing an area several times bigger than the visible extent of the Lyα halo. The emission of the central halo and of the filaments is broadly consistent with being powered by stellar photoionization. We detect spatially extended emission lines from gas surrounding the main tadpole, including HeII 1640, NV 1240 and probably OIII], AlII, and FeII, suggesting an extended, extragalactic, interstellar medium with current star formation.
The tadpole shapes, partial alignment, and the considerable numbers of unusual broad band objects make it unlikely that the features observed are predominantly tidal in origin (i.e., caused by individual two-body encounters). Instead, the galaxies may have experienced stripping of gas when moving relative to the intergalactic or intra-halo medium, with stars forming downstream in the galactic contrails. This process is observationally and theoretically well established in the local universe. Our observations have identified an occurrence of ram-pressure stripping at high redshift, possibly involving dwarf galaxies interacting with the gas in more massive, individual galactic halos. The filamentary structure trailing behind a galaxy in the z = 2.63 halo described in paper II may be another example of this effect. In the present case, the properties of several tadpoles change along their general direction of motion, which may be consistent with these galaxies passing into a hotter gaseous environment, possibly the region behind an accretion shock. Such a stripping scenario may play out on a larger scale when differential motions of galaxies relative to the nodes in the gaseous cosmic web strip galaxies of their gas, as suggested by Benitez-Llambay et al. (2012). As in the case of local clusters, the galactic contrails should be able to release metal-enriched gas, perhaps enhanced by local stellar feedback, more easily than normal galaxies. At the very least these objects should provide a contribution to the intergalactic metal budget of galactic halos. The loss of enriched gas from galactic contrails may suggest a solution to the long-standing puzzle of how the intergalactic medium at large distances from bright galaxies was polluted with metals. Star formation in galactic contrails would involve young stars surrounded by lower HI gas columns than stars born in ordinary galaxies, and capable of clearing their environment of dense gas, suggesting a way in which galaxies can ionize the intergalactic medium.
ACKNOWLEDGMENTS
The data were obtained as part of a long term collaboration with the late Wal Sargent, to whose memory we dedicate this paper. We acknowledge helpful discussions with Guillermo Blanc, Bob Carswell, Michele Fumagalli and Andy McWilliam. We thank the staff of the Keck Observatory for their help with the observations. MR is grateful to the National Science Foundation for grant AST-1108815. GB has been supported by the Kavli Foundation, and MGH received support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 320596. JRG acknowledges a Millikan Fellowship at Caltech. We acknowledge use of the Atomic Line List v2.05, maintained by Peter van Hoof, and the use of the NIST Atomic Spectra Database (ver. 5.0; Kramida et al. 2012). This research has further made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and of the VizieR catalogue access tool.
In vitro antiplasmodial activity of marine sponge Clathria vulpina extract against chloroquine sensitive Plasmodium falciparum
Introduction
Malaria is an infectious disease caused by the genus Plasmodium, including Plasmodium falciparum (P. falciparum), Plasmodium ovale, Plasmodium vivax and Plasmodium malariae. Among them, P. falciparum is the parasite responsible for the most severe disease and fatal cases, and may kill over one million people per annum. The parasite P. falciparum is genetically diverse and has multiple independent origins of mutations in genes that confer resistance to widely used antimalarial drugs like chloroquine [1]. A major breakthrough of the past decades is the discovery of artemisinin by Chinese researchers. Artemisinin combination treatments for P. falciparum are currently the only first line antimalarial drugs amenable to widespread use against all chloroquine-resistant malarial parasites. However, artemisinin resistant strains were recently found in Cambodia [2]. So, there is an urgent need for new antimalarial drugs. Living organisms are recognized as a source of potential bioactive molecules, which are commonly more effective than those obtained through combinatorial synthetic chemistry. Hopefully, a new breakthrough in malaria treatment will come with the development of a marine lead compound. The incredible potential of marine organisms (mostly invertebrates, such as sponges, tunicates, and soft corals), known to produce a large array of secondary metabolites, can be interpreted by considering the common features of secondary metabolism in all living organisms as well as some peculiar features of the marine environment. The marine ecosystem is the biggest source for the development of new drugs against P. falciparum. Pharmaceutical interest in sponges was aroused in the early 1950's by the discovery of a number of previously unknown nucleosides, such as spongothymidine and spongouridine, in the marine sponge Cryptotethya crypta [3,4]. Most of the bioactive compounds from sponges have anti-inflammatory, antitumor, immunosuppressive (or) neurosuppressive, antiviral, antibiotic, antifouling and antimalarial properties [5]. In this connection, the present study attempted to identify antimalarial compounds from the crude extract of the marine sponge Clathria vulpina (C. vulpina) collected from the Thondi coast of the Palk Strait region, Tamil Nadu, India.
Collection of marine sponges
The marine sponge C. vulpina was collected as bycatch at Thondi (lat. 9°44'10"N, lon. 79°10'12"E) in the Palk Strait region, Tamil Nadu, India, and authenticated by Dr. S. Lazarus, emeritus fellow (retired), Centre for Marine Science and Technology, Manonmaniam Sundaranar University, Rajakkamangalam, Kanyakumari district, Tamil Nadu, India. The collected samples were washed thrice with tap water and twice with distilled water to remove adhering associated animals. A voucher specimen was deposited in the herbarium facility (sponsored by the Indian Council of Medical Research, New Delhi) maintained in the Department of Oceanography and Coastal Area Studies, Alagappa University, Thondi Campus, Tamil Nadu, India.
Extraction of bioactive principles
The samples were cut into pieces and kept for shade drying. After the complete removal of moisture, the samples were subjected to percolation by soaking in an ethanol:water mixture (3:1 ratio). After 21 d of dark incubation, the filtrate was concentrated by rotary vacuum evaporator (>45 °C) and then freeze dried (−80 °C) to obtain a solid residue. The percentage of extraction was calculated using the following formula:

Percent of extraction (%) = [Weight of the extract (g) / Weight of the sponge material (g)] × 100

The ethanolic extract was dissolved in dimethyl sulphoxide (DMSO).
Culture maintenance
The in vitro antiplasmodial activity of the marine sponge extract was assessed against P. falciparum (obtained from the Jawaharlal Nehru Centre for Advanced Scientific Research, Indian Institute of Science, Bangalore, India). P. falciparum was cultivated in human O Rh+ red blood cells (donated by volunteers) using RPMI 1640 medium (Hi Media Laboratories Private Limited, Mumbai, India) [7], supplemented with O Rh+ serum (10%), 5% sodium bicarbonate and 40 μg/mL of gentamycin sulphate. Haematocrits were adjusted to 5% and parasite cultures were used when they exhibited 2% parasitaemia [8].
In vitro antiplasmodial activity
Different concentrations of filter-sterilized crude extract from C. vulpina (100, 50, 25, 12.5, 6.25 and 3.125 μg/mL) were incorporated into a 96-well tissue culture plate containing 200 μL of P. falciparum culture with fresh red blood cells diluted to 2% haematocrit. A negative control was maintained with fresh red blood cells and 2% parasitized P. falciparum diluted to 2% haematocrit, and a positive control was maintained with parasitized blood culture treated with chloroquine [9]. Parasitaemia was evaluated after 48 h by Giemsa stain and the average percentage suppression of parasitaemia was calculated by the following formula:

Suppression (%) = [(Ac − At) / Ac] × 100

where Ac is the average percentage of parasitaemia in the control (%) and At is the average percentage of parasitaemia in the test (%).
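A minimal sketch of the suppression calculation follows, with the caveat that the formula shown above is the standard convention for this assay (the formula itself did not survive extraction from the source) and the parasitaemia values in the example are hypothetical:

```python
def percent_suppression(Ac, At):
    """Average % suppression of parasitaemia: (Ac - At) / Ac * 100,
    where Ac and At are the mean parasitaemia (%) in control and test wells."""
    return 100.0 * (Ac - At) / Ac

print(percent_suppression(2.0, 0.5))  # hypothetical values -> 75.0
```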
Antiplasmodial activity calculation and analysis
The antiplasmodial activity of the marine sponge C. vulpina was expressed as the inhibitory concentration (IC50) of the drug that induced a 50% reduction in parasitaemia compared to the control (100% parasitaemia). The IC50 values were calculated (concentration of extract on the X-axis and percentage of inhibition on the Y-axis) using Office XP (SDAS) software. The activity was analyzed in accordance with the antiplasmodial activity norms of Rasoanaivo et al [10], who suggested that an extract is very active if IC50 < 5 μg/mL, active if IC50 < 50 μg/mL, weakly active if IC50 < 100 μg/mL, and inactive if IC50 > 100 μg/mL.
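The graphical IC50 determination (concentration on the X-axis, % inhibition on the Y-axis) can be approximated by log-linear interpolation. The dose-response values below are hypothetical, chosen only so the sketch returns a number close to the reported 14.75 μg/mL; they are not the study's raw data.

```python
import numpy as np

def ic50_loglinear(conc, inhibition):
    """IC50 by linear interpolation on a log10 concentration axis."""
    logc = np.log10(conc)
    for i in range(len(conc) - 1):
        y0, y1 = inhibition[i], inhibition[i + 1]
        if y0 <= 50.0 <= y1:   # bracketing pair around 50% inhibition
            x = logc[i] + (50.0 - y0) * (logc[i + 1] - logc[i]) / (y1 - y0)
            return 10 ** x
    raise ValueError("50% inhibition not bracketed by the data")

conc = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0])   # ug/mL
inhib = np.array([18.0, 31.0, 47.0, 62.0, 78.0, 91.0])    # hypothetical
print(f"IC50 ~ {ic50_loglinear(conc, inhib):.1f} ug/mL")
```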
Chemical injury to erythrocytes
To assess any chemical injury to erythrocytes that might be attributed to the extract, 200 μL of erythrocytes was incubated with 100 μg/mL of the extract, a dose equal to the highest used in the antiplasmodial assay. The conditions of the experiment were maintained as in the antiplasmodial assay. After 48 h of incubation, thin blood smears were stained with Giemsa stain and observed for morphological changes under a high-power light microscope. The morphological findings were compared with those of erythrocytes that were uninfected and not exposed to the extract [11].
Hemolytic activity
Hemolytic activity was evaluated as described by Andra et al. with slight modifications [12]. Human erythrocyte suspensions were washed with phosphate buffered saline (PBS) (1.5 mmol/L KH2PO4, 2.7 mmol/L KCl, 8.1 mmol/L Na2HPO4, 135 mmol/L NaCl, pH 7.4) and then centrifuged at 7 000 r/min for 10 min. After washing four times with PBS (or until the supernatant was colourless), the human erythrocytes were re-suspended and diluted to 10 times the original volume with PBS, referred to as the stock erythrocyte suspension. The suspension (2% v/v) was incubated with different concentrations (3.125 to 100 μg/mL) of C. vulpina sponge extract at 37 °C for 1 h. After the incubation period, the reaction mixture was centrifuged at 3 000 r/min for 10 min to remove intact erythrocytes. The supernatant was collected and the absorbance was determined at 450 nm using a spectrophotometer (Cyber UV-1, Mecasys Co. Ltd.), with PBS as negative control and Triton X-100 as positive control. The percent hemolysis was calculated using the formula:

Hemolysis (%) = [(Asample − APBS) / (ATriton − APBS)] × 100
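The percent hemolysis calculation is shown below, assuming the usual convention of scaling between the PBS (0%) and Triton X-100 (100%) controls, since the formula itself was lost in extraction; the absorbance readings in the example are hypothetical:

```python
def percent_hemolysis(a_sample, a_pbs, a_triton):
    """Percent hemolysis at A450 relative to the negative (PBS) and
    positive (Triton X-100) controls."""
    return 100.0 * (a_sample - a_pbs) / (a_triton - a_pbs)

print(round(percent_hemolysis(0.052, 0.040, 1.213), 3))  # hypothetical A450s
```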
Results
The percentage yield of the C. vulpina extract was found to be 4.8%. The extract of C. vulpina showed excellent antiplasmodial activity (IC50 = 14.75 μg/mL) (Table 1), although this IC50 was about twofold higher than that of the positive control chloroquine (IC50 = 7 μg/mL). The uninfected erythrocytes incubated with the ethanolic extract of the marine sponge C. vulpina and the uninfected erythrocytes from the blank column of the 96-well plate showed no morphological differences after 48 h of incubation. The extract showed slight hemolytic activity of 1.023% at 100 μg/mL concentration (Figure 1).
Discussion
The rich diversity of bioactive compounds from sponges has provided molecules that interfere with the pathogenesis of diseases at many different points, which increases the chance of developing selective drugs against specific targets. Marine sponges have provided many examples of novel secondary metabolites that possess varied chemical status and potent antimalarial activity [5]. Marine sponges belonging to the genus Ircinia are known to be a very rich source of terpenoids, several of which have shown a wide variety of biological activities. The variabilins among the terpenoids, which are polyprenyl hydroquinones, have analgesic and anti-inflammatory properties [13]. Among the halogenated alkaloids, bromoalkaloids form the most widely distributed group of natural compounds, predominantly found in marine eukaryotes like sponges [14]. Polyacetylenic alcohols, including (3S,14S)-petrocortyne A, purified from the marine sponge Petrosia sp., possess cytotoxic activity against a small panel of human solid tumor cell lines by inhibiting DNA replication [15]. The sponge Ircinia vulpin has also been shown to possess antiviral, central nervous system stimulatory and antialgal properties [16]. Aplysina cavernicola, a much studied sponge which produces aeroplysinin and aerothionin and other dibromo- and dichlorotyrosine derivatives, was found to have antibiotic activity against Bacillus subtilis and Proteus vulgaris [13]. An ethanolic extract of Haliclona viridis showed a significant hypoglycemic effect lasting for more than 8 h after single oral doses of 200 or 500 mg/kg to normal mice [17]. Considering these findings, the present investigation evaluated the antiplasmodial activity of the marine sponge C. vulpina against chloroquine-sensitive P. falciparum. The extract of C. vulpina exhibited excellent antiplasmodial activity with an IC50 value of 14.75 μg/mL, and showed slight haemolytic activity of 1.0% at a concentration of 100 μg/mL, decreasing significantly at lower concentrations. Song et al. reported that the saponins of Panax notoginseng exhibited haemolytic activity of 11.6% and 3.6% at 500 mg/L and 250 mg/L concentrations, respectively [18]. The IC50 value of the C. vulpina extract was below 50 μg/mL in the present investigation. According to Rasoanaivo et al., an extract which shows in vitro antiplasmodial activity at <50 μg/mL is active [10]. On this basis, the extract of C. vulpina is active and could be used as a potential antiplasmodial drug in the future. The mechanism of action might be the inhibition of P. falciparum merozoite invasion into erythrocytes or the disruption of P. falciparum rosettes [19][20][21]. Ravikumar et al. reported that the bark extract of the mangrove plant Avicennia marina exhibited inhibitory activity with an IC50 of 49.63 μg/mL [22]. Compared with that report, the C. vulpina extract is more potent against chloroquine sensitive P. falciparum. The antiplasmodial activities of seaweeds [23], a sponge associated bacterium [24], terrestrial medicinal plants [25], coastal medicinal plants [8], and traditional medicinal plants [26] have been reported and found to show good activity against P. falciparum. Endophytic fungi from Thai medicinal plants collected from a forest region of Thailand were found to have excellent antiplasmodial activity with IC50 values of 1.6-8.0 μg/mL [9].
Fattorusso et al. reported that cycloperoxide compounds obtained from the sponge Plakortis simplex showed good antiplasmodial activity against chloroquine sensitive P. falciparum with IC50 values of 26.81 to 1263.52 nmol/L [5]. It has also been reported that bacteria associated with the marine sponge C. vulpina have potent antiplasmodial activity with an IC50 of 20.73 μg/mL [27]. Ravikumar et al. reported that the South Indian medicinal plant Azadirachta indica (bark extract) showed good antiplasmodial activity (IC50 29.77 μg/mL) [28]. These findings could encourage the development of new antiplasmodial drugs from marine natural products. It is concluded from the present findings that the sponge C. vulpina collected from the Thondi coast, Palk Strait region, Tamil Nadu, possesses a significant suppressive effect on in vitro cultures of chloroquine sensitive P. falciparum and could be used as a potential antiplasmodial drug after successful clinical trials.
Conflict of interest statement
We declare that we have no conflict of interest.
Background
Malaria is an infectious disease caused by the genus Plasmodium. Among the species, P. falciparum is the parasite responsible for the most severe disease and fatal cases, and may kill over one million people per annum. The authors have highlighted literature reports on the incredible potential of marine organisms (mostly invertebrates, such as sponges, tunicates, and soft corals) known to produce a large array of secondary metabolites. The authors selected a sponge as the source material for screening antiplasmodial activity. Most of the bioactive compounds from sponges have been used in various bioassays covering anti-inflammatory, antitumor, immunosuppressive (or) neurosuppressive, antiviral, antibiotic, antifouling and antimalarial properties.
Research frontiers
Studies are being performed to screen the antiplasmodial activity of the crude extract of the sponge C. vulpina. Marine sponges have provided many examples of novel secondary metabolites that possess varied chemical status and potent antimalarial activity.
Related reports
The authors have cited many related articles to support this research. Compared with extracts from various other sources, the extract of C. vulpina exhibited excellent antiplasmodial activity with a low IC50 value of 14.75 μg/mL.
Innovations & breakthroughs
Compared with the reports mentioned in this research, the C. vulpina extract is more potent against chloroquine sensitive P. falciparum, with a lower IC50 value.
Applications
The study showed that the C. vulpina extract is potent against chloroquine sensitive P. falciparum. The IC50 value of the C. vulpina extract was below 50 μg/mL in the present investigation, so the extract could be used as a potential antiplasmodial drug in the future.
Peer review
This is a good study in which the authors show that the sponge C. vulpina possesses a significant suppressive effect on in vitro cultures of chloroquine sensitive P. falciparum. The results are interesting in that the C. vulpina extract could be developed into a potential antiplasmodial drug.
Seasonal Variations of Avifauna of Shallabug Wetland, Kashmir
The main thrust of this research work was the evaluation of the current status of the avifauna associated with Shallabug wetland. The main objectives were to evaluate bird population fluctuation, to determine various threats to waterbirds and their habitats, and to present remedial measures based on the key issues identified. For the present investigation, the study area was divided systematically into three study units of 700 m² each. The visual census method was used for the estimation of the bird population. Visual counting was carried out with the help of a high power field binocular (SG-9.2) from respective vantage points. The birds were observed on a monthly basis in 2008 and the fluctuation in bird population was determined for the different seasons: summer, autumn and winter. The observations were made from 5:00 am to 7:00 am (when the birds come out from their resting places) and 6:00 pm to 7:00 pm (when they return to their resting places). The analysis of the results showed that Shallabug Wetland is particularly important for migratory bird species and marshland breeding species. The wetland was also found to be important for long distance migrants as a stopover site for feeding and resting. The bird population showed fluctuation with site differences as well as with changing seasons.
Introduction
Wetlands are defined as 'lands transitional between terrestrial and aquatic eco-systems where the water table is usually at or near the surface or the land is covered by shallow water' (Mitsch & Gosselink 1986). The values of the world's wetlands are increasingly receiving due attention as they contribute to a healthy environment in many ways. Wetlands are among the most productive of all ecosystems, and carry out critical regulatory functions of hydrological processes within watersheds (Banner et al. 1988). Regulating water quality, water levels, flooding regimes, and nutrient and sedimentation levels are a few of these processes (Gregory et al. 1991). In addition, wetlands are important feeding and breeding areas for wildlife and provide a stopping place and refuge for waterfowl. As with any natural habitat, wetlands are important in supporting species diversity and have a complex of wetland values. Even small wetlands are extremely important to the conservation of biodiversity because they provide critical breeding habitat where dispersed populations can exchange genetic material, reducing the risks of extinction (Semlitsch and Brodie 1998). Further, wetlands are dynamic, characterized by fluctuating water, nutrient, and vegetation levels.
Strategically located at the western extremity of the Himalayan range in India and south of the Pamirs, the wetlands of Kashmir serve as important staging grounds for medium and long distance migratory geese, ducks, shorebirds, cranes and other species that breed in the northern latitudes of Central Asia and Siberia. Many of these wetlands are of international and national importance, due to the large populations and diversity of waterbirds and other wetland-associated birds that they support. Of these, Wular Lake and Hokersar have already been included in the Ramsar List considering their importance based on biodiversity and socio-economic aspects. More recently, Wular Lake and associated marshes, viz. Haigam, Hokersar, Mirgund and Shallabugh, have been included in the network of Important Bird Areas (Islam and Rahmani 2004) based on their international importance for birds, though not all of these sites are formally protected.
Out of more than 9,000 bird species of the world, the Indian subcontinent contains 1,300 species, or over 13% of the world's bird species (Grimmett et al. 2004). The subcontinent, rich in avifauna, also boasts 48 bird families out of the total of 75 families in the world. The Kashmir valley has always been considered wealthy in floral and faunal diversity: 250 species of macrophytes, 150-200 species of phytoplankton, 300 taxa of periphytic algae and over 50 species of periphytic rotifers (Zutshi and Gopal 2000). About 187 species of breeding birds belonging to 46 families under 16 orders have been reported from the Kashmir valley. A total of 76 mammalian species belonging to 20 orders have been reported from the Kashmir Valley (Dar et al. 2002). The amphibians and reptiles are mainly represented by frogs, toads, lizards and snakes. The majority of migratory birds in Kashmir are winter migrants. Over half a million migratory birds visit Kashmir wetlands (Central Asia News Net, 2 Dec 2007). In Kashmir wetlands these birds find temperatures that suit their metabolism. The waterfowl migrate to the valley's wetlands and lakes from their breeding grounds in the Palaearctic region, extending from northern Europe to Central Asia. Winter migrants from Central Asia and Siberia are thought to use two main flyways: one in the west along the Indus valley and the other in the north along the River Brahmaputra. Wetlands are relatively safe areas which provide the birds with an abundance of food and safe places for roosting, nesting and moulting.
Unfortunately, all these wetlands, including the Shallabug wetland, are experiencing significant bio-ecological changes, which include loss of habitat through continued human impact, denudation of forest lands, intense agricultural activities, pollution, and erosion in catchment and watershed areas. Further, the impact of fast urbanization, encroachment, siltation and indiscriminate macrophyte removal has seriously affected the use of the wetlands by waterfowl. The concern about habitat destruction and the overall deterioration of the wetland stimulated the need to carry out this study on the current status of the avifauna associated with Shallabug wetland. The main objectives of this research were: to evaluate bird population fluctuation; to determine various threats to waterbirds and their habitats; and to present remedial measures based on the key issues identified.
Site 1 (Kreishibal): This site lies towards the southwest of the wetland, with an average depth of 0.3 m. It has a rich growth of macrophytes like Azolla spp., Lemna minor, Typha angustata, etc. The site is bordered by willow and poplar trees towards the littoral sides. Site 2 (Noorgah): This site lies towards the southeast side of the wetland, with an average depth of 0.75 m. A number of macrophytes were observed at this site, among them Phragmites australis, Typha angustata, Lemna spp., Azolla spp., Nelumbo nucifera, Nymphaea spp., Trapa natans, etc. Site 3 (Shallabug): This site lies towards the northwest of the wetland. It has an average depth of 1.5 m. It is an open water zone rich in Sparganium ramosum, Hydrocharis dubia, Myriophyllum verticillatum, Nymphaea spp., Trapa natans, Potamogeton spp., Azolla spp., etc.
Methods
Transect Method: For the purpose of the present investigation, the study area was divided systematically into three study units of 700 m² each. The visual census method was used for the estimation of the bird population. Visual counting was carried out with the help of a high power field binocular (20×50; SG-9.2) from respective vantage points. Observations were made from 5:00 am to 7:00 am (when the birds come out from their resting places) and 6:00 pm to 7:00 pm (when they return to their resting places). The birds were observed on a monthly basis and the fluctuation in bird population was determined for the different seasons: summer, autumn and winter.
For the winter months, where the flock was no more than a few hundred birds, all were counted from suitable vantage points through binoculars. With larger numbers of birds or with mobile flocks, counts were made in blocks of twenty rather than by counting individual birds.
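The block-counting convention translates into a trivial estimator; the block size of 20 follows the text, while the example counts are hypothetical:

```python
def flock_estimate(blocks_of_twenty, partial=0):
    """Flock size from block counting: full blocks of ~20 birds
    plus a partial count of the remainder."""
    return blocks_of_twenty * 20 + partial

print(flock_estimate(17, 8))  # 17 blocks + 8 stragglers -> 348 birds
```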
Identification of birds was done with the help of the identification keys developed by Bates and Lowther (1952) and with the help of the Department of Wildlife at Haigam. Data were collected on the composition of flocks and the population of individual species at the three study sites.
Fluctuation in Bird Population
Out of the 32 species of birds recorded from the wetland, 13 species were residents, 9 species were summer migrants, and 10 species were winter migrants (Table 1).
At site 1, the birds which were dominant during the summer season were Slaty-headed Parakeet (15), Central Asian Kingfisher (15), Indian Moorhen (13) and Kashmir House Sparrow (11), while Pheasant-tailed Jacana (1) and Gold-fronted Finch (1) were found in low numbers. The birds which were dominant during the autumn season at the same site were House Crow (11) and Common Pariah Kite (11), while Himalayan Griffon Vulture (1) and European Hoopoe (1) were found in low numbers (Table 2).
At site 2, the birds which were dominant during the summer season were Kashmir House Sparrow (22), Indian Moorhen (18) and House Crow (17), while Himalayan Griffon Vulture (1) and Gold-fronted Finch (2) were found in low numbers. The birds which were dominant during the autumn season were House Crow (30) and Common Pariah Kite (20), while White-cheeked Bulbul (1) and Rufous-backed Shrike (2) were found in low numbers (Table 3).
Results of the annual monitoring programme were analyzed to assess the trends in population changes and changes in species composition. The bird population shows fluctuation with site differences as well as with changing seasons. Bird population fluctuation in Shallabug wetland during the study period (July to November) is depicted graphically (Fig. 1). Site 1 is an open site with scattered trees, and hence fewer birds were recorded there. Site 2 is covered by dense emergent vegetation and Salix trees, providing space for nesting, breeding and resting; this site hence shows the maximum number of birds in the summer season (residents and summer migrants). Site 3 is located towards the residential side of the wetland and hence shows a large number of resident birds in the autumn season.
The autumn season shows a decrease in the number of birds, both residents and summer migrants. This is because the summer migrants leave on migration and the resident birds move towards residential areas and nearby paddy fields. Fire in the emergent vegetation, such as Typha species, in October and November prepares grounds for the winter migrants but hampers the nesting and breeding of summer migrants. Until November/December, Shallabug wetland was almost completely dry, with water being diverted and used for irrigation and other purposes, resulting in complete non-use by any waterfowl until the wetland was refilled with water. In the month of January, there was an influx of winter migrants into the wetland, and a total of approximately 1,32,000 (i.e., 132,000) birds belonging to 7 species was observed. Of the total number of birds, Mallard, Common Teal, Pintail and Coot together represented up to 75-78%. Areas with dense emergent macrophyte vegetation were preferred by mallards, whereas pochards, coot, gadwall and geese preferred open waters. Trapa species were found to provide the best food for various bird species, while Typha spp., Phragmites spp. and some other emergent macrophytes provided food and the best places for resting and breeding.
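The seasonal aggregation and percent-composition figures quoted above can be reproduced from count data with a few lines of pandas. The records below are hypothetical stand-ins for the field counts, included only to show the shape of the calculation:

```python
import pandas as pd

counts = pd.DataFrame([
    ("Indian Moorhen", "summer", 1, 13),
    ("House Crow",     "autumn", 2, 30),
    ("Mallard",        "winter", 3, 40000),
    ("Common Teal",    "winter", 3, 30000),
    ("Coot",           "winter", 3, 25000),
], columns=["species", "season", "site", "count"])

# total birds per season and species
print(counts.groupby(["season", "species"])["count"].sum())

# percent composition of the winter flock
winter = counts[counts["season"] == "winter"]
share = 100 * winter.groupby("species")["count"].sum() / winter["count"].sum()
print(share.round(1))
```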
It has been observed that the wetland was mostly visited by winter migratory fauna, because of the severe cold and the non-availability of food for survival in Siberia and other cold areas of Europe; 37 species of waterfowl have been reported to breed in western Siberia. Fig. 6 depicts some birds recorded at various sites of the study area.
Threats to Waterbirds and Their Habitats
A recent risk to waterbirds, and the mass deaths of different migratory species due to a highly pathogenic avian influenza virus (strain H5N1) acquired from domestic poultry or other sources in east, southeast and north-central Asia, has highlighted the need for greater attention to understanding the impact of the virus on waterbirds and the potential role of waterbirds in its spread. As the state of Jammu and Kashmir shares international borders with Pakistan and China, countries in which the virus has been recorded, there is a high risk of incursion of the virus into the waterbirds of the Valley.
The specific threats to waterbirds identified in this study are:
Siltation, eutrophication, excessive weed infestation and degradation of water quality.
Lack of formal conservation status (such as protected areas) for most sites, leading to poaching. Thousands of geese and ducks are hunted by poachers in the unprotected areas, driving their movement to protected areas such as Haigam during the day and their reverse movement at night.
Collection of eggs and chicks of nesting waterbirds, which constitutes a loss to breeding success.
Spread of aquatic vegetation over open water areas leading to habitat loss of birds that prefer open water.
Encroachment by agriculture and urbanisation, resulting in a decrease in the size and functions of many wetland areas and affecting waterbirds.
Heavy grazing leading to destruction of breeding and feeding grounds of birds.
Unregulated fishing and overfishing in some areas, resulting in loss of fish and invertebrate prey and disturbance to migrants, seasonal migrants and resident waterbirds.
The key issues identified based on observations and assessments in study sites are:
Absence of comprehensive baseline information on waterbirds necessary for trend analysis and planning.
Intense poaching in unprotected areas, leading to a decline in waterbird populations.
Habitat modifications due to changes in natural water regimes and human activities.
Rehabilitation of threatened/rare species
Maximizing the carrying capacity of the wetlands and associated marshes for waterbirds that use a range of preferred habitats for feeding, resting/roosting and nesting requires considerable planning and location-specific knowledge. Adaptive management should be applied based on available knowledge of the management of the marsh vegetation and water depths. Through experimentation within sample plots, different vegetation management regimes can be tested, during which continuous monitoring of waterbird diversity, abundance and habitat use, as well as floral species diversity, abundance and cover, and aquatic faunal diversity and abundance, should be undertaken. Actions to manage the aquatic vegetation (species, quality and abundance/densities) should be undertaken with a complete understanding of its importance for waterbirds, fishes and other aquatic fauna.
Habitat restoration
Regulation of water levels is critical to the maintenance of species diversity and abundance. Areas of open water need to be created to cater to the requirements of some bird species, particularly diving ducks for feeding and many other species for resting. The food and feeding habits of different species need to be investigated to advise on their precise needs. Thereafter, a detailed survey of the wetland is required to ensure a proper proportion of open water area and surrounding vegetation belts.
Control of poaching
Control of poaching requires an understanding of the modus operandi, impact on species and socio-economic impacts to enable appropriate responses to be undertaken.
Main locations of poaching, seasonality, main species taken and numbers per season should be quantified.
For the resident species, an analysis of poaching of eggs, chicks and adults at nests and disturbance through cattle grazing, reed harvesting, lotus/other plant collection should be undertaken.
Conclusion and Recommendations
The autumn season shows a decrease in the number of birds (both residents and summer migrants), because summer migrants depart on migration and resident birds move towards residential areas and nearby paddy fields. The bird population fluctuates with site differences as well as with changing seasons, and an overall decreasing trend in the bird population was observed in the autumn season. Fewer birds were recorded at Site 1 because this site is an open site with scattered trees. Population density of birds shows a direct relationship with the density of emergent vegetation plus the density of trees. Mathematically, ρx = K(ρy + ρz) (formula © Imran Ahmad), where ρx is the avifaunal density in the wetland, ρy denotes the emergent vegetation density, ρz represents the tree density, and K is a proportionality constant depending mainly upon the environmental conditions. As Site 2 is covered by dense emergent vegetation and Salix trees providing space for nesting, breeding and resting, this site shows the maximum number of birds in the summer season (residents and summer migrants). Site 3 is located towards the residential side of the wetland and hence shows an increased number of resident birds in the autumn season.
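As a minimal illustration of this relation, the sketch below evaluates the proposed density formula for hypothetical survey values; the site densities and the constant K are invented for demonstration and would have to be fitted from field counts.

```python
# Minimal sketch of the proposed avifaunal-density relation: rho_x = K * (rho_y + rho_z).
# All values below are hypothetical; K must be estimated from field data.

def avifaunal_density(veg_density: float, tree_density: float, k: float) -> float:
    """Predicted bird density (birds/ha) from emergent-vegetation and tree densities."""
    return k * (veg_density + tree_density)

# Hypothetical site measurements (units: plants/ha and trees/ha).
sites = {"Site 1": (120.0, 15.0), "Site 2": (480.0, 60.0), "Site 3": (300.0, 35.0)}
K = 0.05  # assumed proportionality constant

for name, (rho_y, rho_z) in sites.items():
    print(f"{name}: predicted bird density = {avifaunal_density(rho_y, rho_z, K):.1f} birds/ha")
```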
In the present research, it was observed that the wetland is mostly visited by winter migratory fauna, because of the severe cold and the non-availability of food for their survival in Siberia and other cold areas in Europe. Of the 37 species of waterfowl reported to breed in western Siberia, 15 species were reported from Hokersar wetland and 9 species were observed in Shallabug wetland.
In order to protect the Shallabug Wetland from the major threats (mentioned above), the remedial measures suggested in this research work should be given due consideration.
Figure 1: Bird population fluctuation in Shallabug wetland, Kashmir during the study period (July to November, 2008)
Figure 2: Residents and summer migrants in Shallabug Wetland during the study period (July to November)
Table 1: Resident birds, summer migrants & winter migrants (July to Nov. 2007) in Shallabug wetland, Kashmir
Common and scientific names follow BirdLife International (2006).
AGBL4 promotes malignant progression of glioblastoma via modulation of MMP-1 and inflammatory pathways
Introduction Glioblastoma multiforme (GBM), the most common primary malignant brain tumor, is notorious for its aggressive growth and dismal prognosis. This study aimed to elucidate the molecular underpinnings of GBM, particularly focusing on the role of AGBL4 and its connection to inflammatory pathways, to discover viable therapeutic targets. Methods Single-cell sequencing was utilized to examine the expression levels of AGBL4 and functional assays were performed to assess the effects of AGBL4 modulation. Results Our findings identified the significant upregulation of AGBL4 in GBM, which correlated with adverse clinical outcomes. Functional assays demonstrated that AGBL4 knockdown inhibited GBM cell proliferation, migration, and invasion and influenced inflammatory response pathways, while AGBL4 overexpression promoted these activities. Further investigation revealed that AGBL4 exerted its oncogenic effects through modulation of MMP-1, establishing a novel regulatory axis critical for GBM progression and inflammation. Discussion Both AGBL4 and MMP-1 may be pivotal molecular targets, offering new avenues for targeted therapy in GBM management.
Introduction
Gliomas are the most common malignant primary tumors in the central nervous system; they derive from glial or precursor cells and encompass diverse histopathological subtypes, including GBM, astrocytoma, oligodendroglioma, ependymoma, and oligoastrocytoma. GBM is the most common and aggressive form, making up the majority of cases (1). Despite standard treatment involving maximal safe resection, chemotherapy, radiotherapy, and tumor treating fields, GBM remains therapeutically challenging due to its aggressive nature, tendency to infiltrate the surrounding brain, and development of resistance to therapies (2). This results in a dismal 5-year survival rate of merely 4.7% (3). Consequently, there is an urgent need to elucidate the molecular underpinnings of GBM to improve diagnostic efficacy and develop novel targeted therapies (4).

The advent of single-cell sequencing technology has revolutionized our understanding of cellular processes in biology (5). By enabling the measurement of individual cell genomes, single-cell sequencing facilitates the analysis of differentially expressed genes (DEGs), the identification of key factors dysregulated during tumorigenesis, and the construction of regulatory networks and clonality trees within tumor lesions. It also enables the study of tumor heterogeneity across multiple levels, which is crucial for understanding resistance to therapy and for creating new treatment approaches (6). Therefore, single-cell sequencing has been widely employed for detecting mutations and studying epigenomic changes during tumor progression.

Emerging evidence implicates the ATP/GTP-binding protein-like 4 gene (AGBL4) in various pathological processes, including antituberculosis drug-induced hepatotoxicity (7), cardiometabolic risk (8), and colorectal cancer, where it is anticipated to serve as a novel biomarker (9). However, its role in gliomas, particularly GBM, remains largely unexplored. In this study, we employed single-cell sequencing to confirm high expression levels of AGBL4 in GBM tissues linked to poor outcomes, supported by data from The Cancer Genome Atlas (TCGA) and Changhai Hospital. Functional assays demonstrated its capacity to promote GBM cell proliferation, migration, and invasion. Subsequent investigations identified matrix metalloproteinase-1 (MMP-1) as a key gene increased in GBM tissues and a likely target of AGBL4. Reducing AGBL4 levels significantly hindered GBM growth in xenograft models, a process that MMP-1 could reverse.

Further analysis indicated that AGBL4-related DEGs such as MMP-1, Fos proto-oncogene (FOS), and FosB proto-oncogene (FOSB) are involved in the interleukin (IL)-17 signaling pathway, suggesting that AGBL4 and MMP-1 could influence GBM progression via inflammatory pathways. Subsequent analyses showed a complex relationship among AGBL4, MMP-1, and other inflammatory genes in regulating the GBM tumor microenvironment, affecting tumor behavior and patient survival. These findings highlight the potential of inflammation-related factors as focal points for future research and the development of novel therapeutic strategies for GBM.
Patients and tissue samples
Specimen collection and clinical data were approved by the Research Ethics Committee of Changhai Hospital, Naval Medical University. Written informed consent was secured from each participant. The study included three primary and three recurrent GBM samples from six Chinese patients for single-cell sequencing. Additionally, eight fresh GBM samples and four normal brain tissues from traumatic injury patients were obtained. Sixty-five paraffin-embedded primary GBM specimens collected from January 2005 to December 2019, with clinical data and follow-up, were analyzed. GBM patient datasets from the TCGA database provided external validation.
Single-cell sequencing
GBM sample single-cell sequencing libraries were constructed following the Chromium Next GEM Single Cell 3' Reagent Kits v3.1 protocol. Gene expression matrices were generated and processed using Cell Ranger software on the 10x Genomics platform. Genomic and transcriptomic mapping was done using the Spliced Transcripts Alignment to a Reference (STAR) software, producing gene count matrices per cell. Cell filtration, standardization, classification, differential gene expression analysis, and marker gene screening were conducted using the Seurat package in RStudio. Sequencing was outsourced to Oebiotech Co., Ltd., Shanghai, China.
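The authors ran these steps with Seurat in R; purely as an illustration of the same workflow (filtering, normalization, clustering, t-SNE embedding, and marker screening), a minimal Python analogue using scanpy is sketched below. The input path and all thresholds are hypothetical, not the authors' values.

```python
# Illustrative single-cell workflow in Python with scanpy (the paper used Seurat in R;
# this mirrors the same steps: filtering, normalization, clustering, marker screening).
import scanpy as sc

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")  # hypothetical Cell Ranger output path

# Basic quality filtering (thresholds are illustrative only).
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalization and log transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction, clustering, and a t-SNE embedding as in Figure 1A.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.tsne(adata)

# Differential expression / marker gene screening per cluster.
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```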
Cell viability, colony formation, scratch assay, and Matrigel-transwell assay
Cell viability was assessed using the Cell Counting Kit-8 (CCK-8, cat CK04-01, Dojindo, Japan) by measuring the optical density at 450 nm at 24, 48, 72, 96, and 120 hours post-treatment. Colony formation efficiency was evaluated by seeding cells in 6-well plates and staining emerging colonies with 0.1% crystal violet. To assess cell migration, a scratch assay was performed: cells were grown in 6-well plates and a scratch was made in the center of the wells using a 200 µL pipette tip. After washing away the cellular debris and further incubating, images of the scratch were captured to evaluate the migration rate by measuring the gap closure. For invasion assays, the upper chamber of a transwell apparatus was coated with Matrigel (cat CLS3422, 8-µm pores, Millipore, MA, US) and seeded with 5 × 10⁴ cells in 100 µL of serum-free medium. The lower chamber was filled with 600 µL of complete culture medium. After overnight incubation, cells that migrated to the underside of the membrane were stained with 0.1% crystal violet, and five random fields were counted under a light microscope.
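As an illustration of how scratch-assay migration can be quantified from the captured images, the sketch below computes percent gap closure from measured gap widths; the numbers are hypothetical, and the width measurements themselves would come from image analysis.

```python
# Hypothetical scratch-assay quantification: percent gap closure over time.
# Gap widths (in micrometers) would be measured from the captured images.

def gap_closure_percent(width_t0: float, width_t: float) -> float:
    """Migration rate expressed as percent of the initial scratch width closed."""
    return 100.0 * (width_t0 - width_t) / width_t0

width_0h, width_24h = 800.0, 320.0  # invented example measurements
print(f"Gap closure at 24 h: {gap_closure_percent(width_0h, width_24h):.1f}%")
```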
Xenograft animal model
Male athymic nu/nu mice aged 6 weeks, obtained from Shanghai Jiao Tong University, were used in compliance with guidelines set by the Institutional Animal Care and Use Committee of Changhai Hospital, Naval Medical University. For tumor induction, we used three groups of mice (6 mice per group) injected with different cell lines: U87-MG control cells (U87MG-NC), AGBL4-knockdown U87-MG cells (U87MG-AGBL4-KD), and U87-MG cells with both AGBL4 knockdown and MMP-1 overexpression (U87MG-AGBL4-KD+MMP1-OE). Each mouse was anesthetized and its head secured in a stereotaxic instrument for precise intracranial injection of 5 × 10⁵ cells into the corpus striatum. Post-injection, the mice were monitored every three days for changes in behavior and body weight. Magnetic resonance imaging (MRI) was utilized to assess tumor development when clinical signs such as reduced eating, decreased movement, circling behavior, or weight loss were observed. Tumor volumes were calculated based on the MRI data, and body weight differences among the three groups were compared on the day of MRI scanning. Mice were euthanized at humane endpoints, which were clearly defined by severe neurological dysfunction, inability to access food or water, unrelieved pain, or other signs indicating a severe decline in quality of life. The overall survival periods were recorded and the brains were harvested for further histopathological examination.
Statistics
Statistical analyses were performed using SPSS software (version 19.0). Student's t-test was used to compare mean differences between two groups. Kaplan-Meier survival analysis and the log-rank test were employed to evaluate survival outcomes among different groups. All statistical analyses were two-sided, and P < 0.05 was considered statistically significant. Statistical graphs were drawn using GraphPad Prism 7 software (GraphPad Software Inc., San Diego, CA, USA).
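The paper ran these tests in SPSS and plotted them in Prism; a minimal Python equivalent of the same two analyses (two-group t-test and Kaplan-Meier with a log-rank test) is sketched below with invented data.

```python
# Minimal Python analogue of the paper's statistics (done there in SPSS):
# Student's t-test for two groups and Kaplan-Meier survival with a log-rank test.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented example data for two experimental groups.
group_a = np.array([0.91, 1.05, 0.98, 1.10, 0.95])
group_b = np.array([1.42, 1.55, 1.38, 1.60, 1.47])
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t-test: t = {t_stat:.2f}, P = {p_val:.4f}")

# Invented survival times in days and event indicators (1 = death observed).
time_low = np.array([420, 510, 600, 380, 700]);  event_low = np.array([1, 1, 0, 1, 1])
time_high = np.array([200, 250, 310, 180, 290]); event_high = np.array([1, 1, 1, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(time_low, event_low, label="low expression")  # Kaplan-Meier estimate for one group
result = logrank_test(time_low, time_high,
                      event_observed_A=event_low, event_observed_B=event_high)
print(f"log-rank: P = {result.p_value:.4f}")  # P < 0.05 taken as significant, as in the paper
```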
Methods for Hematoxylin-Eosin (H&E) staining and bioinformatics analysis are detailed in the Supplementary Materials.
AGBL4 is highly expressed in GBM and predicts poor prognosis
Single-cell sequencing was performed on both primary and recurrent GBM specimens. Dimensionality reduction via the t-distributed stochastic neighbor embedding (t-SNE) algorithm revealed nineteen distinct clusters (Figure 1A). AGBL4 expression was observed across a majority of these tumor clusters (Figure 1B), with a significant upregulation in recurrent GBM compared to primary GBM (Figure 1C). Survival analysis using the TCGA database indicated that elevated AGBL4 levels were associated with a worse prognosis in GBM patients (Figure 1D).
To validate the role of AGBL4 in GBM prognosis, we analyzed AGBL4 expression in normal brain tissues (n=4) and GBM tissues (n=8) through RT-PCR and WB. The WB results confirmed a marked increase in AGBL4 levels in GBM tissues relative to normal brain samples (Figures 1E, F). Immunohistochemical analysis was conducted on a primary GBM tissue microarray. Based on the scoring criteria outlined above, samples were classified into low and high AGBL4 expression groups. Representative images of the low (Figure 1G) and high (Figure 1H) AGBL4 groups illustrate the distinctions in staining intensity and cellular distribution. Survival analysis demonstrated a significant association between AGBL4 expression levels and patient outcomes. Specifically, patients categorized into the high AGBL4 group (scores ≥4) exhibited notably shorter survival times compared to those in the low AGBL4 group (scores <4) (P=0.017) (Figure 1I). Altogether, these results demonstrate that AGBL4 expression is significantly elevated in GBM and that its overexpression is predictive of poor prognosis in both our cohort and the TCGA dataset.
Knockdown of AGBL4 inhibits GBM cell proliferation, migration, and invasion
To determine the roles of AGBL4 in GBM cell functions, we first analyzed AGBL4 expression in various GBM cell lines. Using the 2^−ΔΔCt method, RT-PCR results showed differential expression levels of AGBL4, with U87-MG and A172 cells exhibiting higher expression compared to T98G and U251-MG cells (Figure 2A). Additionally, WB analysis confirmed these findings, showing protein expression levels consistent with the RT-PCR results (Figure 2B). Following the knockdown of AGBL4 using the most effective shRNA sequence (shRNA2: CCGGACCATAGGAAGAACT) in U87-MG and A172 cell lines, WB analysis confirmed the efficient reduction of AGBL4 expression. The knockdown efficiency was quantified at approximately 70% in the U87-MG cell line and around 65% in the A172 cell line (Figures 2C-E).
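For readers unfamiliar with the 2^−ΔΔCt (Livak) calculation used here, a minimal sketch follows; the Ct values are invented, and the reference (housekeeping) gene is an assumption, since the paper does not state which one was used.

```python
# Relative expression by the 2^(-DDCt) (Livak) method.
# Ct values are invented; the reference gene is assumed, not stated in the paper.

def ddct_fold_change(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of target gene in sample vs. control, normalized to a reference gene."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Example: AGBL4 in a high-expressing line vs. a low-expressing line.
fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=17.8,
                        ct_target_control=27.6, ct_ref_control=17.9)
print(f"Relative AGBL4 expression (fold change): {fold:.2f}")
```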
Functional assays were then performed to investigate the effect of AGBL4 on GBM cell pathology. The CCK-8 assay and colony formation assay, both indicative of cell proliferative capacity, showed that AGBL4 knockdown significantly decreased proliferation in U87-MG and A172 cells (Figures 3A-F). Scratch assays demonstrated that AGBL4 knockdown also decreased the migratory capabilities of these cells (Figures 3G-J). Finally, Matrigel-transwell assays provided quantitative and visual evidence of the diminished invasion capacity following AGBL4 knockdown (Figures 3K-N).
These findings collectively suggest that AGBL4 is integral to the proliferative, migratory, and invasive characteristics of GBM cells, confirming its potential as a target for GBM therapy.
Overexpression of AGBL4 promotes GBM cell proliferation, migration and invasion
To investigate the roles of AGBL4 in GBM growth, we overexpressed AGBL4 in T98G and U251-MG cells. WB results verified the overexpression of AGBL4 in these cells (Figure 2C). RT-PCR also confirmed that the relative expression levels of AGBL4 were significantly increased in both the U251-MG and T98G overexpression groups (Figures 2F, G). The CCK-8 assay demonstrated that AGBL4 overexpression enhanced the proliferation ability of GBM cells (Figures 4A, B). Additionally, the colony formation assay revealed that AGBL4 overexpression led to an increased number of colonies (Figures 4C-F). Scratch assays indicated that high expression of AGBL4 promoted the migration of GBM cells (Figures 4G-J). Furthermore, the Matrigel-transwell assays demonstrated a significant increase in invasion, with more cells visible in the fields of view compared to the controls (Figures 4K-N). These findings highlight a critical role for AGBL4 in promoting the proliferation, migration, and invasion of GBM cells.
AGBL4 knockdown significantly reduces a range of classic factors associated with cancer-related pathways
To elucidate the molecular mechanism of AGBL4 in GBM, we conducted transcriptome sequencing on A172 cells with or without AGBL4 knockdown. The heatmap revealed distinct differences and pairwise correlations in gene expression between the various GBM cell samples (Figure 5A). The analysis identified 42 DEGs, with 30 up-regulated and 12 down-regulated (Supplementary Figure 1). Bioinformatics analysis indicated that these DEGs were primarily involved in processes such as enzyme binding, positive regulation of protein complex assembly, positive regulation of the TRAIL-activated apoptotic signaling pathway, and negative regulation of microtubule motor activity, according to GO annotations (Figure 5B). Furthermore, KEGG enrichment analysis suggested that AGBL4-associated DEGs might participate in pathways related to microRNAs in human cancer and contribute to the IL-17 signaling pathway, which is frequently used as a reference index for judging the malignancy of gliomas (Figure 5C). Based on these findings, we speculated that AGBL4-related DEGs might play significant roles in tumor progression within the central nervous system.
AGBL4 promotes GBM cell proliferation, migration, and invasion abilities via MMP-1
From the DEGs identified in our transcriptome analysis (Supplementary Table 1), eight candidate genes were selected based on fold change and prognostic correlation in the TCGA database (Table 1). RT-PCR analysis revealed that among these candidates, MMP-1 exhibited the most significant differential expression (Supplementary Figure 2), identifying it as a target for further investigation to clarify the specific signaling pathway through which AGBL4 may promote GBM tumor progression.
Microarray data revealed elevated MMP-1 expression in GBM tissues, and samples were categorized into high and low MMP-1 groups. Histologically, cells in the low MMP-1 group displayed uniform morphology, with regular arrangement and clear tissue structures, as confirmed by H&E staining. In contrast, the high MMP-1 group exhibited cells of varying sizes, irregular shapes, disorganized arrangement, significant nuclear atypia, and frequent mitoses, indicating a more aggressive cellular phenotype (Supplementary Figure 3A). Survival analysis showed that patients with high MMP-1 expression had significantly shorter survival times than those with low expression (P=0.0149), indicating that MMP-1 levels are inversely correlated with GBM patient survival (Supplementary Figure 3B).
RT-PCR confirmed that, compared to the U87-MG negative control (U87MG-NC group), knocking down AGBL4 (U87MG-AGBL4-KD2 group) significantly reduced the expression of MMP-1. Overexpressing MMP-1 in the AGBL4-knockdown cells (U87MG-AGBL4-KD2+MMP1-OE group) restored MMP-1 expression to levels comparable with the control group (Figures 6A, B). Overexpression of MMP-1 on the basis of AGBL4 knockdown counteracted the inhibitory effect of AGBL4 reduction on GBM cells, manifested as improved proliferation of AGBL4-knockdown U87-MG and A172 cells after complementing MMP-1 in the CCK-8 assay (Figures 6C, D). Colony formation assays further supported this trend, with the MMP1-OE group demonstrating the strongest ability to form colonies. The AGBL4-KD2+MMP1-OE group's colony-forming capacity was comparable to the NC, while the AGBL4-KD2 group had the least robust colony-forming ability, reinforcing the significant role of MMP-1 in GBM cell proliferation (Figures 6E-G). The Matrigel-transwell and scratch assays indicated that the MMP1-OE group exhibited the highest levels of invasion and migration, followed by the AGBL4-KD2+MMP1-OE group, which displayed levels similar to the NC group. Both of these groups exhibited enhanced capabilities compared to the AGBL4-KD2 group, which showed the lowest levels of invasion and migration (Figure 7).
Inhibition of AGBL4 suppresses GBM progression and prolongs survival via MMP-1 in animal models
To determine the effect of AGBL4 and MMP-1 in GBM in vivo, we injected U87MG-NC, U87MG-AGBL4-KD2 and U87MG-AGBL4-KD2+MMP1-OE cells into nude mice (n=6 per group). After intracranial tumor implantation, the mice were monitored every 3 days for behavioral changes and weight loss. On approximately day 15, MRI was performed to assess tumor growth when clinical symptoms were noted. The MRI data revealed that the U87MG-AGBL4-KD2 group exhibited significantly slower tumor growth compared to the U87MG-NC group. Conversely, the U87MG-AGBL4-KD2+MMP1-OE group showed accelerated tumor progression relative to the U87MG-AGBL4-KD2 group (Figures 8A, C). Survival analysis indicated that the U87MG-AGBL4-KD2 mice had the longest survival time, followed by the U87MG-AGBL4-KD2+MMP1-OE and U87MG-NC groups (Figure 8B). H&E staining of the nude mice's brain tissues showed more mitotic figures in U87MG-NC mice, followed by U87MG-AGBL4-KD2+MMP1-OE mice, while the cells of AGBL4-KD2 mice showed relatively less irregular morphology and fewer mitotic figures (Figure 8D). The protein content of tumor cells differed among the three groups of nude mice; that is, the degree of tumor progression was quite different. The proliferation level and degree of malignancy in U87MG-NC and U87MG-AGBL4-KD2+MMP1-OE mice were both higher than in AGBL4-KD2 mice (Figure 8E).
AGBL4-MMP-1 axis is associated with inflammatory response pathways in GBM
Enrichment analysis of AGBL4-related DEGs suggests that 3 genes, namely MMP-1, FOS, and FOSB, are significantly concentrated in the IL-17 signaling pathway. This may indicate that upregulated AGBL4, along with downstream MMP-1, could intervene in the progression of GBM by influencing key components within inflammation-related pathways. In the TIMER database, an immune cell correlation analysis of MMP-1, FOS, and FOSB revealed a negative correlation between MMP-1 gene expression and the infiltration levels of B cells, CD8+ T cells, CD4+ T cells, and macrophages, after purity adjustment. Conversely, a positive correlation with dendritic cell infiltration was observed. Meanwhile, FOS gene expression showed a positive correlation with the infiltration levels of CD4+ T cells, neutrophils, and dendritic cells. In addition, FOSB gene expression demonstrated a negative correlation with CD4+ T cell and macrophage infiltration levels (Figure 9A). These findings suggest that the expression levels of MMP-1, FOS, and FOSB are closely related to immune cell activity in GBM, hinting at the role of these genes, particularly MMP-1, in modulating the GBM immune microenvironment.
PPI network and correlation analysis of MMP-1 and inflammatory response genes
We then constructed an interaction network integrating MMP-1 with 737 genes from the Inflammatory Response annotation cluster (GO:0006954) of the GO database to identify key molecules interacting with MMP-1, which resulted in a PPI network comprising 15 nodes and 87 edges (Figure 9B). Mining this network yielded the top 10 hub genes, which were then subjected to GO and KEGG enrichment analyses. The results, as shown in Figures 9C, D, revealed that these hub genes are predominantly localized to the cell surface, extracellular space, and extracellular region, and are involved in various inflammatory and immune regulatory processes such as the inflammatory response, positive regulation of transcription from the RNA polymerase II promoter, and positive regulation of interleukin-6 production. KEGG pathway analysis also indicated significant enrichment in several pathways related to inflammation and immune responses. Collectively, these findings underscore the role of genes interacting with MMP-1 in regulating inflammatory responses, immune signal transduction, and cell proliferation, invasion, and migration, indirectly reflecting the importance of MMP-1 in maintaining tissue structure and signal transduction within the inflammatory and tumor microenvironment.
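Hub-gene extraction from a PPI network is typically done by ranking nodes on connectivity. A minimal sketch with networkx follows; the edge list is invented for illustration rather than taken from the paper's actual network.

```python
# Hypothetical hub-gene ranking from a PPI network by node degree.
# The edge list is invented; the paper's network combined MMP-1 with the
# inflammatory-response gene set (GO:0006954).
import networkx as nx

edges = [
    ("MMP1", "THBS1"), ("MMP1", "TIMP1"), ("MMP1", "TGFB1"), ("MMP1", "STAT3"),
    ("THBS1", "TGFB1"), ("STAT3", "NFKB1"), ("NFKB1", "TLR2"), ("TIMP1", "STAT3"),
]
g = nx.Graph(edges)

# Rank nodes by degree and take the top hubs (the paper used the top 10).
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:5]
for gene, degree in hubs:
    print(f"{gene}: degree {degree}")
```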
To further examine the correlation between MMP-1 expression levels and the expression of inflammatory response genes in GBM samples, we utilized data from the TCGA database. The results indicated a moderate positive correlation between MMP-1 and several genes, including NFKB1, SELE, TGFB1, THBS1, TIMP1, and TNFAIP6. A weaker positive correlation was observed between MMP-1 and PTX3, STAT3, and TLR2 (Supplementary Figure 4). These findings corroborate, at the expression level, the involvement of MMP-1 with these genes in certain biological processes or pathological mechanisms within GBM, particularly in pathways related to the inflammatory response.
Mutation profile and prognostic value of inflammatory response genes interacting with MMP-1
Figure 10A presents the mutation profile of the 14 inflammatory response genes that interact with MMP-1 in GBM from the TCGA database. Over 10% of the samples harbor mutations in at least one of these genes, with THBS1 exhibiting the highest mutation frequency, nearing 4%. The predominant type of mutation in most inflammation-related genes is the missense mutation. VCAM1 harbors frameshift deletions, while THBS1, VCAM1, and TGFB1 contain nonsense mutations, and NOX4 shows splice site mutations. These mutation data provide insight into the functional roles of MMP-1 and associated inflammatory response genes in GBM, suggesting they may influence protein function through alterations in amino acid sequences, premature termination of protein translation, protein inactivation, or changes in protein structure, thereby affecting inflammatory and immune responses and ultimately contributing to tumor progression. Bioinformatic analyses of these inflammatory response genes revealed that high expression levels of THBS1 correlate with a lower overall survival rate in GBM patients (Figure 10B), implying that THBS1 may be an adverse prognostic factor. Figure 10C reconfirms the expression levels of THBS1 in GBM from the TCGA database compared to normal brain tissue in the GTEx database, where THBS1 is significantly overexpressed in tumor tissues. These findings may signify a detrimental role of THBS1 in the pathological process of GBM, where its elevated expression reflects more aggressive biological characteristics of the tumor and provides direction for the development of future biomarkers.
Combining immune cell correlation analysis, PPI network construction, gene mutation profiling, and correlative studies, we can tentatively conclude that the interactions among AGBL4, MMP-1, and other inflammatory response genes, especially THBS1, may constitute a complex network in the pathological process of GBM. This network potentially regulates the tumor microenvironment, influencing tumor proliferation, invasion, migration, and patient survival. These findings highlight the potential of inflammation-related factors as focal points for future research, offering the possibility to further explore the precise mechanisms of these molecules and provide critical information for the development of novel therapeutic strategies.
Discussion
AGBL4, also known as cytosolic carboxypeptidase 6, is part of the family of enzymes that catalyze the deglutamylation of polyglutamate side chains on proteins such as tubulins and nucleosome assembly proteins (10). Polyglutamylation is a reversible post-translational protein modification that plays a critical role in tubulin regulation as well as in cellular processes such as chromatin remodeling and hematopoiesis (10,11). In addition, alterations in polyglutamylation levels have been associated with several pathologies, including neurodegenerative processes and cancer progression (12,13). Although the role of AGBL4, as a member of the cytosolic carboxypeptidase family, in various cellular and pathological processes such as antiviral activity, immunomodulatory activity, and renal adenocarcinoma is documented (14)(15)(16), its function in central nervous system tumors, particularly GBM, has been less explored. Our study addresses the involvement of AGBL4 in GBM pathogenesis and its potential mechanism of action through the modulation of MMP-1.
Our findings indicate that elevated AGBL4 expression correlates with poor prognosis in GBM patients, which aligns with data from both TCGA and our tissue microarray experiments. The promotion of GBM cell proliferation, invasion, and migration by AGBL4 was substantiated through phenotypic experiments. Transcriptomic and bioinformatic analyses further revealed that AGBL4-related DEGs were enriched in cancer-associated microRNA-related pathways and the IL-17 signaling pathway, the latter being notably related to malignancy in central nervous system tumors. This suggests a possible link between AGBL4's oncogenic effects and inflammatory pathways, highlighting its role in the immune responses of the tumor microenvironment.
The matrix metalloproteinase family, particularly MMP-1, known for its role in cleaving the collagenous extracellular matrix (17), appears to be a critical downstream effector of AGBL4. Elevated MMP-1 expression is a hallmark of highly malignant gliomas and is implicated in enhancing tumor invasiveness and malignancy (18,19). Pullen et al. demonstrated a regulatory pathway linking nitric oxide to high-grade glioma cell motility via MMP-1 (20). Anand et al. identified that EGFR regulates MMP-1 predominantly through the MAPK signaling pathway in GBM cells (21). Malik et al. found an association between the 2G/2G genotype and 2G allele of the -1607 MMP-1 polymorphism and GBM occurrence (22). Additionally, increased MMP-1 and PAR1 expression correlates with higher histological malignancy and poorer clinical outcomes in gliomas (23). While much research has focused on MMP-1's downstream mechanisms in gliomas, its upstream regulators remain underexplored, and identifying them is crucial for understanding glioma invasiveness.
Our study not only confirms the upregulation of MMP-1 in high-grade gliomas but also identifies AGBL4 as a novel upstream regulator of MMP-1. Existing studies on AGBL4 are relatively few and mainly focus on its role in cellular components (24), neurodegeneration (25), and immunomodulatory activities (16,26). However, its implications in oncology, particularly in GBM, have been less explored. Our study marks a significant advancement by first identifying the differential expression of AGBL4 in GBM and verifying its negative correlation with patient survival through analysis of public databases and gene chips. This research links AGBL4 to the aggressive nature of central nervous system tumors at the molecular level for the first time. Further, our experimental findings underscore the critical role of AGBL4 in tumor biology, revealing that knocking down AGBL4 inhibits the proliferation, migration, and invasion of GBM cells, thereby highlighting its importance in tumor viability and progression. Importantly, this research not only pioneers the investigation of the interaction of AGBL4 with GBM, but also introduces the novel concept that AGBL4 may contribute to GBM in an MMP-1-dependent manner.
In addition, the interaction between AGBL4 and MMP-1 highlights a potential connection to the inflammatory processes within the tumor microenvironment of GBM. The upregulation of MMP-1, mediated by AGBL4, may not only promote tumor invasiveness through structural modifications but could also exacerbate inflammation, thereby creating a more conducive environment for tumor growth and spread. Our data indicate that the expression levels of MMP-1, FOS, and FOSB are closely related to immune cell activity in GBM, suggesting their pivotal roles in modulating the GBM immune microenvironment.
Our constructed PPI network, integrating MMP-1 with genes from the Inflammatory Response cluster of the GO database, identified key molecules that interact with MMP-1. These interacting genes are primarily involved in the inflammatory response, positive regulation of transcription from the RNA polymerase II promoter, and positive regulation of interleukin-6 production, indirectly reflecting the importance of MMP-1 in maintaining tissue structure and signal transduction within the inflammatory and tumor microenvironment.
Further analysis from the TCGA database on the correlation between MMP-1 expression levels and the expression of inflammatory response genes in GBM samples showed a moderate positive correlation between MMP-1 and several genes, exemplified by THBS1, confirming the involvement of the AGBL4-MMP-1 axis in GBM-related inflammatory pathways.
However, understanding the molecular pathogenesis of GBM remains a challenge. It is speculated that AGBL4 and MMP-1 may contribute to the occurrence, development, and spread of GBM, but the specific mechanism and interactions between AGBL4 and MMP-1 still require further investigation.
Conclusion
In summary, this study demonstrates that AGBL4 expression in GBM is upregulated and is linked with poor prognosis of GBM patients by enhancing tumor cell proliferation, migration, and invasion. Our findings reveal a novel mechanistic pathway whereby AGBL4 enhances GBM malignancy primarily through modulation of MMP-1 expression, which in turn influences the inflammatory response pathways within the tumor microenvironment (Figure 11). The identification of AGBL4 and MMP-1 not only deepens our understanding of the molecular dynamics of GBM but also highlights their involvement in inflammatory processes that may contribute to tumor aggressiveness, suggesting the potential of AGBL4 and MMP-1 as strategic targets for gene-directed therapy, as well as advocating for the development of targeted inhibitors against these proteins as a promising new direction for therapeutic intervention in glioma treatment.

FIGURE 11 Model for the mechanism of AGBL4 in GBM tumorigenesis.
FIGURE 1 AGBL4 was highly expressed in GBM and predicted poor prognosis. (A) t-SNE visualization of 19 distinct clusters identified from single-cell RNA sequencing of primary and recurrent GBM samples. (B) Expression of AGBL4 across the clusters revealed by t-SNE plot. (C) AGBL4 expression is significantly higher in recurrent GBM compared to primary GBM samples. (D) Survival curves of GBM patients with low or high AGBL4 expression, obtained from the TCGA database, P=0.017. (E) WB analysis confirms elevated AGBL4 protein levels in GBM tissues compared to normal brain samples. (F) Quantification of qRT-PCR verifies the upregulation of AGBL4 in GBM relative to normal brain tissues, P=0.0176. (G, H) Representative images of immunohistochemical staining show (G) low and (H) high AGBL4 expression in GBM tissues. Scale bars: 100 µm (4X), 25 µm (40X). (I) Kaplan-Meier analysis demonstrates that GBM patients with high AGBL4 staining have significantly shorter survival times compared to those with low AGBL4 expression, P=0.0170.
FIGURE 2 Relative expression levels of AGBL4 in GBM cells. (A) qRT-PCR analysis shows varying expression levels of AGBL4 across different GBM cell lines, with U87-MG and A172 exhibiting higher expression compared to T98G and U251-MG. (B, C) WB analysis confirms the protein expression patterns of AGBL4 in (B) U87-MG, U251-MG, T98G, and A172 cell lines and (C) after AGBL4 knockdown in A172 and U87-MG cells, and overexpression in T98G and U251-MG cells. (D-G) Quantification of qRT-PCR demonstrates successful AGBL4-KD in (D) U87-MG and (E) A172 cells, and successful AGBL4-OE in (F) U251-MG and (G) T98G cells.
FIGURE 5 AGBL4 knockdown significantly reduces a range of classic factors associated with cancer-related pathways. (A) Heatmap showing differential gene expression between NC and AGBL4-KD in A172 cells. Each column represents a different sample, and each row represents a gene. Red indicates upregulated genes, blue indicates downregulated genes, and color intensity correlates with expression level. (B) GO annotation analysis of DEGs in A172 cells comparing NC and KD. The bar chart categorizes GO terms by biological processes, cellular components, and molecular functions, with the number of associated genes indicated. (C) KEGG analysis of DEGs in A172 cells after AGBL4-KD. The bar chart categorizes pathway terms by organismal systems, human diseases, and environmental information processing, with the number of genes involved in each pathway.
FIGURE 9 Immune cell correlation and molecular interaction analysis in GBM. (A) Scatter plots illustrating the correlation between MMP-1, FOS, and FOSB gene expression and the infiltration levels of immune cells in GBM, adjusted for tumor purity, P < 0.05. (B) PPI network of MMP-1 with associated genes from the Inflammatory Response annotation cluster (GO:0006954), consisting of 15 nodes and 87 edges with MMP-1 centrally positioned. (C) GO annotation analysis for the top 10 hub genes from the PPI network. (D) KEGG pathway enrichment analysis for the top 10 hub genes from the PPI network, and the relationship between genes and pathways.
FIGURE 10 Mutational landscape of inflammatory response genes and their impact on survival in GBM. (A) A mutational landscape displaying the frequency and types of genetic alterations in inflammatory response genes across 393 GBM samples from TCGA, with each row representing a gene and each column a sample. Alterations, including nonsense mutations, frameshift deletions, missense mutations, multi-hit events, and splice site alterations, are color-coded. The graph on the right indicates the percentage of samples with mutations in each gene, and the graph on the top shows the total number of mutations per sample. (B) Kaplan-Meier survival curves comparing overall survival between GBM patients with high and low expression of THBS1, P=0.046. (C) A box plot illustrating the differential expression of THBS1 between tumor (T) tissue samples from GBM patients in TCGA and normal (N) brain tissue samples from the GTEx database, with the red asterisk denoting statistically significantly higher expression in the tumor samples.
TABLE 1 Candidate genes for downstream targets of AGBL4.
Yang-Mills mass gap at large-N, non-commutative YM theory, topological quantum field theory and hyperfiniteness
We review a number of old and new concepts in quantum gauge theories, some of which are well established but not widely appreciated, some most recent. Such concepts involve non-commutative gauge theories and their relation to the large-N limit, loop equations and the change to the anti-selfdual variables also known as the Nicolai map, topological field theory (TFT) and its relation to localization and Morse-Smale-Floer homology, with an emphasis both on the mathematical aspects and the physical meaning. These concepts, assembled in a new way, enter a line of attack to the problem of the mass gap in large-N SU(N) YM, which is reviewed as well. In the large-N limit of pure SU(N) YM the ambient algebra of Wilson loops is known to be a type II_1 non-hyperfinite factor. Nevertheless, for the mass gap problem at the leading 1/N order, only the subalgebra of local gauge-invariant single-trace operators matters. The connected two-point correlators in this subalgebra must be an infinite sum of propagators of free massive fields, a vast simplification. It is an open problem, determined by the growth of the degeneracy of the spectrum, whether the aforementioned local subalgebra is in fact hyperfinite. For the mass-gap problem, in the search of a hyperfinite subalgebra containing the scalar sector of large-N YM, a major role is played by the existence of a TFT underlying the large-N limit of YM, with twisted boundary conditions on a torus or, what is the same by Morita duality, on a non-commutative torus.
In the large-N limit of pure SU(N) Yang-Mills, the ambient algebra of Wilson loops is known to be a type II_1 non-hyperfinite factor. Nevertheless, at the leading non-trivial 1/N order, because of the mass gap and confinement, the connected two-point correlation functions of local gauge-invariant operators are conjectured to be an infinite sum of propagators of free massive fields.
It is an open problem, most relevant to a complete solution of the glueball spectrum at large-N, whether or not the corresponding local algebra is hyperfinite. Yet, for the mass gap problem or for a partial solution of the glueball spectrum, one should consider hyperfinite subalgebras. We show that a hyperfinite sector is constructible by fluctuations around a trivial topological field theory underlying Yang-Mills at large N. The hyperfiniteness problem has been suggested by the author as one of several problems arising as a byproduct of the Simons Center workshop "Mathematical Foundations of Quantum Field Theory", Jan 16-20 (2012).
Topological quantum field theory in large-N YM and hyperfiniteness
It is known 1 that in a 4d quantum field theory with a finite number of fields, under mild assumptions on the existence of the KMS states for any temperature, the von Neumann algebra of the observables is algebraically isomorphic to the unique type III_1 hyperfinite factor.
We recall that a von Neumann algebra is hyperfinite if it is the weak limit of a sequence of matrix algebras.
The situation gets more involved in the large-N limit of any field theory that carries fields in the adjoint representation of SU(N), in particular in the large-N limit of pure SU(N) Yang-Mills (YM) [1].
In the YM case the large-N limit can be properly defined in terms of the von Neumann algebra generated by Wilson loops, Ψ(x, x; A), supported on a loop, L_xx, based at a point, x, and built by means of the YM connection, A_α. At leading large-N order the Wilson loops satisfy the Makeenko-Migdal loop equation [2,3]. We can combine the expectation value < ... > with the normalized matrix trace (1/N) Tr to define a new normalized trace TR = < (1/N) Tr(...) > [4]. Then the problem is to find an operator solution, A_α, of the Makeenko-Migdal equation uniformly for all loops, with values in a certain operator algebra with normalizable trace TR(1) = 1. Such a solution is called the master field [5]. Such an algebra is of type II_1 because of the existence of the normalizable trace, and it is explicitly known.
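In the standard schematic form, with the 't Hooft coupling g²N held fixed, the Wilson loop and the leading-order (factorized) Makeenko-Migdal equation read (conventions vary between references, so this is the commonly quoted form rather than necessarily the exact normalization of [2,3]):

\[
\Psi(x,x;A) \;=\; P\,\exp\Big(i\oint_{L_{xx}} A_\alpha\, dx_\alpha\Big),
\]
\[
\partial^x_\mu\,\frac{\delta}{\delta\sigma_{\mu\nu}(x)}\,
\Big\langle \tfrac{1}{N}\mathrm{Tr}\,\Psi(x,x;A)\Big\rangle
\;=\; g^2 N \oint_{L_{xx}} dy_\nu\, \delta^{(4)}(x-y)\,
\Big\langle \tfrac{1}{N}\mathrm{Tr}\,\Psi(x,y;A)\Big\rangle
\Big\langle \tfrac{1}{N}\mathrm{Tr}\,\Psi(y,x;A)\Big\rangle .
\]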
Indeed the ambient von Neumann algebra of the Makeenko-Migdal loop equation is the Cuntz algebra in its tracial representation with at least as many self-adjoint generators as the number of components of the gauge connection, i.e. 4 in 4d [6,7,8,9,10,11].
We recall that the tracial representation of the Cuntz algebra is defined in [9,10,11], and that the construction of the master field in terms of the Cuntz algebra [9,10,11] involves only four generators since, by a version of the Eguchi-Kawai reduction at large-N [13,14,15,16,17,18,19,20], translations can be absorbed by gauge transformations [21,22]. However, the finite number of generators is only seemingly a simplification [23]. In fact, by Voiculescu's work [24], the von Neumann algebra of the Cuntz algebra with more than one self-adjoint generator in its tracial representation is a type II_1 non-hyperfinite von Neumann algebra, algebraically isomorphic to a free group factor with the same number of generators, which is the main example of the elusive non-hyperfinite type II_1 factors. Therefore solving the Makeenko-Migdal equation is, to use just an euphemism, very difficult. We should add that the von Neumann algebra generated by the actual solution need not be non-hyperfinite (a string solution [25,26,27]?), but there is no field-theoretical reason why it should not be.
Nevertheless, connected two-point correlation functions of local single-trace gauge-invariant operators, O(x), of large-N YM or QCD are conjectured to be in a sense as simple as possible: at the leading non-trivial 1/N order they are a sum of an infinite number of propagators of free fields (for simplicity of notation we consider the scalar case only) [28,29,30], saturating the logarithms of perturbation theory [28,29,30]. Each term carries a multiplicative renormalization related to the one-loop anomalous dimension, γ, of the operator, O(x), whose RG-improved behavior is a fractional power of a logarithm, with β_0 the first coefficient of the beta function [28,29,30]. It is an interesting problem 2, related to the actual computability of the glueball spectrum, whether the corresponding local algebra is hyperfinite.
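Schematically, in the scalar case and in momentum space, the conjectured structure is (normalizations here follow a commonly quoted form and may differ from the exact conventions of [28,29,30]):

\[
\langle O(p)\,O(-p)\rangle_{\mathrm{conn}} \;=\; \sum_{n=1}^{\infty}\frac{Z_n^2}{p^2+m_n^2},
\qquad
Z_n^2 \;\sim\; \Big(\frac{1}{\beta_0\,\log(m_n^2/\Lambda^2)}\Big)^{2\gamma_0/\beta_0},
\]

with γ_0 the one-loop anomalous-dimension coefficient and β_0 the first coefficient of the beta function.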
Hyperfiniteness can be interpreted as a condition on the number of local degrees of freedom of the theory 3 . Indeed hyperfiniteness is only slightly weaker than the existence of the KMS states for any temperature.
For example the bosonic string does not satisfy the KMS condition for all temperatures because of the Hagedorn transition [32], i.e. the divergence of the partition function at a certain finite temperature. However, the Hagedorn transition has been related to the tachyon of the bosonic string (see for example [33,34]), in such a way that the glueball spectrum of large-N Y M may or may not arise by a hyperfinite local algebra even in the likely case that a stringy description (obviously non-tachyonic) does exist.
Yet, if we are interested only in the problem of the mass gap [35], we may restrict ourselves to correlators that involve only scalar states 4 and possibly to hyperfinite subalgebras [31,30].
Therefore we may wonder whether there exist special Wilson loops that generate "small" subalgebras.
Surprisingly the answer is affirmative. There exist special Wilson loops, which we call twistor Wilson loops for geometrical reasons explained below, defined in pure U(N) YM on non-commutative space-time, R² × R²_θ, with complex coordinates (z, z̄, û, ū̂), built by means of a modified non-Hermitian connection; they define the observables of a trivial topological theory underlying the large-N limit of YM [31]. It is known that non-commutative YM in the limit of infinite non-commutativity, θ → ∞, coincides with the large-N limit of commutative YM, in such a way that the non-commutative theory realizes the same master field as the commutative one [19,20,36,22]. The topological theory is trivial 5 because the generalized trace, TR, is exactly 1 for all the topological twistor Wilson loops for θ → ∞.
The trivial topological theory exists at all scales. Heuristically, in the same vein as the argument of the previous footnote, this may be related to the fact that at leading 1/N order the local algebra factorizes, i.e. it is in fact ultralocal: for every local single-trace operator, O_i(x_i), at leading 1/N order,

⟨∏_i O_i(x_i)⟩ = ∏_i ⟨O_i(x_i)⟩ + O(1/N²).

The triviality of twistor Wilson loops follows from the fact that in the large-θ limit they are gauge equivalent to ordinary Wilson loops supported on Lagrangian submanifolds of the twistor space of complexified space-time, with the parameter λ playing the role of the (Lagrangian) fiber of the twistor fibration [31]. In the language of local wedge algebras 6 these loops are supported on (the analytic continuation of) Lagrangian wedges. Remarkably, the support property implies triviality [31], because of the vanishing of the coefficients of the effective propagators on the support, and, via triviality, implies the following localization property of the trivial topological algebra of twistor Wilson loops in function space [37,31].

4 It is believed that the lowest glueball state is a scalar.
5 The existence of a possibly trivial topological theory in the infrared underlying every theory with a mass gap, in particular pure YM, has been advocated by Edward Witten in his talk at this workshop. Such a trivial topological theory underlying large-N YM in 4d had been constructed in [37,38,39,31,30].
6 See the pdf of Detlev Buchholz's talk at this conference.
There exists a change of variables in the YM functional integral such that a new loop equation [37,31] holds in the trivial topological sector (Eq. (1.14)). The change of variables involves a non-SUSY version [37,31] of the Nicolai map [40,41,42,43] (Eq. (1.15)) and the holomorphic gauge, B_z̄ = 0 [37,31]. The new holomorphic loop equation [37,31] can be regularized in a gauge-invariant way by analytic continuation to Minkowski space-time (Eq. (1.17)). Finally, deforming the standard Makeenko-Migdal loop with the shape of the symbol ∞ to a cusped loop with zero cusp angle at the non-trivial self-intersection, the localization property follows [37,31] (Eq. (1.18)). This equation expresses the fact that matrix elements of the equation of motion of Γ vanish when restricted to the sub-algebra of twistor Wilson loops. There is a homological interpretation of the localization, such that the holomorphic loop equation is localized by the addition to the loop of vanishing boundaries, i.e. backtracking arcs ending with cusps [37,31], in the dual way to the localization of cohomology classes, which involves deforming by coboundaries [44,45,46,47,48,49,55]. Thus the topological sector is localized at the critical points of the effective action, Γ. Γ contains the interesting information of the localization, since it is naturally defined on the physical wedge rather than on the topological one. Indeed, though the topological theory is trivial at large-N, the effective action, Γ, admits at subleading 1/N order non-trivial fluctuations around the critical points, supported on a "transverse" Lagrangian wedge, that is a physical wedge, i.e. a certain Lagrangian wedge defined through the analytic continuation, as operators, of the topological twistor Wilson loops to the physical twistor Wilson loops. The twistor Wilson loops supported on the physical wedge satisfy the same Minkowski loop equation, Eq. (1.17), but not the localization property, Eq. (1.18), since their v.e.v. is non-trivial. In particular their v.e.v. does not stay uniformly bounded away from zero and from infinity when deforming to a cusped loop, because of the cusp anomaly [50,51,52]. The algebra of observables of the topological theory can be realized explicitly as the closure of a dense set in function space that involves a lattice of surface operators supported on Lagrangian submanifolds [37,31]. Mathematically these are local systems associated to the topological theory, obtained by interpreting [31] the Nicolai map as hyper-Kahler reduction [53,54] on a lattice divisor. On the lattice hyper-Kahler quotient the physical fluctuations around the topological theory are locally abelian in function space, all the other non-abelian degrees of freedom being zero modes of the Jacobian of the Nicolai map associated to the moduli of the local system [31]. The theory becomes locally abelian [31] because of the automatic commutativity [56,57,58,59] of the triple, μ⁻_αβ(p), at each lattice point, p, of the "lattice field of residues", due to the singular nature of the Hitchin equations, Eq. (1.22) [60,61,62].
Therefore the physical fluctuations around the topological theory become computable in the large-N limit.
On the lattice hyper-Kahler quotient the critical points of the effective action can be found as fixed points for the action, on the Nicolai variables in function space, of the semigroup that rescales the fiber of the Lagrangian twistor fibration, because of the λ-independence of the v.e.v. of twistor Wilson loops. A large-N beta function of Novikov-Shifman-Vainshtein-Zakharov type [63] at the leading 1/N order follows [37,31]. At the next-to-leading 1/N order, the mass gap, the glueball spectrum, the anomalous dimensions [64,65] and the hyperfiniteness follow as well, for fluctuations supported on the transverse Lagrangian wedge [31,30].
Acknowledgments
We would like to thank Alexander Migdal and Alexander Polyakov for a long fascinating discussion on loop equations in Princeton. In particular, one question that arose in that discussion helped us to focus on the locally abelian nature of the lattice hyper-Kahler reduction, which is the mathematical justification of the computations in [31,30].
Occurrence of pathogenic microorganisms in dessert items collected from Dhaka city
Due to their delicious taste and ready availability, desserts are among the most popular foods in Dhaka city. The high amounts of carbohydrate and protein in dessert items make them more susceptible to the proliferation of microbial growth. The present study depicts a complete microbiological profile of some popular desserts such as sweet, pastry, ice cream, pudding, falooda, yogurt and custard available in different food shops in Dhaka city, Bangladesh. All the samples were found to be contaminated with heterotrophic bacteria as well as fungi within the range of 10 to 10 cfu/g. In the case of specific microflora, growth of Staphylococcus spp., Klebsiella spp. and Pseudomonas spp. was observed in most of the samples, indicating the poor quality of these products. Bioburdens of E. coli in sweet, pudding and yogurt were found in the range of 1.2×10 to 2.7×10 cfu/g. Salmonella spp., Shigella spp. and Vibrio spp. could not be isolated from any of the samples. The current study indicates that hygienic conditions should be maintained during preparation, packaging and retailing of dessert items in order to reduce the load of contamination in ready-to-eat foods, which will ensure the good health of consumers.
Despite their nutritional value, contaminated desserts may cause serious foodborne illness to consumers (4)(5)(6)(7). Many studies have revealed that contaminated milk and milk-based products are responsible for outbreaks of foodborne disease (8)(9)(10)(11)(12). Milk-based foods like desserts can serve as an ideal growth medium for bacteria. Growth of pathogenic bacteria in these foods is detrimental to the health of children and immunocompromised persons when they consume such products in sufficient amounts. Faultily pasteurized milk and contaminated raw materials are possible routes by which pathogens can enter the final product (13).
In the case of pastry or cake, raw eggs are a major source of contamination with pathogenic bacteria like Salmonella spp. (14)(15). Some studies report that outbreaks of salmonellosis are associated with lightly cooked desserts made with raw egg (14)(15)(16). Similarly, other pathogenic microbes can gain entry from different ingredients such as custard powder, cream or sauces, nuts, etc. (17). High moisture content, neutral pH and a rich supply of nutrients make desserts an excellent growth medium for many kinds of microorganisms (18). However, the microbial loads of food products are influenced by a number of factors, such as the storage conditions, raw materials used, processing environment, sanitary conditions, unhygienic handling, packaging and storage (19).
Yeasts and molds are the most common contaminants, along with pathogenic bacteria like Escherichia coli, Staphylococcus aureus, Bacillus cereus, Campylobacter jejuni, Salmonella species, Listeria monocytogenes and some other pathogens (20). The presence of coliforms and S. aureus in desserts may be due to defective pasteurization, contaminated water or poor sanitary practices of handlers (1)(2). L. monocytogenes, another common cause of foodborne illness, may be associated with the consumption of pasteurized milk, cheese made from unpasteurized milk and other dairy-based products (21)(22).
Nearly 30 million foodborne illnesses are encountered every year in Bangladesh (32). To protect public health, safe food should be ensured for all consumers (17,23). The aim of this research was to investigate the presence of microorganisms in common desserts available in Dhaka city.
MATERIALS AND METHODS
Study area, sampling and sample processing. A total of sixteen desserts of different categories (sweet, pastry, ice cream, pudding, falooda, yogurt and custard) from two different locations (Baily Road and Mouchak) of Dhaka city were randomly collected following a standard protocol (24). All samples were quickly transported to the laboratory. Prior to microbiological assay, 10 g of each sample was homogenized with 90 ml of normal saline (i.e., a 1:10 dilution) and serially diluted to 10^-5 (4)(5)(6)(7)(25).
Microbiological analysis of each sample. A volume of 0.1 ml of each sample suspension was spread onto nutrient agar (NA) and incubated at 37°C for 24 h for the enumeration of total viable bacteria (TVB). Sabouraud dextrose agar (SDA) (HiMedia Laboratories, Mumbai, India) was inoculated in a similar manner, followed by incubation at 25°C for 48 h, for the isolation of fungi (25).
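A minimal Python sketch of the plate-count arithmetic implied by this protocol follows; the colony count and dilution in the example are hypothetical, not values from this study.

def cfu_per_gram(colonies, dilution_exponent, plated_volume_ml=0.1):
    # Colonies counted on one plate, scaled by the plated volume and the
    # total tenfold dilution; dilution_exponent should count every tenfold
    # step, including the initial 10 g in 90 ml homogenization (a 10^-1 step).
    dilution_factor = 10 ** dilution_exponent   # e.g. 5 for the 10^-5 tube
    return colonies / plated_volume_ml * dilution_factor

# Hypothetical example: 42 colonies from 0.1 ml of the 10^-3 dilution
print(cfu_per_gram(42, 3))   # 4.2e5, i.e. a load on the order of 10^5 cfu/g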
Isolation of pathogenic microorganisms. For the isolation of coliform
RESULTS
Different studies on food products state that processed foods constitute an important portion of energy intake, but contaminated food is a serious threat to public health (26)(27)(28). Therefore, before declaring a food safe to consume, it should be compared with food standard guidelines. For ready-to-eat foods, the maximum permissible limit for the total plate count (TPC) is <10^5 cfu/g, for yeasts and molds <10^4 cfu/g, for coliform bacteria <200/g, and E. coli should be absent (29)(30)(31).
Many previously published studies on food samples report that ready-to-eat foods in Bangladesh were contaminated with different microorganisms, among which E. coli was predominant in most cases (35). The present study found that all samples were contaminated with heterotrophic bacteria within the range of 10^4 to 10^6 cfu/g (Table 1). Comparing this result with the food standard, three out of 16 samples exceeded the microbial limit and only six samples were found to be satisfactory (below 10^5 cfu/g). On the other hand, the fungal load was within the range of 10^2 to 10^4 cfu/g, which is under the recommended limit for fungi. Heavy bacterial growth in desserts is a matter of concern, as consumption of these foods may lead to disease on many occasions.
Regarding specific microflora, growth of Staphylococcus spp. was most predominant, but E. coli, Klebsiella spp., Bacillus spp. and Pseudomonas spp. were also observed in many dessert samples. Staphylococcus spp. growth was found in ten samples, with loads up to 10^3 cfu/g. Unhygienic handling is a possible cause of this contamination (1-3). Out of sixteen samples, five desserts, mainly pudding and sweets, were found to be contaminated with E. coli, which is alarming as it indicates the possible presence of other waterborne pathogens. Six samples were found to be contaminated with Klebsiella spp. The use of contaminated water during preparation and washing is a possible route of transmission of these pathogens (33). Pseudomonas spp. is found almost everywhere in the environment, but its presence in food is not acceptable (34). Six dessert samples were contaminated with Pseudomonas spp. (up to 10^2 cfu/g). Spore-forming bacteria such as Bacillus spp. were also encountered in three samples, with loads of 10^2 cfu/g. Large numbers of microbes in ready-to-eat food are not acceptable, as such foods are consumed without any further processing; the current situation may be due to a lack of knowledge about hygiene, the use of contaminated equipment and dirty processing areas. In the present study, commercially available foods (sweet, pastry, ice cream) contained lower microbial loads than handmade foods (sweet, pudding, falooda, yogurt and custard), indicating unhygienic handling or poor environmental conditions.
CONCLUSION
Cross-contamination of foods is one of the major concerns in the food industry; if microorganisms are not completely removed from food-contact surfaces, they may form biofilms and increase the bio-transfer potential. This study demonstrates the presence of pathogens, including Staphylococcus spp. and E. coli, in dessert items. Therefore, from a public health point of view, these foods pose a serious threat to consumers. The presence of contaminating microorganisms indicates poor hygienic conditions during the manufacture, storage and sale of these traditional foods. Manufacturing procedures within the scope of HACCP, appropriate hygienic measures to avoid processing and post-processing cross-contamination, and the use of properly pasteurized milk are critical for controlling these pathogens in dessert items.
Diffusion Spectrum of Polymer Melt Measured by Varying Magnetic Field Gradient Pulse Width in PGSE NMR
The translational motion of polymers is a complex process and has a big impact on polymer structure and chemical reactivity. The process can be described by the segment velocity autocorrelation function or its diffusion spectrum, which exhibit several characteristic features depending on the observational time scale—from the Brownian delta function on a large time scale, to complex details in a very short range. Several stepwise, more-complex models of translational dynamics thus exist—from the Rouse regime over reptation motion to a combination of reptation and tube-Rouse motion. Accordingly, different methods of measurement are applicable, from neutron scattering for very short times to optical methods for very long times. In the intermediate regime, nuclear magnetic resonance (NMR) is applicable—for microseconds, relaxometry, and for milliseconds, diffusometry. We used a variation of the established diffusometric method of pulsed gradient spin-echo NMR to measure the diffusion spectrum of a linear polyethylene melt by varying the gradient pulse width. We were able to determine the characteristic relaxation time of the first mode of the tube-Rouse motion. This result is a deviation from a Rouse model of polymer chain displacement at the crossover from a square-root to linear time dependence, indicating a new long-term diffusion regime in which the dynamics of the tube are also described by the Rouse model.
Introduction
Molten polymers are macromolecular systems with complex translational dynamics of entangled chains and their segments, characterized by a large span of spatial and temporal scales. These dynamics are an important factor in the functionality and reactivity of the molecules. High-density entanglements, chain bonds and cross-links prevent the formulation of an explicit theory of translational dynamics, even on larger intra-molecular length scales. In general, translation is described in a 6N phase space (N is the number of Kuhn segments of the chain) and studied by computer simulations [1,2]. Simpler models with fewer parameters can be set up. The simplest one-parameter model describing the chain motion is center-of-mass Brownian self-diffusion. In this model, the self-diffusion coefficient relates the chain center-of-mass mean square displacement (MSD) to the diffusion time: ⟨r²⟩ = 6D_c t. Compared to the coefficients of simple liquids, this coefficient is several orders of magnitude smaller. This approach considers a polymer melt as a simple liquid and is suitable only on a long time scale. Measurements of diffusion on a shorter time scale show anomalous diffusion [3][4][5], indicating translation more complex than that described by the Brownian model. In this case, the self-diffusion coefficient is not a constant but depends on the diffusion time, and can suitably be described by its diffusion spectrum. The diffusion spectrum is the Fourier transformation of the chain segment velocity autocorrelation function.
To describe anomalous diffusion on a shorter time scale, the Rouse model [6] is used. In the Rouse model, the polymer chain is approximated by a string of Kuhn segments, connected by bonds modeled as springs, diffusing in viscous surroundings described by an effective friction drag ζ. No topological effects of the surrounding chains are considered. The MSD of a segment along a chosen axis is expressed as a sum of modes (Eq. (1)) [7]. Here, N is the number of Kuhn segments of length a, D_c = k_B T/(Nζ), k_B is the Boltzmann constant, X_p² = Na²/(2π²p²) is the squared amplitude of displacement of the p-th mode, and τ_R = 2X_1²/(3D_c) is the Rouse relaxation time of the chain. This model can be further simplified for the case of a long chain [7] (Eq. (2)). Intermolecular entanglements in a dense polymer prevent the lateral motion of a chain and localize it inside a curved tube. The Rouse model can be used to model a shorter part of the chain, with N_e Kuhn segments, between adjacent entanglements in the short-time limit. In the intermediate time regime, segments reach the tube walls, and their motion is restrained: the polymer chain can only move along the tube in a reptation process [8,9]. As the polymer chain is released from the tube, the correlation with the initial conformation is lost. This progression of translational mechanisms causes successively different time dependencies of the MSD: proportional to t^{1/2} at short times, to t^{1/4} and back to t^{1/2} in the intermediate reptation regime, and to t for chain disengagement [10]. Treating the collective motion of chains as a single chain moving in a fixed tube is a simplification. In real polymers, adjacent chains also move, causing constraint release [11,12], and the tube itself behaves as a coarser-grained Rouse chain [8,13], with a relaxation time proportional to the lifetime of the obstacles. The combined model of chain reptation inside a tube that itself exhibits Rouse motion predicts a segmental MSD starting as t^{1/2} and evolving to a linear t dependence. According to [11], the longest relaxation time τ of the tube-Rouse motion in a mono-dispersed polymer melt is equal to the terminal time of the chain's reptation in the tube, which amounts to almost equal contributions of both processes to the MSD t^{1/2} dependence.
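As an illustration, a minimal numerical sketch of this mode sum in Python follows. Since Eq. (1) is not reproduced here, the cos² weighting and the prefactor 4 follow the usual Doi-Edwards convention and are assumptions, as are the parameter values.

import numpy as np

def rouse_msd_1d(t, N=120, a=1.0, Dc=1.0, n_seg=None):
    # One-axis MSD of segment n_seg in the Rouse model as a mode sum, using
    # X_p^2 = N a^2 / (2 pi^2 p^2) and mode relaxation times tau_R / p^2 with
    # tau_R = 2 X_1^2 / (3 Dc), as defined in the text.
    if n_seg is None:
        n_seg = N // 2                                # middle segment
    X1_sq = N * a**2 / (2 * np.pi**2)
    tau_R = 2 * X1_sq / (3 * Dc)
    p = np.arange(1, N)
    Xp_sq = N * a**2 / (2 * np.pi**2 * p**2)
    modes = 4 * Xp_sq * np.cos(p * np.pi * n_seg / N) ** 2   # assumed weighting
    t = np.atleast_1d(np.asarray(t, dtype=float))
    decay = 1.0 - np.exp(-np.outer(t, p**2) / tau_R)
    return 2 * Dc * t + decay @ modes

# Short times show the anomalous sub-linear growth, long times the linear regime:
for t in (0.01, 0.1, 1.0, 10.0):
    print(t, rouse_msd_1d(t)[0])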
An alternative to describing the MSD in the time domain is to transform the translational dynamics to the frequency domain, where they are described by the power spectrum D(ω) = FT(⟨v(0)·v(t)⟩), where FT is the Fourier transformation. As the MSD exhibits different translational modes expressed through changes in the power of the MSD time progression, so does D(ω). It starts as a constant D_c at low frequencies and increases as ω^{1/2} and ω^{3/4} in the tube/reptation regime. It passes into the Rouse regime at the frequency 2π/τ_R and exhibits ω^{1/2} dependence again until it levels off at high frequencies [14,15]. The tube/reptation model replaces the many-chain problem by a single chain moving in a tube of topological constraints exerted by the surrounding chains. This model oversimplifies the actual dynamics, because the surrounding chains are not static but are moving as well. This motion is responsible for the constraint release and relaxation of the tube [11,12]. The tube relaxation time for constraint release, or tube reorganization, is determined by the obstacle lifetime [8]. Various models account for the impermanence of entanglements [16][17][18]. The theory of constraint release involves tube dilation and tube-Rouse motion [13]. In this model, the constraint release is considered as a Rouse motion of the tube with coarser segments and slower relaxation than the Rouse motion of the chain. The chain relaxation in polymer melts results from two independent and concurrent processes: reptation inside the tube and tube-Rouse motion as tube reorganization. The diffusion spectrum at low frequencies starts from a constant for both processes and changes to ω^{1/2} at the inverse of the longest relaxation time of each process. In the case of a mono-dispersed polymer melt, tube reorganization is slower than reptation and must have a small effect on the diffusion properties [16,17]; nevertheless, tube reorganization significantly affects the viscoelastic properties of the polymer melt, presumably because of the difference between the spectrum of the tube-Rouse modes and the spectrum of reptation [11].
The structural and dynamical properties of polymers predicted by the models are in qualitative agreement with experimental data resulting from different methods, some of which are not limited to a macroscopic, rheological scale and offer insight into the chain dynamics on the segmental scale [12]. In a polymer melt, the chains exhibit a complex hierarchy of dynamic processes. Very fast and local conformational rearrangements on the picosecond scale can be measured by neutron scattering [19]. Slow, diffusive and cooperative motion extending into the range of seconds can be observed by methods of nuclear magnetic resonance (NMR), optical methods or viscosity measurements in rheology [20]. NMR is sensitive to polymer dynamics on a wide range of time scales; for example, the diffusion coefficient can be measured in the interval from milliseconds to seconds [3], either indirectly with NMR relaxometry [10,[21][22][23] or directly by measuring the effect of spin-bearing-molecule displacement on the gradient spin-echo (GSE) attenuation in the applied magnetic field gradient [3,10].
The chain translation dynamics influence NMR relaxation because the dipolar coupling between adjacent spins depends on their mutual orientation. The orientational fluctuations mirror the segmental dynamics through the magnetic dipole-dipole correlation function [24,25]. The correlation function includes intramolecular and intermolecular contributions. Intramolecular interactions fluctuate due to molecular rotation. Intermolecular couplings also depend on the relative translational motion of the chains. The presence of internal field gradients (conditioned by voids in polymer melts) in high-molecular-mass polymers has been suggested in [10] based on an accelerated transverse relaxation rate obtained from free induction decay, which is effectively reduced by the application of a Carr-Purcell-Meiboom-Gill pulse sequence. Other phenomena can lead to accelerated relaxation, e.g., dipole-dipole interactions not averaged by molecular motion arising due to the anisotropy of the motion of the chain segments, which is typical for entangled polymer chains. Reorientational and translational dynamics must be discerned in order to study polymer dynamics by NMR relaxometry. This is achieved by different techniques, e.g., by the isotope dilution technique in field-cycling and transverse NMR relaxometry [10,26], by combining NMR relaxometry and dielectric spectroscopy [27,28], or by double-quantum NMR experiments [22,23,29]. Different models and approximations of polymer dynamics have been discussed in the context of different spin relaxation studies, failing to provide an exact form of the correlation function; however, these experiments generally confirm the scaling laws of the reptation model [25].
GSE methods can be roughly divided into two classes, modulated and pulsed GSE. Modulated GSE (MGSE) uses an applied magnetic field gradient modulated so as to harmonically change spin dephasing, thus measuring the diffusion spectrum at the modulation frequency [30]. Pulsed GSE (PGSE) employs two short gradient pulses separated by a defined time interval. In the limit of short pulses, this time interval can be considered as the diffusion time. The first applied gradient pulse defocuses spins, encoding their position in their phase; the second, decoding pulse refocuses all stationary spins in the spin echo. The moving spins do not refocus completely, causing the attenuation of the echo, which thus becomes sensitive to translational motion. If the time between the pulses, the diffusion time, is longer than the terminal relaxation time, defined as the asymptotic viscous decay of the polymer in rheology, the PGSE method can provide the polymer center-of-mass diffusion coefficient in the polymer melt [4]. PGSE can also measure anomalous diffusion. The shortest diffusion time interval is limited by the strongest applicable gradients, and the longest diffusion time interval is limited by the decoherence of spins (transverse relaxation). This puts the limit of the segment displacement that can be detected by GSE NMR somewhere in the range of several hundred nanometers, assuming the self-diffusion coefficient of the high-molecular-weight polymer melt is on the order of 10^{-15} to 10^{-12} m² s⁻¹. Polymer chain reptation displacements are smaller than 100 nm and are not detectable with a conventional PGSE experiment. Conflicting reports on the self-diffusion N scaling power follow from poorly determining the center-of-mass diffusion coefficient without considering the crossover to the anomalous diffusion regime at the same time [3].
Internal gradients, caused by susceptibility mismatches or paramagnetic centers, can cause artifacts leading to the overestimation of the self-diffusion coefficient. There are common NMR diffusion techniques that can be used to reduce artifacts caused by internal gradients, such as bipolar gradients [31]. However, when the background gradients are spatially non-uniform, molecular diffusion introduces a temporal modulation of the background gradients, defeating the simple bipolar-gradient suppression of background gradients in diffusion-related measurements. Several other methods have thus been proposed to minimize the effect of the internal gradient [32][33][34][35], among which is also the method presented in the paper [14], where the data from the PGSE measurements are explained by a crossover to the anomalous diffusion regime in polymer melts with the addition of the internal gradient effect. In certain cases, the effect of internal gradients can provide valuable information on the dynamics, topology or composition of the material studied [36]. Measurements of molten polydisperse polymers provide a diffusion coefficient that scales as N^{-2} for polymers with numbers of Kuhn segments larger than the entanglement number and as N^{-1} for those below [4]. However, subsequent PGSE measurements of very mono-dispersed molten polymers [37,38] do not confirm this result but provide a scaling power larger than 2 for the total range of polymer lengths, without any crossover to power 1 for short chains. These conflicting data could result from a mis-defined crossover to the anomalous diffusion regime, as shown in [30]. There are also reports that the crossover is mis-defined because a strong internal susceptibility magnetic field at the interstices of voids in a polymer melt spoils the measurement [12,15]. Internal gradients are, aside from paramagnetic centers, caused by voids in the melt. Voids in polymer melts are statistically varying formations, which can be characterized by their sizes and mean lifetimes. For example, in polybutadiene these voids are adjacent to the reptating chain segments and characterized by a diameter of ~0.5 nm. However, such voids can affect the diffusion NMR experiment only if the diameters of the voids and their mean lifetimes are at least of the order of magnitude of the covered diffusion paths and the diffusion times, respectively [39]. A method that also avoids the effects of the internal gradient is the MGSE method. Its results for self-diffusion measurements of mono-dispersed molten polymers [37,38] provide scalings over the total range of polymer lengths and a transition into the entanglement regime at a number of Kuhn steps below theoretical predictions. A test of the tube/reptation model by measuring the diffusion of nanoscopic strands of linear, mono-disperse poly(ethylene oxide) embedded in artificial cross-linked methacrylate matrices is described in [40]. PGSE studies of polymer dynamics are well described by the Rouse model in the case of dilute and semi-dilute polymers [41][42][43][44]. However, the PGSE measurements of diffusion in dense polymers do not clearly support the tube/reptation model [5,10,37,45]. The MGSE method, which measures the velocity autocorrelation spectrum, shows that in a polymer melt the tube-Rouse motion has a prevailing role at long diffusion times, and this indicates faster tube reorganization than expected [30].
NMR measurements in a magnetic gradient field are sensitive to the MSD ⟨Δz²⟩ in the direction of the applied magnetic field gradient G = ∇|B|, here taken to be along the z axis. The attenuation of the spin echo is given by Eq. (3), where S is the spin-echo amplitude, S_0 is the amplitude of the echo without the applied gradient (in the limit G → 0) and |q(ω)|² is the sampling function tailored by the gradient-radiofrequency pulse sequence. The gradient sampling function for the Hahn-echo PGSE sequence is given by Eq. (4) [34]. Here, G is the strength of the gradient, γ is the gyromagnetic ratio, δ is the width of the gradient pulse and Δ is the time interval between the leading edges of the two gradient pulses. The gradient sampling function of Eq. (4) is shown superimposed on the diffusion spectrum given by Eq. (5) in Figure 1.
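As a rough numerical sketch (not taken from the paper), the sampling function |q(ω)|² of the Hahn-echo PGSE sequence can be obtained by Fourier-transforming the dephasing q(t) built from the effective gradient waveform; the time step and the overall normalization convention below are assumptions for illustration.

import numpy as np

gamma = 2.675e8                              # 1H gyromagnetic ratio, rad s^-1 T^-1
G, delta, Delta = 4.38, 5e-3, 80e-3          # T/m, s, s: within the paper's ranges

dt = 1e-5
t = np.arange(0.0, Delta + delta, dt)
# Effective gradient waveform: the pi pulse inverts the phase accrued under the
# first pulse, so the two pulses act with opposite effective polarity.
g_eff = np.where(t < delta, G, 0.0) - np.where((t >= Delta) & (t < Delta + delta), G, 0.0)
q = gamma * np.cumsum(g_eff) * dt            # spin dephasing q(t)

freqs = np.fft.rfftfreq(t.size, dt)          # frequency axis, Hz
q_spec = np.abs(np.fft.rfft(q) * dt) ** 2    # |q(omega)|^2, up to the chosen convention
print(freqs[:3], q_spec[:3])                 # low-frequency weighting dominates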
For the diffusion spectrum model given in Equation (5), the spin-echo attenuation Equation (3) becomes: The standard evaluation of the diffusion data measured with the PGSE is calculating the This paper presents a study of anomalous self-diffusion in a linear polyethylene melt by the PGSE method. A special short diffusion time sensitivity is achieved by the variation of the gradient pulse width δ, contrary to usual measurements of anomalous diffusion with variable inter-pulse separation ∆. By changing only δ, artifacts induced by internal gradients can also be reduced as described by Equation (A2) in [14]. Measurements with PGSE are more effective for long diffusion times or, conversely, the low-frequency part of the diffusion spectrum. A problem arises if we want to measure the diffusion spectrum at high frequencies, as a short ∆ together with strong magnetic gradients must be used to achieve the desired attenuation of the spin echo. This is experimentally hard to implement. Additional attenuation caused by the background or internal magnetic field gradient and the effect of transverse relaxation must also be accounted for when the inter-pulse separation is changed. Here, we set out to verify the results of the measurements of a polymer melt diffusion spectrum with the MGSE method reported in [15] by an alternative method of PGSE. The results in [15] show that the observed dynamics in the low-frequency range belong to tube-Rouse motion [13] and can be described by the formula where D c is the center-of-mass diffusion coefficient, D s is the diffusion rate of the tube segments and τ is the tube-Rouse time, corresponding to the characteristic time of the crossover. This spectrum is shown in Figure 1 and overlaid with the gradient sampling function of the PGSE sequence.
For the diffusion spectrum model given in Equation (5), the spin-echo attenuation of Equation (3) becomes Equation (6). The standard evaluation of diffusion data measured with PGSE is to calculate the effective diffusion coefficient D_e, defined by Eq. (7), where the b factor is given by b = γ²G²δ²(Δ − δ/3). The effective diffusion coefficient is a constant for all possible parameters in the case of Brownian diffusion, but in the case of anomalous diffusion it is interpreted as a time-dependent diffusion coefficient, in our case given by Eq. (8).
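A minimal sketch of this evaluation step, assuming the standard Stejskal-Tanner form of the b factor quoted above; the attenuation value in the example is hypothetical.

import numpy as np

gamma = 2.675e8                      # 1H gyromagnetic ratio, rad s^-1 T^-1

def b_factor(G, delta, Delta):
    # Stejskal-Tanner b factor: b = gamma^2 G^2 delta^2 (Delta - delta/3)
    return gamma**2 * G**2 * delta**2 * (Delta - delta / 3.0)

def effective_D(S, S0, G, delta, Delta):
    # Eq. (7): D_e = -ln(S/S0) / b
    return -np.log(S / S0) / b_factor(G, delta, Delta)

# Hypothetical data point: 20% echo attenuation at G = 4.38 T/m,
# delta = 5 ms, Delta = 80 ms (the timings and strongest gradient
# quoted in Materials and Methods).
print(effective_D(0.8, 1.0, 4.38, 5e-3, 80e-3))   # ~8e-14 m^2/s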
Results and Discussion
Polyethylene melt diffusion was measured by the PGSE method. In the experiment, the spin-echo amplitude was recorded while varying the gradient pulse width δ at several different strengths of the applied gradient pulse. Figure 2a shows the spin-echo amplitude and Figure 2b the derived effective self-diffusion coefficient as defined in Equation (7), both as functions of the gradient pulse width δ and for all applied gradient strengths. The effective diffusion coefficient in Figure 2b clearly shows signs of anomalous diffusion.
The model describing the data is given by Equation (8); a simplification can be made, since it is reasonable to assume from previous measurements [30] that Δ ≫ τ, to obtain the simpler model of Equation (9). Both models return the same fitting parameters for the measured data. A least-squares non-linear fit of the model to the data gives the parameters presented in Table 1. The only value estimated with high certainty is the chain diffusion coefficient D_c. The tube segment diffusion coefficient and the relaxation time appear in the model (Equation (9)) together as τ²D_s to first order in τ/δ, and any change in one can be compensated by a corresponding change in the other without significantly altering the fit. Thus, the δ used in the measurements should be correspondingly short, or at least one of the parameters should be determined separately. The results for the chain diffusion coefficient match, within error, the results in [14] and [46]. The result for the relaxation time matches the tube displacement per obstacle lifetime L_eq/√τ_ob if the number of Kuhn segments between entanglements N_e is 25 (compared to the 120 total segments per chain), since L_eq = (N/N_e)a and τ scales as N². The dashed line in Figure 2b represents the best fit of the D_e model to the data without the points measured at δ = 1 ms. The fitting parameters in this case differ significantly: D_c = 3.3 × 10⁻¹³ m²/s, D_s = 1.8 × 10⁻¹¹ m²/s and τ = 3.8 s. This demonstrates the sensitivity of the model to input data lacking measurements at short enough δ, which should be short enough to capture the increase in D_e at short δ. It also demonstrates that caution in using the approximation Δ ≫ τ for the model in Equation (9) is warranted, since τ was determined to be longer than Δ when the short-δ data points were excluded.
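A sketch of the fitting step follows, assuming a placeholder leading-order form for Eq. (9), which is not reproduced here; the data values and the model function De_model are hypothetical and must be replaced by the paper's expression for quantitative use. Note how D_s and τ enter only through the product τ²D_s, mirroring the parameter degeneracy discussed above.

import numpy as np
from scipy.optimize import curve_fit

def De_model(delta, Dc, Ds, tau):
    # Hypothetical leading-order form: Ds and tau enter only via tau^2 * Ds,
    # so they are not separately identifiable, as noted in the text.
    return Dc + Ds * tau**2 / delta**2

delta = np.array([1.0, 2.0, 4.0, 8.0, 15.0]) * 1e-3          # s
De_meas = np.array([9.0, 5.5, 4.2, 3.7, 3.5]) * 1e-13        # made-up data, m^2/s

popt, pcov = curve_fit(De_model, delta, De_meas, p0=(3e-13, 1e-13, 1e-3))
print(dict(zip(("Dc", "Ds", "tau"), popt)))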
We have shown here that the PGSE method enables the measurement of the segmental translation of polymeric chains by variation of the gradient pulse width. This approach can also effectively take into account the effect of the internal gradient, which commonly affects PGSE measurements, but it requires knowledge of the interplay between the molecular motion and the buildup of the spin phase structure during the magnetic field gradient action. By combining the PGSE sampling function and the segmental diffusion spectrum rendered from the model of tube-Rouse motion [13], we obtain the dependence of the PGSE signal attenuation on the gradient pulse width. The data obtained from measurements of molten polyethylene fit the predictions well and provide evidence for the tube-Rouse motion model proposed in [13]. The model, which was already confirmed for other polymer samples by the MGSE method [30], reveals the tube segmental motion in the range of milliseconds. The M_w and M_n of the sample polymer indicate a sharp distribution of fragment sizes; thus, the effects of polydispersity, which might cause a deviation from the model, can be neglected in our case.
The study presented here is a reevaluation of the study in [14]. The study of polymer diffusion by the MGSE method [30] shows slow dynamics that can be attributed to the reorganization of the polymer tube, with temporal and spectral resemblance to Rouse motion. This description matches the theory of tube-Rouse motion put forward in [13]. In [14], PGSE measurements were used to trace the crossover of the chain Rouse dynamics from √t to t dependence because of the constraint release. The constraint release was originally termed tube reorganization by Pierre-Gilles de Gennes, where the obstacle lifetime determines the tube relaxation times [8]. Various models account for the impermanence of entanglements [16][17][18], among which is also the theory of constraint release involving tube dilation and tube-Rouse motion [13]. In this theory, the constraint release is considered as the tube-Rouse motion, and the relaxation time is proportional to the lifetime of the obstacles [13]. In the previous paper [14], we followed a quite common approach of considering the polymer chain dynamics described by the Rouse model in the range where the dynamics cross from square-root to linear time dependence of the MSD [47], to explain the anomalous effective diffusion obtained from the PGSE measurements in polymers. In the original experiment, the internal gradient artifacts were not suppressed by any of the numerous methods, because the system was considered homogeneous enough and the measurements fitted well to the model used over the larger part of the measured interval. However, the results deviate from the model in the limit of short δ. In [14], it was proposed that the deviation was a result of internal gradients (caused by a susceptibility mismatch) adding to the effect of the externally applied gradient. An extra term based on internal gradients was added to the attenuation factor, resulting in a better fit. According to [39], this would require unrealistic conditions, and a search for a better explanation was fruitful, since the results are here satisfactorily described with the new tube-Rouse model and without recourse to the effect of the internal gradient. This is also in accordance with subsequent measurements with the MGSE method [30], which indicate that polymers in the millisecond time range exhibit new dynamics that are not related to the motion of the polymer chain inside the tube, but can be explained by the theory of polymer tube reorganization, where the tube behaves in a similar way to a chain; therefore, this motion can be called tube-Rouse motion [13]. In this paper, we show that the new interpretation fits the results of our PGSE measurements better. We show that the data can be well fitted to this model of tube dynamics, and this is a deviation from the previous results based on the long-chain approximation (Equations (4) and (6) in [14]).
Materials and Methods
We studied a sample of linear polyethylene Standard Reference Material 1482 with a narrow molecular weight distribution (M_n = 11,400 g mol⁻¹, M_w = 13,600 g mol⁻¹) prepared by NIST, Washington, DC, USA. Measurements were performed on the melted polyethylene sample at 426 K.
The measurements were performed on a home-made pulsed NMR spectrometer (Ljubljana, Slovenia) at a 60 MHz proton NMR frequency and equipped with a magnetic field gradient coil system described in [48]. The PGSE sequence is shown in Figure 3. The widths of the π/2 radiofrequency (RF) pulses used were 1.2 microseconds. The π RF pulse was applied symmetrically between the gradient pulses. The gradient pulse followed the RF pulse with a delay short enough to be neglected in the signal analysis. The same is true for the echo following the second gradient pulse; however, the echo followed the second gradient pulse with a delay large enough that no artifacts were introduced because of the finite gradient fall time. The PGSE attenuation dependence on the duration of the gradient pulses was measured by changing the pulse width δ from 1 to 15 ms, with the diffusion time (the interval between the gradient pulses) fixed at ∆ = 80 ms. The measurements were performed with the gradient fields 4.38, 3.04 and 1.34 T/m.
Hausdorff Dimension and Lebesgue Measure of Codiagonal of Embedded Vector Bundles over Submanifolds in Euclidean Space
In this paper we study the measure-theoretic size of the image of naturally embedded vector bundles in $\mathbb{R}^{n} \times \mathbb{R}^{n}$ under the codiagonal morphism, i.e. $\Delta_{*}$ in the category of finite dimensional $\mathbb{R}$-vector spaces. Under very weak smoothness conditions we show that the codiagonal of a normal bundle always contains an open subset of the ambient space, and we give corresponding criteria for tangent bundles. For any differentiable hypersurface we show that the codiagonal of its tangent bundle has non-empty interior, unless the hypersurface is contained in a hyperplane. Assuming further smoothness (e.g. twice differentiable) we show that the union of any family of hyperplanes that covers the hypersurface has the maximal possible Hausdorff dimension. We also define and study a notion of degeneracy of embedded $C^{1}$ vector bundles over a $C^{1}$ submanifold and show as a corollary that if the base manifold has at least one non-inflection point then the codiagonal of any $C^{1}$ line bundle over it has positive Lebesgue measure. Finally we show that the codiagonal of any line bundle over an $n$-dimensional ellipsoid or a convex curve has non-empty interior, and the same assertion also holds for any non-tangent line bundle over a hyperplane.
Introduction and preparation lemmas
In this paper all base fields are assumed to be the field of real numbers R, and all vector spaces R n := (R n , ·, · ) are assumed to be equipped with the standard Euclidean structure.
For any subset X ⊆ R^n we denote by i_X : X → R^n the canonical inclusion and by int X, X̄, ∂X its interior, closure and boundary (in the induced topology), respectively. We denote R_+ := {x ∈ R | x > 0}, and I will always be the open interval ]−π, π[. As usual, D^n := {x ∈ R^n | ‖x‖ ≤ 1} is the n-dimensional unit disk, S^{n−1} := {x ∈ R^n | ‖x‖ = 1} is the (n−1)-dimensional unit sphere, and B^n_r(a) := {x ∈ R^n | ‖x − a‖ < r} is the n-ball with center a ∈ R^n and radius r. For simplicity we write B^n_r := B^n_r(0). For any map f we denote by Γ_f its graph, and for any differentiable f we denote by J_x f its Jacobian at x. Linear functions and linear mappings on an open subset of a Euclidean space are defined to be the functions and mappings with constant differential.
Moreover, we denote by dim and dim_H the topological and Hausdorff dimension, respectively, and by L^n the n-dimensional Lebesgue measure.
1.1. Manifolds in Euclidean Space. In this paper, a topological d-submanifold (with boundary) M of R^n is a topological subspace of R^n s.t. M is a topological d-manifold (with boundary) in the induced topology. Moreover, a topological d-submanifold M is said to be a C^r submanifold iff its transition functions are of class C^r. All manifolds are assumed to be connected.
In this paper we also consider the following two notions of smoothness.
Definition 1.1.1. A topological submanifold M is said to be k-th differentiable iff for any p ∈ M, in the vicinity of p, M coincides with the graph of a k-th differentiable map f. For simplicity, once-differentiable submanifolds are abbreviated as differentiable submanifolds.
Definition 1.1.2. A C^1 submanifold is said to be Lipschitz-continuously differentiable iff for any p ∈ M, in the vicinity of p, M coincides with the graph of a C^1 map f with locally Lipschitz differential.
Since C^1 maps are locally Lipschitz, by definition any C^2 submanifold is Lipschitz-continuously differentiable.
As usual, a 1-submanifold is called a regular curve, a 2-submanifold is called a regular surface, and a submanifold of codimension 1 is called a hypersurface.
Let M be a k-submanifold of R^m and N an ℓ-submanifold of R^n; then the product manifold M × N := {(p, q) ∈ R^{n+m} | p ∈ M, q ∈ N} is a (k + ℓ)-submanifold of R^{n+m}. For any two maps f : A → C and g : B → D we define f × g : A × B → C × D by f × g(a, b) := (f(a), g(b)); then Γ_{f×g} = Γ_f × Γ_g. For example, if f, g are differentiable maps (from open subsets of Euclidean spaces to Euclidean spaces), then Γ_f, Γ_g are differentiable submanifolds and Γ_{f×g} is their product submanifold.
Remark 1.1.3. Notice that the implicit and inverse function theorems hold for differentiable maps (Theorems 2 and 3 in [2]), and differentiable maps with differentiable local inverses form a pseudogroup G and hence define a manifold structure. The differentiable submanifold in Definition 1.1.1 clearly has the local parametrization x ↦ (x, f(x)) and admits a differentiable chart, and hence a G-structure.
A similar assertion also holds for the Lipschitz-continuously differentiable submanifolds of Definition 1.1.2.
Substantial and Nonlinear Submanifolds.
Recall that a submanifold is said to be substantial iff it is not contained in any hyperplane of the ambient space.
An affine plane in R^3 is not substantial, and the twisted quartic X : t ↦ (t, t², t³, t⁴) is a well-known example of a substantial C^ω curve in R^4.
Notice that the curvature of X is non-vanishing. Another nontrivial example can be constructed as follows.
Recall that for any C^1 regular curve g : I → R^n the tangent developable Σ of g is parametrized by x(u, v) = g(u) + vg′(u), u ∈ I, v ∈ R, and g is a cuspidal edge of Σ.
We denote by Σ_+ the one-sided tangent developable of g, parametrized by x(u, v) = g(u) + vg′(u), u ∈ I, v ∈ R_+. Proposition 1.2.1. Let r ≥ 1 and let g : I → R^n be a substantial C^{r+1} regular curve with non-vanishing curvature; then the one-sided tangent developable Σ_+ of g is a substantial C^r regular surface.
Moreover, x(u, v) = g(u) + vg′(u) is clearly C^r. It remains to prove that Σ_+ is substantial. Suppose that Σ_+ is not substantial; then there exists a hyperplane Π ⊆ R^n such that Σ_+ ⊆ Π. Since for all u ∈ I we have lim_{v→0+} x(u, v) = g(u), every p ∈ g is a limit point of Σ_+. Since Π is closed, we have g ⊆ Π, hence g is not substantial, a contradiction.
We now give a definition of a notion related to substantialness. It is clear by definition that a linear substantial submanifold is an open subset of the ambient space, and that for hypersurfaces nonlinearity is equivalent to substantialness.
In this paper nonlinearity is an important notion; we shall introduce several preparation lemmas about it.

We now introduce another type of restriction of the domain of nonlinearity, starting with several lemmas.
Recall that a monomial x^α ∈ R[x_1, ..., x_n] is said to be square-free iff ‖α‖_∞ ≤ 1, where α ∈ N^n is the multi-index. For simplicity we denote Ξ_n := span_R {x^α ∈ R[x_1, ..., x_n] | ‖α‖_∞ ≤ 1}, regarded as a space of polynomial functions defined on I^n. Lemma 1.3.7. Let f : I^n → R be a real function such that for each i = 1, ..., n there exist real functions a_i, b_i of the remaining n − 1 variables with f = a_i x_i + b_i; then f ∈ Ξ_n. Proof. Apply induction on n.
The statement holds trivially for n = 1. Assume that the statement holds for n = k, and take any f : I^{k+1} → R as in the statement; applying the inductive hypothesis to the coefficient functions of the last variable shows that f ∈ Ξ_{k+1}. Therefore the statement holds for n = k + 1.
Suppose that deg f ≥ 2; then, restricting f to a suitably chosen line ℓ_Δ, the restriction is a polynomial in t of degree ≥ 2 and hence nonlinear, a contradiction. Therefore f is a polynomial with deg f ≤ 1, i.e. a linear function.
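A small symbolic check of this mechanism for n = 2 (the polynomial below is chosen arbitrarily for illustration): a function that is affine in each variable separately lies in Ξ_2 = span{1, x, y, xy}, yet need not be linear.

import sympy as sp

x, y = sp.symbols("x y")
f = 3 + 2*x - y + 5*x*y        # an element of Xi_2

# Affine in x for each fixed y, and affine in y for each fixed x:
assert sp.diff(f, x, 2) == 0 and sp.diff(f, y, 2) == 0
# Yet f is not linear on I^2: the square-free mixed term survives.
assert sp.diff(f, x, y) != 0
print("separately affine, but nonlinear as a function of (x, y)")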
Corollary 1.3.9. Let Ω be a connected open subset of R^n and f : Ω → R a nonlinear function; then, up to rigid motion and homothety, we have I^n ⊆ Ω and f_0(t) := f(0, ..., 0, t), t ∈ I, is a nonlinear function.
Proof. Apply Remark 1.3.4 and Lemma 1.3.8. Lemma 1.3.10. Let {f_n(x) = a_n x + b_n}_{n∈N} be a sequence of linear functions on I. If {f_n}_{n∈N} converges weakly, then there exist a, b ∈ R such that lim_{n→∞} a_n = a and lim_{n→∞} b_n = b. Proof. Since {f_n}_{n∈N} converges pointwise, there exist a, b ∈ R such that lim_{n→∞} a_n = a and lim_{n→∞} b_n = b. Corollary 1.3.11. Let Ω be a connected open subset of R^n and f : Ω → R a nonlinear continuous function. Then, up to rigid motion and homothety, we have I^n ⊆ Ω and there exists δ > 0 such that f_{x_n}(t) := f(x_1, ..., x_{n−1}, t), t ∈ I, is a nonlinear continuous function.
Proof. By Corollary 1.3.9, up to rigid motion and homothety we have I^n ⊆ Ω and f_0(t) := f(0, ..., 0, t), t ∈ I, is a nonlinear function. If the restrictions f_{x_n} formed a sequence of linear functions, then by Lemma 1.3.10 the limit f_0 would be linear, a contradiction; the claim follows. 1.4. The Osculating Spaces and Normal Spaces. We discuss several important notions related to submanifolds; these notions will be used frequently throughout this paper.
As usual, for a point on a differentiable (or C r ) submanifold the tangent space at this point is defined to be the image of the differential of the parametrization map at this point.
For example, let a differentiable d-submanifold M of R^n be the graph of a differentiable map f. Recall that the first-order approximation of f at x_0 is defined by f(x_0) + (J_{x_0}f)(x − x_0), i.e. the linear part of the Taylor expansion of f. The graph of the first-order approximation is p + T_pM (here + is the Minkowski sum), i.e. the affine d-subspace of R^n tangent to M at p. As usual, the normal space N_pM is defined to be the orthogonal complement of T_pM.
Recall that for a twice differentiable d-submanifold M of R^n, the second osculating space at p ∈ M is defined to be T²_pM := span_R {∂²/∂x_i∂x_j, ∂/∂x_k}, where x_1, ..., x_d are local coordinates in the vicinity of p. An elementary example of the second osculating space is the osculating plane of a C^∞ regular curve in R^3. Also recall that, if there exists p ∈ M such that dim T²_pM = n, then M is substantial. This provides a practical criterion for substantialness.
For a C^2 submanifold M, as usual we say that p ∈ M is an inflection point of M iff p + T_pM has high-order tangency with M at p, i.e. in the vicinity of p, M coincides with the graph of a C^2 map with a non-Morse singularity at p.
Embedded Vector Bundles.
For any integer 0 ≤ k ≤ n we denote by Gr_k(n) the Grassmannian of k-subspaces of R^n. We denote by τ_k(n) := {(p, u) ∈ Gr_k(n) × R^n | u ∈ p} the tautological bundle over Gr_k(n), where the first projection π_1 gives the bundle map and the second projection π_2 is the blow-up.
Let X be a topological subspace of R^n; then a continuous map ϕ : X → Gr_k(n) pulls back a k-vector bundle over X with a canonical embedding into R^n × R^n, via the following commutative diagram. ϕ*τ_k(n) is said to be the vector bundle defined by ϕ. If k = 1 then Gr_1(n) = P^{n−1} and ϕ*τ_1(n) is called a line bundle.
In this paper we consider only embedded vector bundles pulled back from tautological bundles; by abuse of notation we simply write vector bundles.
Let us recall several properties and examples of vector bundles. If the base space X = M is a topological d-submanifold of R^n then ϕ*τ_k(n) is a topological (d + k)-submanifold of R^{2n}. Moreover, if M is C^r and ϕ is C^r then ϕ*τ_k(n) is a C^r submanifold. An open cover of M that simultaneously serves as an atlas of M and trivializes ϕ*τ_k(n) exists and is said to be a trivializing atlas; its elements are called trivializing charts. Moreover, for any trivializing chart U ⊆ M we have a parametrization map f : Ω → R^n and a matrix-valued function A : Ω → R^{n×k} such that f(Ω) = U, Im A(x) = ϕ(f(x)), and F(x, u) := (f(x), A(x)u) is the trivialization over U.
Let f : Y → X be a continuous map between two topological subspaces of R^n and E := ϕ*τ_k(n) a k-vector bundle over X defined by a continuous map ϕ : X → Gr_k(n); then f pulls back a k-vector bundle f*E := (ϕ ∘ f)*τ_k(n) over Y. Notice that if M is C^r with positive r then TM is the tangent bundle defined by the C^{r−1} map ϕ : p ↦ T_pM. If M is differentiable but not C^1 then TM does not necessarily have a bundle structure. By abuse of notation, in this case we still call it the tangent bundle of M.
The case of the normal bundle NM := {(p, u) | u ∈ N_pM, p ∈ M} is analogous.
1.6. The Codiagonal Morphism. In the category of finite-dimensional R-vector spaces the codiagonal morphism ∆* is defined, via the following commutative diagram, as the dual of the diagonal ∆. As a map on Euclidean spaces, ∆* : R^n × R^n → R^n, (x, y) ↦ x + y, is linear and hence C^∞. For any subset E of R^n × R^n, ∆*E ⊆ R^n is its continuous image under ∆*. It possesses the following obvious property: (i) for any subsets A, B ⊆ R^n, ∆*(A × B) = A + B, the Minkowski sum. Let E be a vector bundle defined by ϕ : X → Gr_k(n); then ∆*E = ⋃_{x∈X}(x + ϕ(x)).
In particular, if M is a differentiable submanifold, then ∆*TM = ⋃_{p∈M}(p + T_pM) and ∆*NM = ⋃_{p∈M}(p + N_pM). Let E be a k-vector bundle over a topological subspace X ⊆ R^n; then ∆*E is the union of a family of k-affine subspaces of R^n covering X. For example, if X is a C^∞ regular curve in R^3 and E = TX is its tangent bundle, then ∆*E is the tangent developable of X. However, as we will show later in this paper, if the assumption on smoothness is weakened then the structure of ∆*E can be complicated. In geometric measure theory it is important to study these objects from an analytic point of view, since they are closely related to Kakeya's conjecture.
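A numerical sketch of this example (the helix and the parameter ranges are chosen purely for illustration): sampling ∆*Tg for a curve g amounts to evaluating the two-parameter family g(u) + v g′(u), i.e. the tangent developable.

import numpy as np

def g(u):  return np.array([np.cos(u), np.sin(u), u])
def dg(u): return np.array([-np.sin(u), np.cos(u), np.ones_like(u)])

u = np.linspace(-np.pi, np.pi, 200)   # parameter along the curve
v = np.linspace(-2.0, 2.0, 200)       # parameter along each tangent line
U, V = np.meshgrid(u, v)
surf = g(U) + V * dg(U)               # points of the tangent developable
print(surf.shape)                     # (3, 200, 200): a 2-parameter image in R^3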
For instance, Cumberbatch, Keleti, and Zhang [2] studied the Hausdorff dimension of the union of tangent lines of a plane curve and in particular, they constructed a convex curve whose union of one-sided tangents has small Hausdorff dimension, these results were generalized to high dimensions in [1]. R.O. Davies [10] proved that any subset A of the plane can be covered by a collection of affine lines such that the union of lines has the same Lebesgue measure as A, and in [3] Esa Järvenpää et al. studied the non-empty interior of continuously parametrized Besicovitch sets. Also, in [6], Kornélia Héra, Keleti Tamás and András Máthé estimated the Hausdorff dimension of union of family of affine spaces.
In this paper we mainly discuss the subcase where X = M is an (at least) differentiable d-submanifold of R^n and E is a k-vector bundle or E = TM, NM. Under this assumption, E can be regarded as a parametrized family of k-affine subspaces with d independent parameters; hence we expect ∆*E to have Hausdorff dimension k + d when k + d ≤ n, and to have positive n-dimensional Lebesgue measure when k + d > n. In this case we (informally) say that E attains its expected size. We shall study in this paper the conditions for normal and tangent bundles, and then for general k-vector bundles and line bundles, to attain the expected size. 1.7. Section Transform for Differentiable Mappings. We introduce tools for studying the tangent bundle of the graph of a differentiable map. Definition 1.7.1. For any differentiable map f : I^d → R^{n−d}, we define the lifted section map of f by Φ_f(x, a) := (a, f(x) + (J_x f)(a − x)), and for any a ∈ R^d define the section transform of f at a by φ_a f : I^d → R^{n−d}, φ_a f(x) := f(x) + (J_x f)(a − x). The importance of the lifted section map is suggested by the following lemma.
In order to compute the lifted section map, we will use several properties of the section transform.
Lemma 1.7.3. For any a ∈ R^d and differentiable maps f, g we have φ_a(f + g) = φ_a f + φ_a g, and for any affine map ℓ, φ_a ℓ ≡ ℓ(a). Before we close this section, we present a weak version of the implicit function theorem, proved by using Brouwer's fixed point theorem instead of the Banach fixed point theorem. The corollary of this theorem is powerful when applied to lifted section maps.
In the following proof, as usual, for any bounded operator A we denote by ‖A‖_op its operator norm and by A_L^{−1} its left inverse (if it exists).
Theorem 1.7.4. Let 0 ≤ n ≤ m be integers, Ω be an open subset of R^m and h : Ω → R^n be a continuous map, differentiable at some x_0 ∈ Ω with d_{x_0}h of full rank. Sketch of proof: for y close enough to h(x_0), Brouwer's fixed point theorem yields a fixed point x of the auxiliary map Ψ_y; then h(x) = y and hence Ψ_y(x) = x, and the conclusion follows from the estimate (*). Remark 1.7.5. In [7] this theorem is proved for m = n and Ω = R^n; the proof is essentially the same. Notice that the classical implicit function theorem also holds for Banach manifolds. However, since balls in infinite-dimensional spaces are not compact in the norm topology, Brouwer's fixed point theorem does not hold, and hence this weak form of the implicit function theorem cannot be generalized directly to general Banach or Hilbert spaces. Corollary 1.7.6. Let 0 ≤ n ≤ m be integers, Ω be an open subset of R^m and h : Ω → R^n be a continuous map. If there exists x_0 ∈ Ω such that h is differentiable at x_0 and d_{x_0}h has full rank, then int(h(Ω)) ≠ ∅.
1.8. Universal Measurability and Analyticity. Before discussing the Lebesgue measure and Hausdorff dimension of unions of tangents and normals, or of codiagonals of vector bundles over submanifolds, we shall prove their measurability. Using the section transforms we will show that these objects are analytic sets (a.k.a. Suslin sets) in R^n, and hence universally measurable (for basic properties of analytic sets see, e.g., [8]). First we point out that this property clearly holds for the codiagonal of a vector bundle over a submanifold. For completeness we give a proof. Proposition 1.8.1. Let X be a topological submanifold (with boundary) of R^n × R^n; then ∆*X is analytic and hence universally measurable.
Proof. By the Lindelöf property of second countable spaces, X admits a countable atlas {U_k}_{k∈N*}. Since for any k ∈ N*, U_k is homeomorphic to an open disk, U_k is analytic. Therefore X = ⋃_{k∈N*} U_k is an analytic set. Since ∆* is continuous, ∆*X is analytic and hence universally measurable.
Corollary 1.8.2. Let M be a topological submanifold (with boundary) of R^n and E a vector bundle over M; then ∆*E is analytic and hence universally measurable. Now we shall prove the same assertion for unions of tangents (and normals). Proof. Denote by e_1, ..., e_d the standard orthonormal basis of R^d ⊇ I^d and define the maps ρ_i accordingly; each ρ_i is continuous. Take any x_0 ∈ I^d, and let p be the approximation of the Jacobian defined in Lemma 1.8.3; then similarly (Ψ_f^{(k)})_{k∈N*} is a sequence of Borel functions with pointwise limit Ψ_f, and hence Ψ_f is Borel. Therefore Γ_{Ψ_f} is a Borel set, and Ψ_f(I^d × R^{n−d}) is an analytic set.
This proves that ∆*NΓ_f = Ψ_f(I^d × R^{n−d}) is analytic and hence universally measurable.
Proposition 1.8.6. Let M be a differentiable submanifold; then ∆*NM is analytic and hence universally measurable.
Proof. Apply Lemma 1.8.5, the proof is the same as that of Proposition 1.8.4.
Now we are prepared to study the measure-theoretic size of unions of tangents and normals, and of codiagonals of vector bundles over submanifolds.
2. Size of union of normals and tangents
2.1. Normal Bundles. We first study the size of normal bundles.
Taking the tubular neighborhood theorem into account, one may expect that the normal bundle over a smooth submanifold always contains an open subset. Actually more is true: using the square distance function, we will show that this assertion holds for all differentiable submanifolds.

Definition 2.1.1. Take any subset X ⊆ R^n and point q ∈ R^n, and define the square distance S_q^X : X → R by S_q^X(p) := |p − q|².

Proof. Take any q ∈ R^n; since M is compact, the square distance S_q^M attains its minimum on M.

Proof. Take any p ∈ M; for a sufficiently small r > 0, we have B := … This proves that ∀ p ∈ M, p ∈ int(∆*NM). Therefore the required neighborhood exists.
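The first-order condition behind these arguments can be written out explicitly. If p ∈ M minimizes S_q^M at least locally, and γ is any differentiable curve in M with γ(0) = p, then
\[
  0 \;=\; \frac{d}{dt}\Big|_{t=0}\,\lvert \gamma(t) - q\rvert^{2} \;=\; 2\,\bigl\langle \gamma'(0),\, p - q \bigr\rangle,
\]
so p − q ⊥ T_pM, i.e. q ∈ p + N_pM ⊆ ∆*NM. In particular, for compact M the minimum is always attained, so every q ∈ R^n lies on some normal and ∆*NM = R^n.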
2.2. Tangent Bundles.
For the union of tangents, the situation is more complicated. We start by estimating the Hausdorff dimension.
The following example shows that the inequalities in Proposition 2.2.2 and Corollary 2.2.3 are sharp.
It is easy to show that ∆*Tγ_1 = C \ ({|z| < 1} ∪ {Re z ⩽ 0, |Im z| ⩽ 1}), and mutatis mutandis for γ_2, γ_3, γ_4; the claimed equality for the union of the four arcs is then clear. The following example shows that the Lipschitz condition in Proposition 2.2.2 is necessary.
It is clear that ∀ t ∈ U, ϕ(t) = α(t) + α′(t) ∈ ∆*Tγ, i.e. ϕ(U) ⊆ ∆*Tγ. By the self-similarity of ϕ, we have that ϕ(U) = ϕ(R) = R^{n−d+1}. This proves that ∆*Tγ = R^{n−d+1}.

Remark 2.2.6. The phenomenon in Proposition 2.2.5 is counterintuitive. It shows that the tangent lines of a C^1 regular curve can fill up the ambient space R^n, and its tangent bundle is a 2-dimensional surface embedded in R^n × R^n with n-dimensional codiagonal.
From the previous discussion we have seen that unions of tangents of various submanifolds can be very different in size. For a differentiable submanifold M we shall provide a sufficient condition for int(∆*TM) ≠ ∅. By Proposition 2.2.1 and Corollary 2.2.3, M should be substantial and dim M ⩾ n/2, but the following example shows that these two conditions are not enough.
Proposition 2.2.7. Let g : I → R^4 be a substantial C^3 regular curve with positive curvature; then its one-sided tangent developable Σ^+ is a substantial C^2 regular surface in R^4 s.t. L^4(∆*TΣ^+) = 0.
Proof. By Proposition 1.2.1 Σ + is a substantial C 2 regular surface in R 4 .
By strengthening the condition we obtain the following theorems.
Proof. Wlog assume that Ω = I^d and x_0 = 0. It suffices to prove that int Φ_f(I^d × R^d) ≠ ∅. The coordinate functions of Φ_f are continuous and differentiable at (0, b), and by Corollary 1.7.6 it remains to prove that ∃ b_0 ∈ R^d s.t. ∂Φ_f(x, a)/∂(x, a) at (0, b_0) has full rank. Since by assumption d²_0 f is non-degenerate, there exists b_0 ∈ R^d s.t. the contraction b_0^i ∂²f^k(0)/∂x^i∂x^j has full rank.
This concludes the proof.

By assumption ∃ local chart U ⊆ M s.t. U = Γ_f, where up to rigid motion and homothety f : I^d → R^{n−d} is as in the previous theorem. The coordinate functions of Φ_f are continuous and differentiable at (0, b), and moreover by Corollary 1.7.6 it remains to prove that ∃ b_0 ∈ R^d s.t. ∂Φ_f(x, a)/∂(x, a) at (0, b_0) has full rank, i.e. that the contraction b_0^i ∂²f^k(0)/∂x^i∂x^j has full rank. Since by assumption d²_0 f is non-degenerate, such a b_0 exists. This concludes the proof.
3. Further results for hypersurfaces
In the previous sections we gave a criterion for a C^1 submanifold M to have int(∆*TM) ≠ ∅. It turns out that if M is a hypersurface then this result can be significantly improved.
3.1. Tangent Bundle over Hypersurfaces. We start with the following important observation.

Lemma 3.1.1. For any a ∈ I^n and differentiable f : I^n → R, the image φ_a^f(I^n) is path-connected.

Proof. Take any x ∈ I^n \ {a} and denote λ := (x + a)/2; then t ↦ (λ − a)t + λ, t ∈ [−1, 1], is a segment connecting a and x. Define the corresponding path along this segment; this proves that φ_a^f(I^n) is path-connected.

Here we give a straightforward application of Lemma 3.1.1 to real functions.
Solving the ordinary differential equation y + (a − x)y′ = b by separating variables, we obtain that f must be affine, which contradicts the nonlinearity of f. Therefore ∀ a ∈ R, φ_a^f is not constant.

Now we are prepared to state the main theorems.

Proof. By Corollary 1.3.9, ∃ affine map ℓ : I → I^n s.t. f∘ℓ is nonlinear; moreover clearly f∘ℓ is differentiable. Therefore by Corollary 3.1.2, ∃ s ∈ R s.t. φ_s^{f∘ℓ}(I) is not a singleton. Since by Lemma 3.1.1 this image is also path-connected, it contains a non-degenerate interval. This proves that int(∆*TΓ_f) ≠ ∅.

Proof. For any k = 1, …, m, by Corollary 1.3.6, ∃ local chart U_k ⊆ M_k s.t. U_k = Γ_{f_k}, where up to rigid motion and homothety f_k : I^{n_k} → R is nonlinear and differentiable. By Proposition 3.1.3, int(∆*TΓ_{f_k}) ≠ ∅ for each k.
This concludes the proof.
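In detail, the separation-of-variables step behind Corollary 3.1.2 runs as follows: if φ_a^f ≡ b, i.e. f(x) + (a − x)f′(x) = b, then writing u := f − b gives
\[
  u + (a - x)\,u' = 0 \;\Longrightarrow\; \frac{u'}{u} = \frac{1}{x - a} \;\Longrightarrow\; \ln\lvert u \rvert = \ln\lvert x - a \rvert + c \;\Longrightarrow\; f(x) = b + C\,(x - a)
\]
on each interval where u does not vanish (and f ≡ b where u ≡ 0), so f is affine.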
3.2. Hyperplane covering of a hypersurface. Moreover, if we assume more smoothness, then we are able to obtain a much stronger result.
is absolutely continuous and hence satisfies Lusin's N-property.

Take any x_0. For any L ∩ Π_{x_0} ∈ A_{x_0}: since codim L = 1, dim Π_{x_0} = 2 and L ∩ Π_{x_0} ≠ ∅, by Krull's principal ideal theorem we have that dim L ∩ Π_{x_0} = 1 or 2. Since L and Π_{x_0}, and hence L ∩ Π_{x_0}, are affine spaces, we have that L ∩ Π_{x_0} = Π_{x_0} or L ∩ Π_{x_0} is an affine line in Π_{x_0}. We claim that dim_H ⋃A_{x_0} = 2. Indeed, if Π_{x_0} ∈ A_{x_0} then ⋃A_{x_0} = Π_{x_0} and hence dim_H ⋃A_{x_0} = 2; if Π_{x_0} ∉ A_{x_0} then A_{x_0} is a family of affine lines covering Γ_{f_{x_0}}, and by Corollary 9 in [9] we have dim_H ⋃A_{x_0} = 2.

Proof. Since k-th differentiability is preserved under restrictions and differentiable functions also satisfy Lusin's N-property, the same proof as in Theorem 3.2.1 yields the desired result.
The results above show that the union of any family of hyperplanes that covers a (sufficiently smooth) nonlinear hypersurface in R^{n+1} always has full Hausdorff dimension. However, if we drop the nonlinearity condition, then the Hausdorff dimension of the union can be an arbitrary real number between n and n + 1.
The corresponding constructions are based on the following theorem, which is proved by Falconer and Mattila using energy integrals. (Here for any q := (a, b) ∈ R^n × R we denote by L(q) the hyperplane {(x, y) ∈ R^n × R | y = a^T x + b}.)

Proof. Wlog we assume that M is an open subset of L(0). Let C be an (s − n)-dimensional Cantor set embedded in R^{n+1}, and denote X := R^{n+1} and E := …

4. Generic C^1 vector bundles

4.1. Genericity of Vector Bundles. In this section we study the conditions for a C^1 vector bundle to attain its expected size. In section 2 we have shown that normal bundles always have large size while tangent bundles do not. Here, assuming C^1 smoothness, we will prove that if a vector bundle over a submanifold M is in general position with the tangent bundle of M, then it attains its expected size.
Recall that a k_1-subspace V_1 and a k_2-subspace V_2 of R^n are in general position iff dim V_1 ∩ V_2 = max{(k_1 + k_2) − n, 0}. We extend this notion naturally to vector bundles, and give a definition of generic vector bundles in terms of tangency.
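For example, two distinct 2-subspaces V_1, V_2 of R^3 are in general position iff they meet in a line, since
\[
  \dim V_1 \cap V_2 \;=\; \max\{(2 + 2) - 3,\ 0\} \;=\; 1,
\]
while a line and a plane in R^4 are in general position iff they meet only at the origin (max{(1 + 2) − 4, 0} = 0).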
Therefore by the implicit function theorem, ∃ open subset Ω_0 of Ω × R^k s.t. ∆*F|_{Ω_0} parametrizes a C^1 (k + d)-submanifold of R^n, and hence L^{k+d}(∆*F(Ω_0)) = Vol^{k+d}(∆*F(Ω_0)) > 0. Since ∆*E ⊇ ∆*F(Ω × R^k) ⊇ ∆*F(Ω_0), we obtain that L^{k+d}(∆*E) ⩾ L^{k+d}(∆*F(Ω_0)) > 0. Moreover, if k + d =: m = n then ∆*F(Ω_0) is an n-submanifold, i.e. an open subset of R^n, and hence int(∆*E) ⊇ ∆*F(Ω_0) ≠ ∅.

Case 2: k > n − d. In this case m > n. Denote ℓ := (k + d) − n. Since dim ϕ(0) ∩ T_0M = ℓ, by continuity ∃ common trivializing chart N of E and TM in the vicinity of 0 s.t. ∀ p ∈ N we have dim ϕ(p) ∩ T_pM ≡ ℓ. Therefore, pulling back E by the inclusion i : N ↪ M, E_0 := i*E ∩ TN is locally trivial and hence a C^1 ℓ-vector bundle over N. By Swan's theorem we have the splitting …

Assume that the base manifold is not a point; then by Definition 4.1.1 it is clear that a C^1 line bundle is non-generic iff it is a C^1 distribution, i.e. a C^1 line subbundle of the tangent bundle. In this case we also say that the bundle is tangent to the base manifold.
4.2. Semi-generic Line Bundles. We proceed by showing under which conditions a non-generic C^1 vector bundle can attain its expected size.
Firstly, we give an example of a degenerate case. Let ϕ*τ_k(n) be a C^1 vector bundle over a C^1 submanifold M defined by ϕ : M → Gr_k(n); then ϕ_1 := ϕ∘ϕ*π_1 in the following diagram pulls back a C^1 vector bundle over the C^1 submanifold ϕ*τ_k(n). Then we have
ϕ_1^*(ϕ*τ_k(n)) = {((p, u), (0, v)) ∈ R^{2n} × R^{2n} | u, v ∈ ϕ(p), p ∈ M},
and hence ∆*ϕ_1^*(ϕ*τ_k(n)) = {(p, u + v) | u, v ∈ ϕ(p), p ∈ M} = ϕ*τ_k(n), so the pulled-back bundle does not attain its expected size.

More concretely, consider the line bundle L on a ruled surface S in R^3 generated naturally by the ruling; then we have ∆*L = S. Notice that in the example above, all points on the base manifold are degenerate in the sense that they are inflection points. We will show that this is not a coincidence.
For simplicity we restrict our consideration to line bundles. In this case we are able to introduce a notion of degeneracy in terms of foliation.
By Frobenius' theorem any C 1 line subbundle of the tangent bundle is integrable and hence defines a regular 1-foliation with C 2 leaves. Conversely, any regular 1-foliation F with C 2 leaves generates C 1 line distribution T F .
The C^2 smoothness of the integral submanifolds allows us to get more information about further degeneracy.
For example, Id M is the deformation induced by the zero section θ, and ∆ * θ = i M implies that Id M is ε-small for all ε > 0.
A deformation is by definition a C^1 surjection, but in general not necessarily invertible, and its image is not necessarily a submanifold.

Proof. If L is not tangent to M then Id_M is a required deformation. It remains to consider the case that L := TF is a distribution, where F is a regular 1-foliation with C^2 leaves s.t. ∃ p_0 ∈ γ ∈ F with T²_{p_0}γ ⊈ T_{p_0}M. Wlog assume that p_0 = 0 ∈ R^n. Let U ⊆ M be a trivializing chart in the vicinity of 0 with chart map φ : U → I^d and C^1 parametrization f := φ^{−1} : I^d → U. Since L is trivial on U, there exists a C^1 unit vector field X : I^d → S^{n−1} s.t. X∘φ generates L on U. Let ρ : I^d → R be a compactly supported bump function with supp(ρ) =: K s.t. ∥ρ∥_∞ = ρ(0) = 1 and ∇_0ρ = 0. For any t ∈ R, extend t(ρX)∘φ : U → R^n by zero to a C^1 vector field v_t on M, and define σ_t : M → L by σ_t(p) = (p, v_t(p)); then σ_t is a C^1 global section of L. Denote by h_t := h_{σ_t} the deformation induced by σ_t and by M_t its image.

For any t ∈ R, define f_t := f + t(ρX). Take any x ∈ I^d \ K; then ∀ t ∈ R, J_x f_t = J_x f + tJ_x(ρX) = J_x f + t·0 = J_x f has full rank. Take any x ∈ K; since J_x f has full rank, by continuity ∃ δ_x > 0 s.t. ∀ t ∈ B¹_{δ_x}, J_x f_t has full rank. By the compactness of K, ∃ δ > 0 s.t. ∀ x ∈ I^d, ∀ t ∈ B¹_δ, J_x f_t has full rank. By the implicit function theorem, ∀ t ∈ B¹_δ we have that f_t parametrizes the C^1 submanifold U_t := f_t(I^d) and hence is a C^1 diffeomorphism onto its image.

Parametrize γ in the vicinity of 0 as an integral curve of the unit vector field X∘φ, i.e. γ(s) := f(α(s)) where α : I → I^d is a C^1 curve s.t. γ′(s) = X(α(s)). Wlog assume that α(0) = 0 and denote b := α′(0); then we have … This proves the claim.
Since for sufficiently small λ the corresponding bound is < ε, we have that h_λ is ε-small. This concludes the proof.

Now we are prepared to state the main theorem. It follows straightforwardly from the following lemma, which claims that deformation along a line bundle preserves the codiagonal.
Proof. Denote by ϕ : M → P^{n−1} the C^1 map defining L. Since ∀ p ∈ M, σ(p) ∈ ϕ(p), we have that …

We discuss an application of Theorem 4.2.7 to C^2 submanifolds. In this case the degeneracy of line bundles can be restricted by the geometry of the base manifold. Parametrize γ in the vicinity of p by γ(t) := (α(t), f(α(t))), where α is a suitable C^1 curve; the resulting expression involves Hess_0 f, the Hessian tensor of f at 0.

Proof. Since ∃ p ∈ M s.t. p is not an inflection point, by Lemma 4.2.8, for every regular 1-foliation F of M with C^2 leaves we have T²_pγ ⊈ T_pM, where γ is the leaf containing p. Therefore every C^1 line bundle L on M is semi-generic. The statement then follows immediately from Theorem 4.2.7.
5. Line bundles over hyperplanes and ellipsoids
In this section we focus on (continuous) line bundles over classical geometric objects such as hyperplanes, convex curves and ellipsoids.

5.1. Algebraic Topological Lemmas. Before we discuss the size of line bundles, we shall introduce several useful lemmas.
Recall that for any u, v ∈ S^n ⊆ R^{n+1} we define ∠(u, v) := arccos(u^T v); then ∠ coincides with the geodesic distance on S^n w.r.t. the Riemannian metric induced from R^{n+1}.

Definition 5.1.1. For any a ∈ S^n and θ > 0, define V_θ(a) := {u ∈ S^n | ∠(u, a) ⩽ θ}, the geodesic disk at a with radius θ, and W_θ(a) := {u ∈ S^n | ∠(u, a) = θ}, its boundary.
We have the following analog of Borsuk's non-retraction theorem. The proof is adapted from Lemma 2.5 in [3] and the condition is modified and slightly weakened for our purpose.
Denote X := S^{n−1} \ {b, c} and A := V_{θ+ε}(a) \ V_{θ−ε}(a), i.e. the ε-neighborhood of W_θ(a). Since ∀ u ∈ A there clearly ∃! distance-minimizing geodesic connecting u and W_θ(a), we conclude that the metric projection P := P_{W_θ(a)} : A → W_θ(a) is a well-defined (single-valued) map. Since by assumption f(W_θ(a)) ⊆ A, we have that P∘f|_{W_θ(a)} is a well-defined continuous map.
Since X ≃ W_θ(a), it is easy to check that P extends continuously to a deformation retraction P̃ : X → W_θ(a). Since by assumption b, c ∉ f(V_θ(a)), i.e. f(V_θ(a)) ⊆ X, we obtain that P̃∘f is a well-defined continuous map extending P∘f|_{W_θ(a)}, a contradiction.
The next lemma describes the size of a special continuous family of half-lines parametrized by a disk.
Since U_0 is a dense open subset of P^{n+1}, we have that ⋃_{x∈D^n} l_x ∩ U_0 has non-empty interior.

In order to reformulate the propositions above using the language of fiber bundles, we introduce the following notion of transversality.
Definition 5.1.5. Let E be a k-vector bundle over a subset X of R^n defined by ϕ : X → Gr_k(n), and let V be an (n − k)-subspace of R^n; then E is said to be transverse to V (denoted by E ⋔ V) iff ∀ x ∈ X, ϕ(x) ⋔ V.

For simplicity, for any subset X of R^n and f ∈ C(X) we introduce the notation P_f := id_X × 0 : Γ_f → X × {0} for the projection (x, f(x)) ↦ (x, 0). Since it is clear that P_f is a homeomorphism, pullback by P_f^* is a 1-1 correspondence, i.e. all line bundles over Γ_f have the form P_f^*L where L is a line bundle over X × {0}, and vice versa.
Therefore the following reformulation is also useful.

5.2. Line Bundles over Hyperplanes, Convex Curves, and Spheres. Now we shall discuss applications of the theorems above.
We start with line bundles over linear hypersurfaces and, in particular, hyperplanes.

Proof. Wlog assume that M := Ω × {0} ⊆ R^n × R, where Ω is an open subset of R^n.
⟹: Provided that L is tangent to M, then ∆*L ⊆ ∆*TM = R^n × {0} and hence dim_H ∆*L ⩽ dim R^n = n; in particular int(∆*L) = ∅.
We also have the following application to convex functions and convex curves.

Proof. Let E be the line bundle over I s.t. P_f^*E = L, and denote by ϕ : I → P^1 the continuous map defining E.
We consider the following two cases:

Case 1: ∀ x ∈ I, ϕ(x) is a supporting hyperplane of the epigraph of f at (x, f(x)). For any a ∈ I, since Γ_f ∩ {(x, y) ∈ R² | x > a} ≠ ∅ and Γ_f ∩ {(x, y) ∈ R² | x < a} ≠ ∅, we have that ϕ(a) is not the vertical line. Denote by U_0 := {[x : y] | x ≠ 0} ⊆ P^1 the standard affine chart and by π_0 : [x : y] ↦ y/x the chart map; then ϕ(I) ⊆ U_0. Define g : I → R by g := π_0∘ϕ, and denote by C the set of non-differentiable points of f. Since f is convex, we have that C is countable and I \ C is dense in I. By the continuity of g, g(I \ C) is dense in g(I), and g(I) is a connected subset of R, i.e. a (finite or infinite) interval.
Denote by f′_L the left derivative of f; then C is the set of discontinuities of f′_L, and ∀ x ∈ I \ C we have f′_L(x) = f′(x). Recall that for any x ∈ I \ C the supporting hyperplane of the epigraph of f at (x, f(x)) is unique and coincides with the tangent space of Γ_f at (x, f(x)); we obtain that f′_L|_{I\C} ≡ g|_{I\C}. Therefore f′_L(I \ C) = g(I \ C) is dense in the interval g(I), and hence f′_L has no jump discontinuity. Since the one-sided derivative of a convex function is monotone and hence admits only jump discontinuities, we conclude that f′_L is continuous. Therefore C = ∅, f′_L ≡ f′, and f is C^1. Moreover, since f is strictly convex, we have that f is nonlinear. By Proposition 3.1.3 we have int(∆*TΓ_f) ≠ ∅.
Case 2: ∃ x_0 ∈ I s.t. ϕ(x_0) is not a supporting hyperplane of the epigraph of f at (x_0, f(x_0)). Up to rigid motion and homothety of R² ⊇ Γ_f, wlog we assume that x_0 = 0 and f(x_0) = 0 is the global minimum of f. Since the horizontal line R × {0} is a supporting hyperplane of the epigraph of f at (x_0, 0), we have that ϕ(0) ≠ [1 : 0]. By the continuity of ϕ, wlog we assume that ∀ x ∈ I, ϕ(x) ≠ [1 : 0]; it then follows that ∆*j*E has non-empty interior.

Notice that the strict convexity condition in the two theorems above cannot be weakened, since piecewise-linear curves clearly do not enjoy the desired property.
Proposition 5.2.6. Let L be a line bundle over an open subset U of an n-sphere in R^{n+1}; if L is not tangent to U then int(∆*L) ≠ ∅.
Proof. Denote by S ⊆ R^{n+1} the n-sphere; wlog we assume that S is centred at 0 with radius R > 1. Then in the vicinity of the north pole p := (0, R), S is the graph of f(x) = √(R² − |x|²), |x| < R. Wlog assume that U is an open neighborhood of p and L is not tangent to U at p.
Denote by ϕ : U → P^n the continuous map defining L. Since T_pS = R^n × {0}, we have ϕ(0, R) ⋔ R^n × {0}. By continuity, up to a homothety, ∀ x ∈ D^n we have (x, f(x)) ∈ U and ϕ(x, f(x)) ⋔ R^n × {0}. It is clear that f|_{D^n} ∈ C(D^n) and f|_{∂D^n} ≡ constant. Since i*L is a restriction of L, we conclude that int(∆*L) ⊇ int(∆*i*L) ≠ ∅.
As an application of the Poincaré-Hopf index formula, we know that there exists no continuous nonsingular tangent vector field on an even-dimensional sphere. Taking this into account, from Proposition 5.2.6 one deduces that any line bundle over an even-dimensional sphere generated by a continuous unit vector field attains its expected size.
Actually more is true, as we have the following proposition.

Proposition 5.2.7. Let n ⩾ 1 and let D be a continuous line distribution over an n-sphere S in R^{n+1}; then ∆*D = R^{n+1} \ B, where B is the open ball bounded by S.
Acknowledgements
I would like to express my sincere gratitude to Professor Keleti, who taught me geometric measure theory, and my special thanks to Professor Keleti and Professor Csíkós for very useful comments and suggestions.
Ca-doped rare earth perovskite materials for tailored exsolution of metal nanoparticles
Introduction
The general formula of oxide-type perovskites is ABO3 (see Fig. 1a), where A and B are cations of different sizes. The smaller cations B form corner-connected coordination octahedra with O atoms. Many different combinations of A- and B-cations can be realized, and mixed cations on both A and B substructures are possible as well. Due to the wide range of properties resulting from this compositional flexibility, perovskites are a highly versatile class of materials; possible applications include sensors (Rahimi et al., 2019), electrode materials in fuel cells (Lu et al., 2016), photodetectors (Ahmadi et al., 2017), light-emitting diodes and three-way converters (Keav et al., 2014). Moreover, these materials are promising candidates for a large variety of uses in catalysis (Hwang et al., 2017). It is possible to modify and fine-tune perovskites according to the desired properties, thus enabling rational material or catalyst design. This outstanding feature has led to intense research on possible novel perovskites and their chemical composition. For example, Vieten et al. (2019) highlighted in a recent theoretical and experimental study how they can optimize the perovskite composition and structure for their application as oxygen carriers in thermochemical processes. A recent comprehensive overview on the design, synthesis, properties and applications of rare-earth-doped perovskites is given by Zeng et al. (2020). Also, for application as a cathode material in solid oxide fuel cells, the constant development of improved perovskites has been reported (Cascos et al., 2019). The application of rare-earth-based perovskites as suitable catalysts has already been investigated by many groups, for instance by Lim et al. (2018) for doped and undoped LaFeO3 and related materials. Furthermore, the group of Irvine reported a novel approach for exchanging A-site cations with Ni (Lee et al., 2020). They suggested that Ni exsolving from the A-site of a perovskite leads to the formation of catalytically highly active Ni nanoparticles, and thus to superior electrocatalysts for the oxygen evolution reaction.
A key requirement for excellent catalyst performance is the presence of catalytically active sites -typically, these consist of metal, alloy or oxide nanoparticles embedded in an oxide support material. Usually, catalytically active nanoparticles are prepared by deposition (Yates & Campbell, 2011), impregnation (Gorte & Vohs, 2009) or precipitation (Rousseau et al., 2010) techniques followed by catalyst activation prior to reactions via oxidation and reduction cycles. Often, these methods offer only limited control over the exact structure of the catalyst surface (Neagu et al., 2013). A recently emerging alternative method for preparing catalytically active sites uses nanoparticles grown in situ, directly from the oxide support itself (Neagu et al., 2013). Perovskites are able to incorporate catalytically highly active elements [e.g. Ni (Neagu et al., 2013;Kobsiriphat et al., 2010), Fe (Neagu et al., 2013;Opitz et al., 2015), Co (Adijanto et al., 2012), Cu (Neagu et al., 2013;Adijanto et al., 2012), Pt (Katz et al., 2012;Tanaka et al., 2006) and Pd (Tanaka et al., 2006;Katz et al., 2011;Eyssler et al., 2011)] -both as dopants or as main B substructure components. Subsequently, these elements can be partially exsolved as nanoparticles under reducing conditions (Fig. 1b), leading to smaller and more finely distributed particles on the surface (Nishihata et al., 2002).
The mechanism and driving forces of nanoparticle exsolution from bulk perovskites have been the topics of multiple studies (Neagu et al., 2013; Oh et al., 2015; Thalinger et al., 2015; Haag et al., 2010). Factors that play a role in exsolution are the reducibility of the B-site cation or dopant element (Kwon et al., 2017), the oxygen partial pressure (connected also to the chemical potential of the H2/H2O gas phase) (Opitz et al., 2015) and the presence of oxygen vacancies (Neagu et al., 2013). Haag et al. (2010) proved by an in situ neutron diffraction study on La0.3Sr0.7Fe0.7Cr0.3O3−δ that by lowering the p(O2) to 10^−21.5 atm, Fe nanoparticles were formed. A possible mechanism was proposed by Neagu et al. (2013), starting with the formation of oxygen vacancies upon reduction. The presence of these vacancies destabilizes the perovskite structure, locally causing the B-site species to exsolve in order to re-establish stoichiometry. For the morphological evolution of nanoparticles on the surface, an explanation has been provided by applying a simple energy-based model, taking into account the interplay between the surface free energy and the strain energy due to metal nucleates being included in the matrix; an influence of these factors on the exsolution process was shown using quantitative strain field modelling.
In this study, previous preliminary work on Nd0.6Ca0.4FeO3−δ and Nd0.6Ca0.4Fe0.9Co0.1O3−δ was extended to examine three additional materials, namely La0.9Ca0.1FeO3−δ, La0.6Ca0.4FeO3−δ and Nd0.9Ca0.1FeO3−δ. For all five perovskite materials, the atomic and electronic structure, morphology and exsolution behaviour were investigated, with a focus on the effect of A-site doping with Ca. As reported previously by the authors, all five perovskites have been tested for their catalytic performance and are potential catalyst materials for high-temperature water-gas shift reactions. The characterization was performed utilizing various experimental methods: in situ powder X-ray diffraction (XRD), scanning electron microscopy (SEM) combined with energy dispersive X-ray spectroscopy (EDX), and inductively coupled plasma-optical emission spectrometry (ICP-OES). The experimental results were supported by density functional theory (DFT) calculations. Additionally, exsolution properties were studied using in situ XRD and SEM/EDX.
Synthesis of doped perovskites
The different perovskite powders were synthesized via the Pechini (1967) route. For this, the cations were mixed in the desired stoichiometric ratio using commercial precursor chemicals (Darmstadt, Germany). Cation complexes were formed by adding citric acid (99.9998% trace metals pure, Fluka) in a molar ratio of 1.2 with respect to the cations. After evaporation of H2O, the resulting gel was heated until self-ignition. The obtained powder was calcined for 3 h at 800 °C. After grinding, the resulting products were used for the (in situ) XRD, BET, ICP-OES and SEM studies.
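For orientation, the stoichiometry bookkeeping behind such a synthesis can be sketched as follows; the batch size and the choice of La0.6Ca0.4FeO3 as target composition are assumptions for illustration, not the authors' protocol.

```python
# Hypothetical Pechini-route batch calculation (illustrative assumptions only).
M_citric = 192.12  # g/mol, anhydrous citric acid

n_formula = 0.010                              # mol of ABO3 formula units (assumed)
cations = {"La": 0.6, "Ca": 0.4, "Fe": 1.0}    # cations per formula unit

n_cations = n_formula * sum(cations.values())  # total cation moles in the batch
n_citric = 1.2 * n_cations                     # 1.2:1 citric acid:cation ratio (from the text)

print(f"total cations: {n_cations:.4f} mol")
print(f"citric acid:   {n_citric * M_citric:.2f} g")
```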
Characterization methods
The powder XRD measurements were carried out on a PANalytical X'Pert Pro diffractometer (Malvern Panalytical, Malvern, UK) in Bragg-Brentano geometry using a mirror to single out the Cu Kα1,2 radiation and an X'Celerator linear detector (Malvern Panalytical, Malvern, UK). For the in situ experiments, an Anton Paar XRK 900 chamber (Anton Paar, Graz, Austria) was used. The pristine samples were pretreated in O2 at 600 °C for 30 min. After cooling to room temperature, the atmosphere was changed to humidified H2 (using a bubbler at room temperature, p = 1 bar, H2:H2O ≈ 32:1) and the temperature was increased gradually in steps of 25 °C. At each step, an in situ XRD measurement was performed after waiting for 30 min. The data were analysed using the HighScore Plus software (Degen et al., 2014) and the ICDD PDF-4+ 2019 database (ICDD, 2018).
To determine the actual composition of the synthesized samples, ICP-OES was used. The samples were digested in HCl and, after diluting, analysed with an iCAP 6500 ICP-OES spectrometer (Thermo Scientific, Waltham, MA, USA) equipped with a Meinhardt nebulizer and a cyclonic spray chamber (Glass Expansion, Port Melbourne, Australia). The observed signal intensities were converted into concentration units by means of external aqueous calibration. A detailed description of the measurement and quantification procedure is given in the supporting information (§S2).
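A minimal sketch of the external-calibration step, i.e. converting measured intensities into concentrations; all numbers (standards, intensities, dilution factor) are invented for illustration.

```python
# External aqueous calibration for ICP-OES quantification (illustrative sketch).
import numpy as np

conc_std = np.array([0.0, 1.0, 5.0, 10.0, 20.0])                 # standards, mg/L
intensity_std = np.array([40.0, 1.2e3, 5.9e3, 1.18e4, 2.36e4])   # measured counts

# Linear least-squares fit: intensity = slope * concentration + background
slope, background = np.polyfit(conc_std, intensity_std, 1)

intensity_sample = 8.1e3   # measured sample intensity (assumed)
dilution_factor = 100.0    # dilution of the HCl digest (assumed)
c_sample = (intensity_sample - background) / slope * dilution_factor
print(f"concentration in the original digest: {c_sample:.0f} mg/L")
```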
The SEM experiments were carried out on a FEI QUANTA 250 FEG scanning electron microscope (FEI Company, Hillsboro, OR, USA) equipped with an EDAX Octane Elite X-ray detector (EDAX Inc., Mahwah, NJ, USA) using (if not mentioned otherwise) an acceleration voltage of 5 kV to achieve sufficient surface sensitivity. The images were recorded using secondary electrons.
DFT calculations
Spin-polarized DFT (DFT+U) calculations were carried out with the software package WIEN2k, which uses the augmented plane wave plus local orbital method (Karsai et al., 2017), treating all electrons in a full potential [FP-(L)APW+lo]. To treat exchange-correlation, the GGA functional PBE (Perdew et al., 1996) was chosen and a Hubbard U (Anisimov et al., 1991) was added to properly consider localized electrons: U was always included for Fe 3d electrons and, for the Nd materials, for Nd 4f electrons. For all materials, U_eff was set at 4 eV (Kraushofer et al., 2018; Nilsson et al., 2013), both for Fe and Nd states.
The simulation of Nd0.5Ca0.5FeO3 has been described in a previous study. Here, the following computational parameters have been used: structure models for the calculations were created using the experimentally found lattice parameters (XRD data are presented in §3.2; see Table 3 and Fig. 2). The unit cells of the bulk materials contain four formula units of LaFeO3 and NdFeO3, respectively. These unit cells (cf. Fig. 3) were used directly to simulate La0.5Ca0.5FeO3 and Nd0.5Ca0.5FeO3 (corresponding to 50% doping), where two of the four La/Nd atoms were substituted by Ca. Additionally, √2 × √2 × 1 supercells (see Figs. S4 and S5 in the supporting information) were set up, resulting in cells with eight formula units, thus enabling simulations of La0.875Ca0.125FeO3 and Nd0.875Ca0.125FeO3 (corresponding to 12.5% doping) by substituting one La/Nd atom with Ca. The O substructure was chosen fully occupied to emulate materials under highly oxidizing conditions and to reduce computational cost. From a defect-chemical point of view, this corresponds to charge compensation by electron holes rather than oxygen vacancies. This assumption is further justified by the fact that the materials were prepared in air and cooled slowly, which means that the expected vacancy concentration is low. Consequently, the effects of Ca doping are assumed to play a more important role than oxygen vacancies. The atomic positions were optimized until residual forces were below 1 mRy/Bohr for all cases.
While La could be treated nonmagnetically, both Nd and Fe exhibit magnetic ordering: Fe atoms are arranged in a type-G (Wollan & Koehler, 1955) antiferromagnetic substructure (Sławiński et al., 2005), where the spin (up and down, respectively) of any given Fe atom orients antiparallel to all next-nearest neighbours. Experimental studies have shown long-range antiferromagnetic ordering of the Nd atoms in NdFeO3 at very low temperatures of about −271 °C (Sławiński et al., 2005; Bartolomé et al., 1997). However, to reduce computational cost, a ferromagnetic set-up was used in initial simulations, since ferromagnetic Nd ordering only weakly affects the moments of other atoms (of the order of 0.01 μB). The results of the ferromagnetic calculations (optimized positions) have been used to check structures with antiferromagnetically ordered Nd atoms; no significant changes of the residual forces could be observed (all forces remained below 1.5 mRy/Bohr). Therefore, no additional calculations with antiferromagnetic Nd have been performed.
Atomic sphere radii for all calculations were chosen as follows: 2.20, 1.92 and 1.60 Bohr for La/Nd/Ca, Fe and O, respectively. The basis set size is given by the plane wave cutoff RK_max, where the smallest atomic sphere radius is R and K_max is the largest vector in reciprocal space to be considered. RK_max = 9 (La cases) and RK_max = 8 (Nd cases) were used. Brillouin-zone integrations of the doped materials were done on an 8 × 6 × 8 Monkhorst-Pack k-mesh (Monkhorst & Pack, 1976) for 50% doping and a 6 × 6 × 6 mesh for 12.5% doping.
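The fractional coordinates of such a mesh follow directly from the Monkhorst-Pack prescription; the sketch below generates the 8 × 6 × 8 grid without the symmetry reduction that the actual WIEN2k setup would apply.

```python
# Fractional k-points of a Monkhorst-Pack grid (Monkhorst & Pack, 1976).
from itertools import product

def monkhorst_pack(q1, q2, q3):
    def axis(q):
        # u_r = (2r - q - 1) / (2q), r = 1..q
        return [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return list(product(axis(q1), axis(q2), axis(q3)))

kpts = monkhorst_pack(8, 6, 8)
print(len(kpts), "k-points; first:", kpts[0])  # 384 k-points before symmetry reduction
```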
The results of the A-site-doped materials were compared with those of undoped bulk LaFeO3 (Falcón et al., 1997) and NdFeO3 (Streltsov & Ishizawa, 1999), which were obtained using the same parameters as the respective 50%-doped perovskites.
Choice of materials
The studied perovskites should fulfil several prerequisites for their intended application in catalysis. First, it should be possible to prepare them reproducibly, and they should be stable. They should show a good performance as catalysts and they should be able to incorporate catalytically active elements, which can be exsolved upon reduction (Neagu et al., 2013) to form catalytically highly active nanoparticles on the surface. In particular, the exsolution of the most catalytically active elements under reasonable conditions is preferred. Lastly, to be able to use them as electrode materials in electrochemical cell design (e.g. in fuel cells or to apply a polarization for in situ fine-tuning of the exsolution process), they should have mixed ionic electronic conductivity (MIEC) (Opitz et al., 2018).

Fig. 3. Perovskite structures of the A-site-doped materials without B-site doping, found by Rietveld refinement. All synthesized perovskites are isotypic and crystallized in the orthorhombic space group Pnma (No. 62). The structures are related to the ideal cubic perovskite structure, but with tilted Fe coordination octahedra. Differences of the structures between the various materials were found only in the extent of the tilting and the distortion of the octahedra.
To achieve these properties, the elements for the A- and B-sites of the ABO3 perovskites have to be chosen appropriately. Furthermore, both sites can be doped or a nonstoichiometry can be introduced, with a deficiency on either of the sites. The starting point for the choice of composition for our materials was the commercially available material La0.6Sr0.4FeO3−δ (LSF). This type of perovskite is widely used by different research groups and therefore extensive reference data for defect chemistry (Kuhn et al., 2011; Schmid et al., 2018b) and electrochemical properties (Preis et al., 2004; Schmid et al., 2018a; Søgaard et al., 2007) exist. The replacement of La3+ with the acceptor dopant Sr2+ leads to a charge imbalance in the perovskite structure, giving rise to a complex defect chemistry. Depending on oxygen partial pressure and temperature, this charge mismatch is compensated by either oxygen vacancies (denoted by the δ in the formula O3−δ) or mixed Fe4+, Fe3+ and Fe2+ valence states. These defects are responsible for the required MIEC properties (Yoon et al., 2009).
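In Kröger-Vink notation, the two compensation mechanisms mentioned above take the following standard textbook form (given here for orientation; the cited defect-chemical studies contain the full treatment):
\[
  2\,\mathrm{SrO} \;\xrightarrow{\ \mathrm{La_2O_3}\ }\; 2\,\mathrm{Sr}'_{\mathrm{La}} + 2\,\mathrm{O}_{\mathrm{O}}^{\times} + \mathrm{V}_{\mathrm{O}}^{\bullet\bullet},
  \qquad
  \tfrac{1}{2}\,\mathrm{O}_2 + \mathrm{V}_{\mathrm{O}}^{\bullet\bullet} \;\rightleftharpoons\; \mathrm{O}_{\mathrm{O}}^{\times} + 2\,h^{\bullet},
\]
where each electron hole h• corresponds to the oxidation of one Fe3+ to Fe4+ on a B site; which side of the second equilibrium dominates depends on oxygen partial pressure and temperature.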
Additionally, the A-site elements and doping amount influence the thermochemical stability and exsolution behaviour. A-site cations with smaller ionic radii (e.g. Ca and Nd) lead to distortions in the perovskite structure, affecting its stability (see §3.2). The stability is also affected by the chemical properties of the elements, e.g. reducibility and segregation tendency. Thus, varying the A-site composition allows tuning of the exsolution properties. Therefore, we exchanged the Sr doping for Ca with the aim of increasing the stability and investigated the effect of Ca doping (40 and 10% of A-site ions). Also, we exchanged La with Nd to increase the structural distortions and to facilitate XPS studies. Since a future goal is to exsolve metallic Ni particles from these host perovskites (respective work is in progress), the overlap of Ni 2p and La 3d lines can be avoided by exchanging La with Nd, thus allowing an easier XPS characterization of such an Ni-containing exsolution catalyst. Furthermore, rare earth materials are generally known for their catalytic activity, and both elements were reported in the literature to have a promotional effect on catalytic reactions (e.g. for water-gas shift) (LeValley et al., 2014).
With these prerequisites, the following materials were synthesized: La0.9Ca0.1FeO3−δ (La0.9), La0.6Ca0.4FeO3−δ (La0.6), Nd0.9Ca0.1FeO3−δ (Nd0.9) and Nd0.6Ca0.4FeO3−δ (Nd0.6). In addition, B-site doping with the catalytically promising element Co was tested. For this purpose, the material Nd0.6Ca0.4Fe0.9Co0.1O3−δ (NdCo) was investigated. The specific A-site composition for the doped material was chosen because it proved to be the most stable with respect to exsolution. A stable host perovskite should favour the preferential exsolution of the dopant metal, making it possible to produce only pure Co particles instead of mixtures with Fe particles or alloys with Fe. Table 1 gives an overview of all the materials that were synthesized. Also, doping with Ni was considered, but as it could not be synthesized phase pure, this attempt was dropped (see §S1 in the supporting information for details).
The real composition of the samples after synthesis was determined by ICP-OES; Table S2 in the supporting information summarizes the results. The composition derived by ICP-OES deviates from the desired nominal stoichiometry by a maximum of 10%, which confirms that the respective synthesis was successful. In all materials, there is a slight A-site deficiency. This deficiency is almost negligible for the samples with La, but slightly more prominent for those with Nd. However, an A-site deficiency should enhance the exsolution properties, because exsolution re-establishes the stoichiometry within the material in such a case (Neagu et al., 2013). Therefore, the deviations from the nominal compositions should be beneficial for the later catalytic application of the investigated materials.
Structure characterization
To determine the crystal structure of the freshly synthesized samples, powder X-ray diffractograms of the pristine samples were recorded (see §S3 of the supporting information for experimental tables). For comparison, the commercial material LSF is also displayed. The results for the samples are displayed in Fig. 2. The diffractograms of La0.9 and La0.6 showed similarities in appearance, differing mostly in the exact position of the Bragg peaks and their intensity. Nd0.9 and Nd0.6 showed a very similar diffraction pattern to the samples with La, but some reflections were split and, again, the positions and intensities varied. These similarities of the diffractograms suggested that all the materials had similar structures (as is expected, because they were all perovskite materials from the same family). Also, LSF shows a related pattern sharing the main peaks with the synthesized materials, but the absence of peak splitting (i.e. of weaker reflections) indicates a changed symmetry of the LSF unit cell. In Fig. 2, the measured diffractograms are overlaid with simulated stick patterns of the perovskite structures taken from a database of the same materials (if available, as for La0.9 and LSF) or similar materials (with a slightly different A-site composition). The database structures used were taken from the ICDD PDF-4+ 2019 database (ICDD, 2018) and are listed in Table 2. For Nd0.6, no database entry for a material with a similar A-site composition existed; thus, no stick pattern is shown. The measured Bragg peaks coincided well with the sticks, especially for La0.9 and LSF. For the other samples, when only a similar material was used as a reference, a slight shift could be observed. Also, no additional peaks which are not part of the stick patterns were visible in the diffractograms. This confirmed that the syntheses were successful and perovskite structures were obtained. Furthermore, the absence of additional peaks proves the phase purity of the synthesized materials.

Table 1. Investigated materials with different A- and B-site compositions. The following notations will be used.

Table 2. Compositions and PDF numbers of the database structures used as a reference pattern in Fig. 2 and as a starting point for Rietveld refinement. The positions of the maxima of the largest Bragg peaks are given (the notation refers to the markings in Fig. 2).
When comparing the materials with Ca doping to Sr-doped LSF (Fossdal et al., 2004), the latter shows fewer Bragg peaks in the diffractograms (which is also reflected in the stick patterns). This is due to the different space groups of these materials. The Ca-doped perovskites crystallized in the orthorhombic space group Pnma (No. 62), while LSF has the rhombohedral space group R-3c (No. 167). The differing symmetry of a rhombohedral lattice results in fewer peaks in the diffractogram. The appearance or absence of those peaks could be used to differentiate between the two lattice symmetries.
For an exact determination of the crystal structures, Rietveld refinements were performed with the HighScore Plus software (Degen et al., 2014), using the whole 2θ range of 15-120°. For this, the same structures that were used for the reference stick patterns served as starting points for the respective phases in the refinement (see Table 2). For the materials with a differing A-site composition, the composition of the starting material was corrected before the refinement. Next, the scale factor, the unit-cell parameters and the profile variables (Caglioti parameters) of the phase were refined. Finally, the atomic positions that are not symmetry-fixed were released in a last refinement step. Geometric differences between the starting structures and the final refined structures are listed in Tables S7 and S8 of the supporting information. The simulated intensity profiles after the Rietveld refinement are shown in Fig. 2 as thin black lines. They agree very well with the measured intensities. This agreement was also quantified with the quality parameter χ², which is related to the goodness-of-fit as given by Toby (2006); see Table 3 for the determined values. Values of χ² closer to 1 suggest good refinement results (further agreement indices are listed in the experimental tables; see the supporting information). For all samples, this parameter is below 3, indicating a reasonable fit. For the samples with Nd, the fit is even better than for those with La. In all cases, this supports that the determined structure models were correct.
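The relation between cell parameters and peak positions used implicitly throughout this discussion is Bragg's law combined with the orthorhombic d-spacing formula; a small sketch, with placeholder lattice parameters of the right magnitude rather than the refined values from Table 3:

```python
# 2-theta positions of orthorhombic Bragg reflections for Cu K-alpha1 radiation,
# from 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 and Bragg's law (lambda = 2 d sin(theta)).
import math

wavelength = 1.5406          # Angstrom, Cu K-alpha1
a, b, c = 5.56, 7.84, 5.55   # Angstrom (assumed Pnma cell of typical perovskite size)

def two_theta(h, k, l):
    inv_d2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2
    d = 1.0 / math.sqrt(inv_d2)
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

for hkl in [(1, 0, 1), (1, 2, 1), (2, 0, 2)]:
    print(hkl, f"2theta = {two_theta(*hkl):.2f} deg")
```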
No refinement was performed for the Co-doped material. Its diffractogram is almost identical to that of Nd0.6; all the Bragg peaks visible for Nd0.6 also appear for NdCo, with nearly the same positions, intensities and shapes. Therefore, we concluded that the small amount of doping did not have a strong influence on the structure and assumed that the structure found for Nd0.6 also describes that of NdCo sufficiently well. Any differences between the structures obtained with Rietveld refinement would be within the error of the method.
The structures found with Rietveld refinement are displayed in Fig. 3 and the corresponding unit-cell parameters and unit-cell volumes are given in Table 3. All synthesized doped perovskites are isotypic. Their structures can be derived from the ideal cubic perovskite structure (see Fig. 1a) through a tilting of the Fe coordination octahedra, resulting in reduced symmetry and an orthorhombic structure. As in the ideal cubic structure, the oxygen anions are at the corners of these octahedra. The octahedra are connected via these corners, forming a network with the A cations in between. Similarly, LSF has a rhombohedral lattice instead of the cubic one, induced by another type of octahedral tilting. These structures and the differences between the various materials are discussed further below.

Table 3. Results of the Rietveld refinement. The quality parameter χ² quantifies the agreement of the calculated with the measured diffractogram (values closer to 1 suggest good refinement results). Additionally, the unit-cell parameters a, b and c, and the unit-cell volume V of the orthorhombic unit cells of the perovskite structures determined by Rietveld refinement are given. LSF data (taken from PDF #04-007-6517) are shown for comparison (rounded to the same number of digits); the volume is normalized to four formula units (as in the orthorhombic cell).
La0.9: χ² = 2.575, a = 5.558 (1) Å, b = 7.837 (2) Å, …

The reason for the deviation from the cubic structure can be found in the size of the elements involved. Goldschmidt (1926) gives a criterion for the ideal ratios of the ionic radii. To compensate for too-small A-site cation radii (Goldschmidt tolerance factor below 1), distortions and tilting of the Fe-O octahedra occur in the investigated materials. As Sr2+ is larger than Ca2+, a different tilt system appears in LSF. The two possible symmetries are compared in detail in Fig. S3 (see supporting information). For both symmetries, the alternate tilting of the octahedra can be seen very well, but the tilting patterns are different for the different symmetries. According to Glazer's classification (Glazer, 1972), the tilt system of the orthorhombic structure is given as a−b+a− and as a−a−a− for LSF (Table 4, marked in Fig. S3 of the supporting information). This is necessary due to magnetism and doping, consequently leading to calculations done in low-symmetry space groups (P1 and P1̄) for which no Glazer tilt systems are defined.
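The Goldschmidt criterion itself is a one-line formula, t = (r_A + r_O)/(√2 (r_B + r_O)); the sketch below evaluates it with approximate Shannon-type ionic radii quoted from memory and rounded, so the numbers are indicative only.

```python
# Goldschmidt tolerance factor for A-site-doped ferrite perovskites.
# Ionic radii in Angstrom (12-coordinate A site, 6-coordinate high-spin Fe3+);
# values are approximate and should be checked against tabulated Shannon radii.
import math

r_O = 1.40
r_A = {"La3+": 1.36, "Nd3+": 1.27, "Ca2+": 1.34, "Sr2+": 1.44}
r_Fe3 = 0.645

def tolerance(r_a, r_b=r_Fe3, r_o=r_O):
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

for host, x in [("La3+", 0.6), ("Nd3+", 0.6)]:
    r_mean = x * r_A[host] + (1 - x) * r_A["Ca2+"]   # weighted mean A-site radius
    print(f"{host}/Ca (60:40): t = {tolerance(r_mean):.3f}")  # t < 1 -> octahedral tilting
```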
In an ideal cubic structure, all considered tilt angles α, β and γ should be 0°. Tilting of the octahedra increases these angles; thus, larger angles mean a more pronounced distortion. The perovskites containing La3+, which is larger than Nd3+, have a lower Glazer angle α and less tilting than the materials with Nd3+. The same trend was confirmed when looking at LSF with the larger Sr2+ ions instead of Ca2+ ions. Consequently, LSF has the lowest observed distortion. In general, the tilting is stronger in the orthorhombic structures, induced by the smaller A-site cations and the higher non-ideality. The Glazer angle β does not change in the case of Nd0.9, but decreases significantly for Nd0.6 (compared with the respective La materials). However, the overall distortion of Nd0.6 is still larger than in La0.6 (since the difference in angle α is larger than in angle β). Additionally, it was observed that in the case of the La perovskites, a higher Ca content increases the angle α, while angle β remains unchanged. For the Nd perovskites, angle α also increases with increasing Ca content; however, angle β decreases.
The DFT angles α and β considered for comparison are averaged due to local distortions present in the calculations (the individual angles for one averaged value vary by up to 4°). In general, the calculated averages are larger than the respective experimental values (with the exception of Nd0.6) and agree within 10% (with the exception of La0.9, where the deviation is larger, up to 20%). However, the trend of increasing distortion when exchanging La with Nd was observed as well.
The volume V reflects the sizes of the ions in the respective perovskite, as well as the extent of the tilting (a reduced volume corresponds to a higher Glazer angle α, as tilting reduces the respective unit-cell parameters). When exchanging La3+ with the smaller Nd3+, V is reduced. Larger amounts of Ca doping increase the amount of Fe4+, which replaces the larger Fe3+ (which is consistent with the DFT results). For La0.9 and La0.6, this effect is combined with the replacement of La3+ with the slightly smaller Ca2+, thus also resulting in a lower V. For Nd0.9 and Nd0.6, replacing Nd3+ with Ca2+ leads to larger A-site cations on average, but the volume still decreases with the higher amount of Ca doping. In this case, the effect of the changed B-site cation size due to a changed oxidation state seems predominant. However, the difference in V between Nd0.9 and Nd0.6 is smaller than between La0.9 and La0.6. The volume of LSF is not directly comparable due to the different synthesis; it has a slightly lower volume per formula unit than La0.6, even though Sr2+ is larger than Ca2+ and the Glazer angles are smaller.
DFT calculations
DFT calculations were performed to further investigate the effect that A-site doping with Ca has on the atomic and electronic structures of the perovskites. In doped NdFeO3, this effect has been studied previously by Wang et al. (2015), treating the Nd 4f electrons in the frozen-core approximation. In a previous study by the authors, full potential [FP-(L)APW+lo] calculations for Nd0.5Ca0.5FeO3 were carried out. Here, we extend these calculations to Nd0.875Ca0.125FeO3, La0.875Ca0.125FeO3 and La0.5Ca0.5FeO3 for comparison.
The densities of states (DOS) of all simulated perovskites were calculated. Both undoped bulk materials LaFeO3 and NdFeO3 are insulating (each with a calculated band gap of 2.4 eV), while all doped materials show metallic behaviour. With Ca doping, the DOS gets shifted to the right with respect to the Fermi level due to the reduced number of valence electrons (exchanging La3+/Nd3+ with Ca2+ reduces this number, leading to an electron hole), i.e. states that are occupied in the bulk are empty in the doped material. The nature of the now-empty states together with the magnetic moments of the Fe atoms can be used to assess the effect of Ca doping on the electronic structure. In the case of La0.875Ca0.125FeO3 (Fig. 4, top), the DOS remains qualitatively the same compared with the bulk LaFeO3, apart from being shifted to the right. Both O-p and Fe-d states contribute to the empty states above the Fermi level, suggesting a delocalization of the electron hole over many O atoms (since there are 24 O atoms in the unit cell, which contribute to those states), as well as Fe atoms. The magnetic moments of Fe do not change (the changes are below 0.05 μB per atom), which indicates no change of oxidation state.
La0.5Ca0.5FeO3 (Fig. 4, bottom) shows further differences. While there is still a large contribution of the O-p states to the states above the Fermi level (indicating delocalization of the electron hole), two different 'types' of Fe atoms can be seen now: (i) Fe3+ with hardly any contribution to the empty states above the Fermi level and (ii) Fe4+ with empty states (both spin-up and spin-down). Those empty states of Fe4+ are evidence for partial oxidation, which is also supported by the reduction of the magnetic moments of these atoms (the spin-up and spin-down moments change from 4.0 to 3.4 and 3.6 μB, respectively; this change of the moment occurs due to changes in the occupation of the respective Fe states), which turn out to be the Fe atoms with slightly larger Fe-Ca distances. The moments of spin-up Fe3+ do not change with doping and spin-down Fe3+ exhibits a very slight reduction of about 0.06 μB in magnetic moment, which might be interpreted as a comparatively minuscule partial oxidation. This is further supported by a minor contribution of spin-down Fe3+ d states to the empty states above the Fermi level.
Nd0.875Ca0.125FeO3 [see Fig. S6 (top) in the supporting information] behaves similarly to La0.875Ca0.125FeO3, with the electron hole delocalized over many O atoms. However, slight changes of the magnetic moments (around 0.1 μB) indicate an additional partial oxidation of spin-up Fe atoms, which is not present in the La analogue. As shown previously by the authors, Nd0.5Ca0.5FeO3 [see Fig. S6 (bottom) in the supporting information] shows empty O-p and Fe4+-d states above the Fermi level, comparable to La0.5Ca0.5FeO3 (with slightly larger changes in the Fe moments of 0.6 μB). However, for this material, the Fe4+ moments of both spins change by the same amount, while the Fe3+ moments do not change at all compared with the bulk. Fig. 5 shows the changes in the calculated Fe-O distances in the perovskites containing 50% Ca at the A-site due to doping. Both bulk LaFeO3 (Fig. 5a) and bulk NdFeO3 (Fig. 5c) exhibit virtually undistorted Fe-O octahedra (the difference between the shortest and longest distances is of the order of 1%), while a clear tilting of the octahedra can be seen. In addition to the tilting, the doped perovskites [La0.5Ca0.5FeO3 in Fig. 5b and Nd0.5Ca0.5FeO3 in Fig. 5d] show distortion compared with the bulk distances. In both cases, two of the four Fe atoms of the unit cell (shaded blue and marked 'Fe4+') display a pronounced Jahn-Teller distortion, where the Fe-O distances along two axes of the octahedra contract by about 5-7%, while the bond along the third axis elongates by 3-7%, which serves as a strong indication of a change of the oxidation state to Fe4+. The magnetic moments of these Fe atoms are significantly reduced (see above), further hinting toward a partial oxidation. The second type of Fe atoms (shaded yellow and marked 'Fe3+') behave differently in the two materials. In Nd0.5Ca0.5FeO3, the distances along two axes remain unchanged compared with the bulk, while one distance gets elongated by about 5%. In La0.5Ca0.5FeO3, one Fe atom [marked with an asterisk (*) in Fig. 5b] behaves similarly to the Nd analogue, aside from a larger elongation, while for the second atom [marked with a double asterisk (**) in Fig. 5b], two bonds lengthen by 3 and 5%, respectively, while the third remains unchanged. This, together with the slightly changed magnetic moment of this atom, might be a further hint toward a very slight partial oxidation.
In the case of a smaller doping concentration (12.5%), no pronounced Jahn-Teller effect is found for either the La or the Nd perovskites. Here, the distortion most likely arises primarily due to size effects of the dopant.

Macroscopic structure of the doped perovskites

Fig. 6. SEM images of the pristine La0.9 sample with different magnifications. A foam-like morphology and globular cavities can be seen in image (a). The structure formed during synthesis and was later crushed into smaller pieces during the grinding step. In image (b), more details of the crumbled pieces are visible at higher magnification. Image (c) displays a section of a cavity wall. While there are pores inside, the outside parts are denser. The thickness ranges from ~230 to 500 nm. Very porous structures exist as well, as is shown in image (d) (a higher accelerating voltage of 20 kV was used for increased resolution). They are built up by crystallites of sizes around 60 nm, grown together and forming a network.

Fig. 6 shows images of the La0.9 sample. The powder consists of smaller crumbled pieces, some of them resembling globular cavities (Figs. 6a and 6b). These result from the synthesis, where the emerging gases (from combustion of the organic species) were responsible for the formation of a foam-like structure of the perovskite material during heating of the gel and calcination. Similar macroscopic structures after synthesis were observed for LaFeO3 by the group of Biniwale (Gosavi & Biniwale, 2010). Later, when the material was ground, the foam cavities were crumbled but retained their globular microstructure (one of the cavities is shown in more detail in Fig. S7 in the supporting information). A cross section of a cavity wall at high magnification (Fig. 6c) reveals that its thickness ranges between 230 and 500 nm. Furthermore, only the outside parts of the wall are dense, with the inside being quite porous. Besides the cavity walls, there exist very porous structures, probably grown inside the bubbles (Fig. 6d), consisting of networks of crystallites with sizes around 60 nm.
Although there are generally slight differences between the different examined materials, the other synthesized perovskites exhibit a very similar morphology compared to La0.9. The same crumbled pieces of the foam structure are found for La0.6 (Fig. S8), Nd0.9 (Fig. S9) and NdCo (Fig. 7). Also, the cavity walls in the undoped materials resemble the morphology of those in La0.9, with denser outermost layers built from distinguishable crystallites, a porous inner structure and similar thicknesses. On the other hand, the Co-doped perovskite material behaves slightly differently. While the inside of the cavity wall of the NdCo sample, as shown in Fig. 7b, is as porous as in the other samples, its outermost layer looks different. It is still denser than the inside and very smooth with hardly distinguishable crystallites. Unlike the samples without doping, several pores with diameters up to 70 nm reach the surface. The wall thickness at this part of the sample varies widely. Parts with a similar thickness (400 nm) as in La0.9, but also with very thin walls (around 150 nm thick) and very thick parts (around 2.8 mm thick), where the walls of several cavities meet, can be observed.
A possible explanation for the slightly changed surface morphology in NdCo is that more pronounced sintering occurred in the calcination step, resulting in the smoothly fused crystallites. At the same time, the holes where the emerging gases left the structure were not clustered around bumps (as they were in La0.9; Fig. S7), but were distributed more uniformly over the whole surface. Similar observations have been made in a study on the grain growth and porosity of perovskites by Tan et al. (2003), where sintering resulted in denser structures.
Characterization of exsolution properties
To investigate the behaviour of the doped perovskites under reducing conditions and to determine if exsolution of Fe and/or Co can be achieved, in situ XRD studies were performed. For these, the samples were mounted in the reaction chamber of a diffractometer in Bragg-Brentano geometry. Each experiment started with a pretreatment in O2 at 600 °C for 30 min. This was done in order to always have a fully oxidized material with the least possible amount of oxygen vacancies, and thus a defined starting point. After cooling to room temperature, the atmosphere was changed to reducing conditions. For this, H2 was guided through a bubbler filled with water at atmospheric pressure. In this way, a ratio of H2 to H2O of around 32:1 was obtained for the humidified H2, which corresponds to an equivalent oxygen partial pressure p(O2) of only 5.3 × 10^−26 bar (for details, see §S6 in the supporting information). The role of water in the gas phase is to provide an oxygen source for a well-defined p(O2). This is required to be able to compare our results exactly with future electrochemical studies on these materials and with other work on exsolution in the literature (Opitz et al., 2017).
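The equivalent p(O2) follows from the H2/H2O equilibrium; the sketch below uses a common linearized Ellingham expression for the Gibbs energy of water formation, so it reproduces the order of magnitude of the value quoted above rather than the exact number from the supporting information.

```python
# Equivalent oxygen partial pressure of humidified H2 via H2 + 1/2 O2 = H2O(g).
import math

R = 8.314            # J mol^-1 K^-1
ratio = 1.0 / 32.0   # p(H2O)/p(H2) from the room-temperature bubbler

def p_O2(T):
    dG = -247_500 + 55.85 * T       # J/mol, linearized Ellingham approximation
    K = math.exp(-dG / (R * T))     # K = p_H2O / (p_H2 * p_O2**0.5)
    return (ratio / K) ** 2         # bar, for ~1 bar total pressure

print(f"p(O2) at 625 C: {p_O2(898.15):.1e} bar")  # ~1e-26 bar
```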
Experiments were conducted with a constant gas flow of around 0.5 l min⁻¹; the temperature was increased stepwise, and at each step an XRD measurement was taken after waiting for 30 min. The reflections of the new phases in the diffractograms were assigned according to database entries in the ICDD PDF-4+ 2019 database (ICDD, 2018) and are, in the following figures, labelled with the respective hkl indices. The reflections of the perovskite phase are labelled with 'P'; for their hkl indices, refer to Fig. 2. The results of the in situ reduction of La0.9 can be seen in Fig. 8. The appearance of a diffraction peak at 2θ of 44.4° was observed, which could be assigned to metallic Fe, thus indicating that exsolution occurred, i.e. formation of metal (nano)particles on the surface of the perovskite. It is first clearly distinguishable at 625 °C. Furthermore, no decomposition of the perovskite (formation of an La2O3 phase) took place over the whole temperature range tested (up to 675 °C).

Figure 8
Diffractograms of the La0.9 sample during the reduction with wet H2 at increasing temperature. The perovskite structure (peaks labelled 'P'; for the respective hkl indices, cf. Fig. 2) stayed intact over the whole temperature range and no La2O3 phase was formed. A small peak of elemental Fe appeared (green border), first clearly seen at 625 °C, indicating Fe nanoparticle exsolution.
During the in situ reduction of La0.6 with humidified H2, the diffraction peak corresponding to metallic Fe (2θ of 44.4°) started to appear at slightly higher temperatures (650 °C) than for La0.9 (Fig. S10), but generally showing the same trend as observed before with less Ca doping. Up to 675 °C, there was no decomposition of the material and no formation of CaO was observed, although the Ca content of this material was much higher.
To prove that exsolution actually occurred, SEM measurements were performed after reduction of La0.6 in wet H2 (Fig. 9). The sample used for the in situ XRD experiment was investigated directly after the last temperature step at 675 °C (Fig. 9b). Additionally, another batch of the pristine sample was freshly reduced in a flow reactor at 650 °C, i.e. the temperature at which exsolution was first observed in the in situ XRD experiment (Fig. 9a). In both samples, nanoparticles can be seen decorating the perovskite surface. After reduction at 650 °C, they are regularly shaped, some of them with the geometries of smoothly grown crystals, distinguishing them from crumbled pieces of the perovskite material after grinding. They have diameters of around 210 nm. After reduction at 675 °C, the particles are larger on average, with more variation in shape. Some still have quite regular shapes, with diameters around 270 nm, but needle-like particles can also be observed. The largest needles have lengths of around 630 nm. The higher temperature facilitated further growth of the nanoparticles after their formation, partly needle-like, but no obvious difference in the number of formed particles was found. Needle-like Fe exsolution from LSF has already been reported by Thalinger et al. (2015).
One particle of the La0.6 sample after reduction at up to 675 °C was further investigated with EDX. A line scan was performed across the exsolved particle. The net intensities of the Fe L, O K and La M peaks along the line are displayed in Fig. 10. At the position of the particle, the Fe peak increases significantly, while at the same time the O peak (strongly) and the La peak (slightly) decrease. This confirms the assumption of Fe nanoparticle exsolution, and that the Fe phase observed in the diffractograms can indeed be attributed to the nanoparticles visible with SEM. These findings prove that exsolution of Fe nanoparticles upon reduction was successful.
Figure 9
SEM images of the La0.6 sample after reduction in humid H2 at (a) 650 °C and (b) 675 °C. The lower images show details of the images in the first row at a higher magnification (green box). Exsolved nanoparticles decorate the perovskite surface. At 650 °C, they are smaller, with diameters of ~210 nm; some are very regularly shaped, indicating controlled crystal growth. At 675 °C, the nanoparticles are larger on average; while some retain the regular shape, with diameters of ~270 nm, a needle-like growth set in, the longest needles being around 630 nm in length.

Figure 10
SEM secondary electron image of the La0.6 sample after reduction in wet H2 at 675 °C. A regularly shaped particle is visible in the centre. An EDX scan was performed along the green line crossing the particle. The overlaid diagram shows the net intensity of the Fe, O and La peaks in the recorded EDX spectra. When crossing the particle in the centre of the image, the Fe L peak increases, while the O K peak and the La M peak decrease. The positions of the extremal slopes of the Fe signal are marked with yellow lines. These positions coincide with the edges of the particle in the secondary electron image. The particle diameter in the direction of the line is 350 nm. In contrast, the signals do not change when crossing the particle on the right, which is just a crumbled piece of the perovskite material.

The diameter of the particle in the direction of the scan can be determined, considering that, due to the limited resolution of EDX, the Fe signal increase is not a step but follows a slope. As the signal for one pixel originated from a nonzero area, the extent of the signal increase reflects the percentage of the signal originating from the particle. When the beam was at the position of the centre of the particle, the Fe peak reached its maximum value. The edges of the particle are assumed to be located at the positions where the slope of the Fe signal has its maximum value (which means a maximum change of Fe content). Thus, a diameter of 350 nm was determined, agreeing well with the diameter obtained from the secondary electron image.
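The maximum-slope edge criterion described above is straightforward to reproduce numerically. The sketch below is a minimal illustration with synthetic data (the error-function profile and its width parameter are assumptions, not measured values): the particle edges are taken where the derivative of the Fe signal is most extreme, and the diameter is the distance between them.

```python
import numpy as np
from scipy.special import erf

# Synthetic EDX line scan: a 350 nm particle whose sharp edges are blurred
# by the finite EDX interaction volume (sigma = 40 nm is an assumed
# resolution, not a value from the paper).
x = np.linspace(0.0, 1000.0, 2001)        # position along the line (nm)
centre, diameter, sigma = 500.0, 350.0, 40.0
left_edge, right_edge = centre - diameter / 2, centre + diameter / 2
fe = 0.5 * (erf((x - left_edge) / sigma) - erf((x - right_edge) / sigma))

# Maximum-slope criterion: the edges sit where the derivative of the Fe
# signal is most extreme (real, noisy data would be smoothed first).
slope = np.gradient(fe, x)
est_left = x[np.argmax(slope)]            # steepest rise of the Fe L signal
est_right = x[np.argmin(slope)]           # steepest fall
print(f"estimated diameter: {est_right - est_left:.0f} nm")  # -> 350 nm
```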
The line scan also crosses another particle, but without a significant change in any of the signals. This particle is probably a finely crumbled piece of the perovskite material. It also lacks a regular morphology, while the exsolved and grown Fe particle is smoothly shaped and shows some symmetry (the projection resembles a polygon with six corners and parallel edges).
For the Nd-based A-site-doped materials, in situ XRD experiments were performed in the same way in humidified H2. Exsolution (Fe peak at 2θ of 44.4°) from an intact perovskite structure was seen for both Nd0.9 (Fig. 11) and Nd0.6 (Fig. S11) at 650 and 675 °C, respectively. This increase of the onset of exsolution upon exchanging La with Nd confirms the expected increase of the material stability, which was stated in the Introduction. Please note that, owing to an instrumentation problem during the experiment on Nd0.9, the temperature was erroneously held for 5 h (instead of 30 min) at 650 °C before the diffractogram was recorded. Thus, the Fe peak had already clearly evolved. In contrast, for Nd0.6, as for most other samples, it was only very small at the first temperature where it appeared. This suggests that, in addition to temperature, the reduction time is quite important for the size of the resulting nanoparticles. For both materials, up to the highest temperatures investigated, the perovskite structure was stable and none of the decomposition products (Nd2O3 or CaO) could be observed.
For the investigation with SEM, the Nd0.9 sample was again freshly reduced in a flow reactor in humidified H2 at 650 °C (Fig. 12). The sample surface is observed at an angle; thus, it can be clearly seen that the particles stick out from the surface. They had diameters of approximately 200 nm. To monitor variations in the composition, an EDX mapping was performed in the region marked with the green box. The net intensity of the Fe L spectral peak was used to characterize the Fe distribution in the material, shown as an overlay on the secondary electron image. The positions of the particles coincided well with an increased Fe signal (and a lower Nd M peak, which is not shown here). Similar to La0.6 (see Fig. 10), this strongly suggests the Fe nature of the formed particles.
In the in situ XRD experiment with the doped perovskite NdCo (Fig. 13), a new Bragg peak arose already at a temperature of 550 °C, at a 2θ value of 44.8°. The peak grew with increasing temperature and shifted to lower angles; at 650 °C, its maximum was at a 2θ value of 44.5°. In contrast to the Co-free materials, the reflection was significantly more intense, indicating a larger amount of the formed phase. Similar to the other samples, this phase could be ascribed to exsolution of a metallic bcc phase, but it was not clearly distinguishable whether it was a pure phase or an Fe–Co alloy. Since exsolution did not occur in the materials without Co doping at such low temperatures (compare with Nd0.6, with an exsolution onset at 675 °C), and since cobalt oxides are generally easier to reduce than iron oxides, it is rather likely that, especially at lower temperatures, the phase contained predominantly Co. This assumption is also supported by the position of the bcc (110) reflection at a significantly higher 2θ value than in the samples without doping (44.8° at 550 °C for NdCo versus 44.4° at 650 °C for Nd0.9 and Nd0.6). It is worth mentioning that the temperature difference is not sufficient to explain this difference (changes of ca 0.05° per 373 °C are reported for both Co and Fe in the respective temperature range). Rather, the positions of the (110) reflection for pure Fe and Co are reported in the literature to differ by 0.40 ± 0.05°.
Also, the angle shift to 44.5° at 650 °C (which is again more pronounced than would be expected from thermal expansion alone) could be explained by an increase of the Fe content in the exsolved metal particles with increasing temperature. At higher temperatures, when Fe could also be reduced, the composition of the particles would shift toward an Fe-enriched alloy. Because Fe⁰ has a larger atomic radius than Co⁰, an increase of the Fe content in an Fe–Co alloy is expected to cause an increase of the unit-cell dimensions and thus a decrease of the diffraction angle (cf. Bragg's law). In a related study, Chen et al. (2018) recently reported exsolved Co1−xFex particles from La0.5Sr0.5Co0.45Fe0.45Nb0.1O3−δ, with an increasing Fe content at higher temperatures, in accordance with the conclusions reported here.

Figure 11
Diffractograms of the Nd0.9 sample during the reduction with wet H2 at increasing temperatures. The material was completely stable up to a temperature of 625 °C. However, after holding the temperature at 650 °C for 5 h, a peak (green border) corresponding to an elemental Fe phase could be observed, while the perovskite structure (peaks labelled 'P'; for the respective hkl indices, cf. Fig. 2) remained intact.

Figure 12
SEM images of the Nd0.9 sample after reduction in wet H2 at 650 °C. The image on the right is a detail (green box) of the image on the left at a higher magnification. Exsolved nanoparticles are visible at the perovskite surface, with diameters of around 200 nm. An EDX mapping was performed of the magnified area. The Fe distribution, characterized by the net intensity of the Fe L peak, is shown as an overlay. The particles visible in the secondary electron image coincide with a higher Fe concentration.
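The connection between alloy composition and peak position invoked here follows directly from Bragg's law. The sketch below converts a bcc (110) peak position into a lattice parameter; the Cu Kα wavelength is an assumption (the instrument is not specified in this excerpt), so the absolute numbers are illustrative only.

```python
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha 1 -- an assumed, typical choice

def bcc_lattice_parameter(two_theta_deg, hkl=(1, 1, 0)):
    """Lattice parameter a of a cubic phase from one reflection.

    Bragg's law: lambda = 2 d sin(theta), with d = a / sqrt(h^2 + k^2 + l^2).
    """
    theta = math.radians(two_theta_deg / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))
    return d * math.sqrt(sum(i * i for i in hkl))

# The (110) peak shifting from 44.8 deg to 44.5 deg corresponds to a growing
# unit cell, consistent with Fe (the larger atom) enriching a Co-rich particle:
for two_theta in (44.8, 44.5, 44.4):
    print(f"2theta = {two_theta:.1f} deg -> a = "
          f"{bcc_lattice_parameter(two_theta):.4f} A")
```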
During the in situ reduction, the perovskite structure remained intact over the whole temperature range tested, but a CaO phase (peaks at 2θ values of 37.2° and 53.5°) evolved, starting from 600 °C. Interestingly, the appearance of this phase correlates with the onset of the bcc (110) shift, which was interpreted above to be caused by (enhanced) Fe exsolution. This segregation of CaO may thus be triggered by the A-site excess stoichiometry in the perovskite resulting from proceeding exsolution of B-site cations (predominantly Fe). A similar segregation of an alkaline earth metal in a perovskite material has already been reported for Sr in LSF (Koo et al., 2018) and for alkaline-earth-metal-doped lanthanum manganite (Kim et al., 2020). However, there is a temperature window between 550 and 575 °C where exsolution occurs without CaO segregation. These would be the ideal conditions for nanoparticle exsolution and for further catalytic applications.
Additionally, NdCo was freshly reduced in a flow reactor in humidified H2 before it was investigated with SEM to prove that exsolution was successful (Fig. 14). A temperature of 575 °C was chosen for the reduction, to ensure being in the ideal temperature window for exsolution without CaO segregation. Very small and homogeneously distributed nanoparticles are visible. They are regularly shaped, with diameters around 50 nm.
As a comparison, commercially available LSF was also tested for exsolution (see Figs. S12 and S13 in the supporting information). Here, exsolution of Fe (Bragg peak at 44.5°) had already started at 425 °C. The perovskite structure was intact and no segregation of an Sr-containing phase could be observed over the whole temperature range (up to 500 °C).
Comparison of perovskite materials
In Table 5, the materials are compared with respect to the lowest temperature at which exsolution occurred during the in situ XRD experiments. This value reflects the stability of the respective materials and how easily exsolution can be achieved.
Several trends can be observed. Using Nd on the A-site instead of La with the same amount of Ca doping slightly increased the exsolution temperature by 25 °C (from 625 to 650 °C for La0.9 and Nd0.9, and from 650 to 675 °C for La0.6 and Nd0.6). As La and Nd are chemically similar, an explanation could be based on the crystal structure of the materials. Due to their size, the La ions fit better into the perovskite structure, whereas the smaller Nd ions lead to more pronounced distortions. This is reflected in larger values for the Glazer angles (most relevant for this consideration is angle a, as angle b only changes in the case of Nd0.6) of the Fe coordination octahedra (see Fig. 3 and Table 4) of the samples with Nd compared to those with La. A less distorted structure seems to facilitate the exsolution process, but it is not yet clear whether this is due to thermodynamics (a less stable perovskite structure) or to a kinetic effect (easier B-metal migration through a structure with fewer distortions). Similarly, exchanging Ca with Sr (while leaving the amount of doping constant) resulted in an exsolution temperature of 425 °C for LSF (see supporting information), a significant decrease of 225 °C with respect to La0.6 (650 °C). In this case, the change of structure accompanying the change of the A-site element even led to a different symmetry (orthorhombic for La0.6 versus rhombohedral for LSF). The rhombohedral structure deviates less from the ideal cubic perovskite structure. In accordance with the proposed effect of easier exsolution from a less distorted structure, this symmetry change could explain the large temperature difference between Ca and Sr doping. Another possible factor is the crystallinity and morphology of the sample, which are different for the commercial LSF material than for the freshly prepared perovskite materials.

Figure 13
Diffractograms of the NdCo sample during the reduction with humid H2 at increasing temperature. A new phase, which could be assigned to bcc Fe/Co, was formed above 550 °C; the corresponding (110) reflection is marked by the green box. A shift of this peak to lower angles was observed at higher temperatures, which was stronger than expected from thermal expansion (compare its position to the red line). At 600 °C, a CaO phase (brown box) appeared. Both the metallic bcc phase and the CaO phase were more prominent at higher temperatures, while the perovskite structure (reflections labelled 'P'; for the respective hkl indices, cf. Fig. 2) stayed intact.

Figure 14
SEM images of the NdCo sample after reduction in humidified H2 at 575 °C. The image on the right is a detail of the image on the left (green box) at higher magnification. Uniformly distributed exsolved nanoparticles up to 50 nm in size decorate the perovskite surface. There are more particles and they are smaller than for Nd0.9 (diameters around 200 nm). Similar to Fig. 10, a crumbled piece of the perovskite is visible in the lower right corner of the magnified area.
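The size-mismatch argument above is often made quantitative with the Goldschmidt tolerance factor t = (r_A + r_O)/(√2 (r_B + r_O)); t < 1 signals the octahedral tilting discussed here. The sketch below evaluates t for the La- and Nd-based compositions using Shannon ionic radii; the radii values and the simple concentration-weighted averaging of the A-site are standard assumptions, not values taken from the paper.

```python
import math

# Shannon ionic radii in angstrom (12-coordinate A-site, 6-coordinate
# B-site and O2-); these literature values are assumptions.
R = {"La3+": 1.36, "Nd3+": 1.27, "Ca2+": 1.34, "Fe3+": 0.645, "O2-": 1.40}

def tolerance_factor(r_a, r_b, r_o=R["O2-"]):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) (r_B + r_O))."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

for name, (ln, x) in {"La0.9": ("La3+", 0.1), "La0.6": ("La3+", 0.4),
                      "Nd0.9": ("Nd3+", 0.1), "Nd0.6": ("Nd3+", 0.4)}.items():
    # concentration-weighted average A-site radius, e.g. 0.9*La + 0.1*Ca
    r_a = (1 - x) * R[ln] + x * R["Ca2+"]
    print(f"{name}: t = {tolerance_factor(r_a, R['Fe3+']):.3f}")
# t < 1 throughout, and smaller for Nd than for La -> stronger octahedral
# tilting, consistent with the larger Glazer angles of the Nd compounds.
```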
The results seem to contradict those of other groups, who found the opposite trend: that higher symmetry increases the exsolution temperature (Steiger et al., 2019) and that larger distortions due to strain enhance exsolution (Han et al., 2019). Therefore, further experiments are necessary to fully understand the observed behaviour.
The change of the amount of Ca doping also changed the distortion. In the case of La0.9 and La0.6, a greater amount of Ca increased the distortion. As before, this increased the exsolution temperature, from 625 °C for La0.9 to 650 °C for La0.6. However, a contrary trend was observed for Nd0.9 and Nd0.6. Here, the higher Ca content slightly decreased the distortion (the larger Glazer angle a is compensated by the smaller angle b). Nevertheless, the exsolution temperature was still higher for Nd0.6 (675 °C) than for Nd0.9 (650 °C). Also for La0.9 and La0.6, the distortions were similar, and the difference was probably not big enough to explain the increase of exsolution temperature alone. Apparently, the changed electronic properties (see §3.3) may play a role here, but again the effect is not yet understood.
Another observation was that B-site doping with Co reduces the exsolution temperature. This metal is more easily reducible than Fe. Thus, the required temperature was lower than for Nd0.6, which had the same A-site composition (550 °C for NdCo versus 675 °C for Nd0.6). Jiang et al. (2020) found exsolution of a Co–Fe alloy from La0.9Fe0.9Co0.1O3−δ even at 500 °C after reducing for 4 h. Their study agrees with ours in that exsolution at that temperature is not possible without Co doping. This is also in accordance with theoretical considerations by Kwon et al. (2017) regarding exsolution trends. Furthermore, with Co doping the resulting nanoparticles are smaller (around 40 nm for NdCo) and more homogeneously distributed. The Co-doped perovskite has the lowest exsolution temperature of all synthesized perovskites, which makes it an interesting starting point for further investigations. Similar to the effect of a different amount of Ca doping, the varying exsolution temperatures for different B-site dopings could not be explained by a changed perovskite structure, but by the reducibility of the B-site cations. Both parameters, i.e. perovskite structure and B-site cation reducibility, defined the resulting exsolution behaviour.
In addition to the exsolution properties, the reduction behaviour of the materials differed in their tendency for CaO segregation. In general, Kim et al. (2020) showed that segregation of the alkaline earth A-site dopant can occur under both oxidizing and reducing conditions. However, in this study, CaO was only observed during reduction in the sample with Co B-site doping. If CaO segregation happened with the other materials and/or under an oxidizing atmosphere, the amount of CaO was too low to detect with the methods used. For the Co-doped material, the segregation started at 600 °C, indicating that the perovskite structure was no longer completely stable and a partial decomposition set in. In this case, the segregation accompanied the formation of an exsolved metallic phase. This leads to the conclusion that CaO segregation is driven not only by oxygen vacancy formation (as described by Kim et al.) but also by the necessity to balance the stoichiometry of the perovskite structure: B-site cation exsolution would otherwise result in an A-site excess, which is relieved by segregation of a sufficiently large, and hence detectable, amount of CaO. The reason why no CaO was found for the other samples might be the amount of exsolved metal, which was significantly higher for the Co-doped sample, where the better reducibility facilitated the exsolution process. A possibility to reduce CaO segregation might be the use of A-site-deficient perovskite materials as a starting point for exsolution. Here, exsolution of B-site cations would establish the correct stoichiometry, thus strengthening the stability of the perovskite structure instead of decreasing it (Sun et al., 2015).
Conclusions
The investigated perovskite materials, i.e. La0.9Ca0.1FeO3−δ, La0.6Ca0.4FeO3−δ, Nd0.9Ca0.1FeO3−δ, Nd0.6Ca0.4FeO3−δ and Nd0.6Ca0.4Fe0.9Co0.1O3−δ, were successfully synthesized, which was proven by ICP-OES (composition) and XRD (structure) measurements. Their crystallographic structures were determined by Rietveld refinements of the XRD patterns, and the results were supported by DFT calculations.
The structures are isotypic and can be derived from the ideal cubic perovskite structure. The average A-site ion radius is too small for a cubic perovskite structure, which results in tilting of the Fe coordination octahedra according to the Glazer system a⁻b⁺a⁻. This reduces the symmetry and the unit cell becomes orthorhombic. The extent of tilting depends on the mismatch of the ionic radii of the involved cations.
Exchanging the larger La3+ ion for the smaller Nd3+ ion increases the distortions and reduces the cell volume.
The effect of Ca doping could be understood by means of DFT calculations, which revealed that the addition of Ca changes the electronic structure of the materials (visible in the DOS). The undoped bulk perovskites are insulating, but due to the introduction of Ca2+ on the A-site of the perovskite, electron holes are created as charge compensation. These holes are delocalized over many O atoms (p states) and Fe atoms (d states); thus, the A-site-doped perovskites show metallic behaviour. For the materials with lower Ca doping, the DOS changes are mainly limited to shifts relative to the Fermi level, while the DOS shape remains qualitatively almost unchanged. In contrast, a partial oxidation of Fe is found for the perovskites with more Ca. Here, half of the Fe atoms change their oxidation state from Fe3+ to Fe4+. This is supported by a significant reduction of magnetic moments, accompanied by a Jahn-Teller distortion of their O-atom coordination octahedra, where the Fe-O distances along two axes are contracted while the bond along the third axis is elongated.
The exsolution behaviour of the materials was investigated with in situ XRD experiments and SEM. For all five examined perovskites, Fe (or Fe/Co in the case of the Co-doped material) nanoparticle formation on the surface could be observed upon controlled reduction in a humidified hydrogen atmosphere. The particle sizes mostly ranged from 150 to 350 nm for the materials without B-site doping, while with Co doping the particles were smaller (around 50 nm) and more homogeneously distributed. Easier exsolution (already at lower temperatures) was found for less distorted structures and less Ca doping. Doping with the more easily reduced Co strongly facilitated the exsolution process and significantly reduced the temperature necessary for exsolution. The observations suggest a preferential exsolution of Co over Fe at low exsolution temperatures. The easy exsolution of well-dispersed Co nanoparticles makes the material a highly promising candidate as a catalyst for reactions related to chemical energy conversion.
The results show that fine-tuning of the perovskite composition will allow tailored exsolution of nanoparticles. The exsolution process (necessary conditions and resulting exsolved nanoparticles) can be influenced by structural distortions, introduction of A-site acceptor dopants and doping with more or less reducible and catalytically active B-site transition metals. This knowledge is the basis for a rational material design and can be used to create catalysts with properties well adjusted to their intended application and operation environment.
Fatigue properties of high Nb TiAl alloy
Fatigue properties of the new generation of TiAl alloys with high Nb content are studied. Comparison with a previous alloy containing 2 at.% Nb shows that the new alloy better resists cyclic deformation at high temperatures. Microstructural observation proved that at 750 °C the easiest deformation mode is the glide of ordinary dislocations, followed by superdislocation glide and twinning. Nevertheless, all three modes appear to be active.
Introduction
While TiAl alloys are already used in some applications, their development continues, since their potential as a material with low density and good high-temperature properties (both strength and corrosion resistance) is limited by brittleness and difficult machinability. The third generation of these alloys contains a high Nb addition, typically 7-8 at.%.
In most foreseen applications, the low-cycle and/or high-cycle fatigue resistance as well as the crack-growth resistance of the material are important [1]. Data on the low-cycle fatigue properties of the high-Nb-content alloy are rather scarce. Christ et al. [2,3] observed stable cyclic behaviour, satisfactory strength and lifetime at high temperatures, and environmental surface embrittlement which increases with temperature.
The first results measured in our laboratory have been reported recently [4]. The microstructure of the material and the deformation conditions (symmetrical strain-controlled tests) are similar to those used by Gloanec et al. [5], who studied a previous-generation TiAl alloy with 2 at.% Nb. Their results are used to compare the behaviour of the low- and high-Nb-content alloys and are complemented by TEM observation of the dislocation structure.
Material and specimens
The material was delivered in the form of an ingot 70 mm in diameter and 1.2 m in length, prepared by casting and subsequent hot isostatic pressing. The chemical composition of the material in at.% is 44Al - 48Ti - 7.8Nb - 0.2Ni. The majority of the volume consists of a fine lamellar microstructure of γ and α2 phases. The grains, of irregular but not elongated shape, have an average size of about 1 mm. Some larger γ-phase islands are present on the grain boundaries; the microstructure can thus be described as nearly lamellar (Fig. 1). The structure is not completely homogeneous across the diameter of the ingot; the fatigue specimens were prepared from the outer regions of the ingot with their axes parallel to the ingot axis. Cylindrical specimens 6 mm in diameter were carefully mechanically and electrolytically polished.
Mechanical testing
Cyclic deformation tests were performed in an MTS 810 servohydraulic machine. The strain amplitude was kept constant, as was the strain rate of 2×10⁻⁴ s⁻¹. The loading cycle was symmetrical (Rε = -1). Tests were performed at room temperature and at temperatures of 700 °C and 750 °C in air. Three thermocouples were used to monitor the homogeneity and stability of the temperature during testing.
Microscopy
Light microscopy and SEM were used for inspection of the as-received microstructure and the fatigue fracture surfaces. Thin foils for TEM were prepared from slices cut perpendicularly to the specimen axis. The direction of the axis was preserved during foil preparation, which consisted of mechanical polishing and final thinning in a double-jet electropolishing apparatus. The foils were inserted into the Philips CM 12 TEM such that the holder axis was parallel to the specimen axis. It was thus possible to relate crystallographic planes and directions of individual grains to the specimen geometry and to calculate Schmid factors for the observed deformation modes.
Cyclic response and fatigue life
At all three test temperatures, a stable cyclic response of the material was observed. Almost the same behaviour was found by Christ and Bauer [3], who observed slight cyclic hardening at temperatures below 500 °C. On the contrary, significant cyclic hardening was observed at RT in the 2 at.% Nb alloy [5]. At 750 °C, a stable cyclic response is found in both types of alloy. Fatigue life curves are shown in Fig. 2. If the total strain amplitude is considered, the fatigue life curves of both materials overlap remarkably (Fig. 2a). Nevertheless, the fatigue lifetime of the 8Nb alloy is one to two orders of magnitude longer if the saturated stress amplitudes are compared (see the derived Wöhler curve in Fig. 2b). The reason for this is the higher strength of the 8Nb alloy in comparison with the 2Nb alloy: the yield stress is higher and the cyclic deformation curve is shifted to higher stresses for the 8Nb alloy. This means that at the same stress amplitude, the plastic strain amplitude is lower for 8Nb and the fatigue life is longer. Both life curves at 750 °C change their slopes at about NF ~ 2000. The two observed deformation regimes are characterized by different exponents of the Manson-Coffin and Basquin laws [4]. The same sudden change in slope of the fatigue life curves was also observed at T = 700 °C. The lower slope of the Basquin curve at low stresses results in an even more important improvement of fatigue life at low loading levels. This suggests that the main deformation mode is the slip of ordinary dislocations, in spite of the fact that the slip of superdislocations and twinning are significantly more favoured by the crystallographic orientation, as can be seen from the Schmid factors shown in Table 1. The two latter modes require a higher flow stress, but apparently it is possible to activate them. This is important for the appearance of interlamellar fracture since, according to the von Mises criterion, five independent deformation modes must be available to accommodate deformation between neighbouring grains or lamellae.

Figure 3. Micrographs of the same area using four different diffraction vectors g. Specimen cycled with εa = 0.41% at T = 750 °C up to fracture (NF = 438).
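The Basquin and Manson-Coffin laws mentioned above relate stress or plastic-strain amplitude to fatigue life, σa = σ'f (2Nf)^b and εap = ε'f (2Nf)^c, so a slope change on log-log axes means a change in exponent. A minimal sketch of extracting the Basquin exponent from life data is given below; the numerical values are invented for illustration and are not the measured data of this study.

```python
import numpy as np

# Basquin law: sigma_a = sigma_f' * (2 N_f)^b. On log-log axes this is a
# straight line, so b is the slope of log(sigma_a) vs log(2 N_f).
# The (N_f, sigma_a) pairs below are invented for illustration only.
n_f = np.array([500.0, 1000.0, 2000.0, 10000.0, 50000.0])
sigma_a = np.array([560.0, 530.0, 505.0, 470.0, 440.0])  # MPa

b, log_sigma_f = np.polyfit(np.log10(2 * n_f), np.log10(sigma_a), 1)
print(f"Basquin exponent b ~ {b:.3f}, sigma_f' ~ {10**log_sigma_f:.0f} MPa")

# A slope change at N_f ~ 2000 (as reported at 700 and 750 degC) would be
# captured by fitting the two regimes separately:
for mask, label in ((n_f <= 2000, "N_f <= 2000"), (n_f >= 2000, "N_f >= 2000")):
    b_i, _ = np.polyfit(np.log10(2 * n_f[mask]), np.log10(sigma_a[mask]), 1)
    print(f"{label}: b ~ {b_i:.3f}")
```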
Conclusions
• The TiAl alloy with high Nb content is cyclically stable in the investigated temperature interval (RT - 750 °C).
• The fatigue life curves of the low- and high-Nb alloys are similar in the NF vs εa representation. In the NF vs σa representation, the fatigue life of the high-Nb alloy is shifted by more than one order of magnitude to longer lives.
• Glide of ordinary dislocations appears to be the easiest deformation mode at 750 °C.
Large-cage assessment of a transgenic sex-ratio distortion strain on populations of an African malaria vector
Background: Novel transgenic mosquito control methods require progressively more realistic evaluation. The goal of this study was to determine the effect of a transgene that causes a male-biased sex ratio on Anopheles gambiae target populations in large insectary cages.
Methods: Life history characteristics of Anopheles gambiae wild-type and Ag(PMB)1 (aka gfp124L-2) transgenic mosquitoes, whose progeny are 95% male, were measured in order to parameterize predictive population models. Ag(PMB)1 males were then introduced at two ratios into large insectary cages containing target wild-type populations with stable age distributions and densities. The predicted and observed proportions of females in the large cages were compared. A related model was then used to predict the effects of male releases on wild mosquitoes in a West African village.
Results: The frequency of transgenic mosquitoes in target populations reached an average of 0.44 ± 0.02 and 0.56 ± 0.02 after 6 weeks in the 1:1 and 3:1 release-ratio treatments (transgenic male:wild male), respectively. Transgenic males caused sex-ratio distortion of 73% and 80% males in the 1:1 and 3:1 treatments, respectively. The number of eggs laid in the transgenic treatments declined as the experiment progressed, with a steeper decline in the 3:1 than in the 1:1 releases. The results of the experiment are partially consistent with the predictions of the model: effect size and variability did not conform to the model in two out of three trials; effect size was over-estimated by the model, and variability was greater than anticipated, possibly because of sampling effects in restocking. The model estimating the effects of hypothetical releases on the mosquito population of a West African village demonstrated that releases could significantly reduce the number of females in the wild population. The interval between releases is not expected to have a strong effect.
Conclusions: The biological data produced to parameterize the model, the model itself, and the results of the experiments are components of a system to evaluate and predict the performance of transgenic mosquitoes. Together these suggest that the Ag(PMB)1 strain has the potential to be useful for reversible population suppression while this novel field develops.
Background
Many bloodsucking arthropods are efficient vectors of pathogens responsible for human diseases worldwide [1]. Their genetic manipulation as a promising tool to control vector-borne diseases is promoted by the lack of vaccines for the majority of the infections they transmit, the spread of insecticide and drug resistance [2], and the expense of the vector control methods currently in place. Malaria is the arthropod-borne disease that causes the most mortality and morbidity; it is transmitted by Anopheles mosquitoes, primarily in tropical and sub-tropical environments [3]. Population suppression and population replacement [4] are two transgenic mosquito strategies that are being proposed to complement current malaria control methods, and in the past decade researchers have engineered Anopheles spp. strains able to block parasite development [5,6] or bearing genes for population control [7,8]. To be effective at low cost, these strategies must rely on transgenes able to spread through target populations with super-Mendelian inheritance, i.e. 'gene-drive' systems [9,10]. This is predicted to result in a high transgene prevalence with relatively small field releases [11,12].
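To make the 'spread with super-Mendelian inheritance' idea concrete, the following minimal sketch iterates a deterministic allele-frequency recursion for an idealized, cost-free homing drive under random mating; the homing rate and the starting frequency are illustrative assumptions, not parameters from the cited studies.

```python
def next_freq(p, homing=0.9):
    """One generation of an idealized, cost-free homing drive.

    Heterozygotes transmit the drive allele with probability (1 + e) / 2
    instead of the Mendelian 1/2; random mating is assumed, so the next
    allele frequency is p^2 + p*q*(1 + e).
    """
    q = 1.0 - p
    return p * p + p * q * (1.0 + homing)

p = 0.01  # e.g. a small release giving a 1% starting allele frequency
for gen in range(1, 16):
    p = next_freq(p)
    if gen % 3 == 0:
        print(f"generation {gen:2d}: drive allele frequency = {p:.3f}")
# A 1% seeding approaches fixation within roughly a dozen generations,
# illustrating why small releases can suffice for driving constructs.
```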
The development of CRISPR-Cas9 technology recently allowed modification of the genomes of Anopheles stephensi [13] and Anopheles gambiae [14,15] with transgenic constructs that are able to spread (drive) in target populations, conferring resistance to Plasmodium falciparum infection or targeting female fertility genes. These proofs of principle for malaria control are tremendously promising, but a gap exists between the laboratory development and the field deployment of such technology, consisting in part of the lack of validated models that accurately predict the effects of transgenic insects in natural environments. The advances in driving transgenes represent a potential environmental and security concern [16], and researchers must demonstrate that their products can predictably and successfully reduce disease transmission and are safe for humans and the environment.
In order to obtain public and regulatory acceptance, the lab-to-field transition of transgenic mosquito strains is a multi-disciplinary, multi-step process, the final goal of which is to prove reliability in terms of effectiveness, safety, and feasibility for field deployment. An important part of the transition is testing in appropriately contained conditions that include increasingly realistic environments, together with the testing of strains with less powerful capabilities than driving transgenes, to minimize their spread when tested in endemic countries. Such strains may have less effect or be more technically demanding to deploy [17].
The predicted benefit of genetic interventions is generally based on models of varying sophistication, ranging from simple algebraic calculations [18] to spatially explicit elaborate mathematical models [19]. In some cases, models are developed as an aid to design experiments [20] and in others the model is developed and tested in simplified laboratory experiments by comparing their predictions with actual outcomes [21]. Differences between predictions and outcomes are an indication of the value of models since they can identify parameterization errors, due for example to differences between parameter values in small and large cages, or model over-simplification. An iterative process of model refinement and testing is recommended as a process to develop models that are useful for planning interventions that will be implemented in natural environments [17]. This process is an essential part of evaluation because the effects of variation in parameter estimation are naturally greater when extrapolated to larger scales of testing.
Because male mosquitoes neither transmit pathogens nor feed on humans, they can be released safely to introduce heritable characters into wild populations. Causing female infertility by inundation with sexually sterile males (Sterile Insect Technique, SIT) is a proven and widely used form of genetic control, but other methods have been proposed using transgenic insects. Among these are forms that bias the sex ratio toward males [7]. One such strain is the transgenic Ag(PMB)1 (aka gfp 124L-2) strain, which is characterized by a 95% male bias among the progeny of transgenic males (but not females), with no reduction in the number of eggs produced by female mates or in the egg-hatching rate [7]. Male bias in this strain is achieved by expression of a modified I-PpoI nuclease in the testes that cuts its 15 bp target site in the ribosomal DNA, resulting in chromosome breakage which, in An. gambiae, is generally located solely on the X chromosome [22]. Therefore, expression of I-PpoI results in a majority of the sperm in Ag(PMB)1 males carrying only a Y chromosome. Because female sex in Anopheles spp. is determined by an XX karyotype and maleness by an XY karyotype [23,24], matings by Ag(PMB)1 males in which the X chromosome has been cut result mostly in sperm carrying Y chromosomes and consequently a large majority of male progeny, half of which are transgenic.
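A small sketch may help make the expected offspring distribution explicit. It enumerates the (sex, genotype) classes for a hemizygous Ag(PMB)1 father crossed to a wild-type mother, using the 95% X-shredding efficiency quoted above; treating shredding as independent of transgene transmission is a simplifying assumption.

```python
from itertools import product

P_MALE = 0.95      # X-shredding: fraction of functional sperm carrying a Y
P_TRANSGENE = 0.5  # a hemizygous father transmits the construct to half

# Enumerate (sex, genotype) classes for Ag(PMB)1 father x wild-type mother.
# Independence of shredding and transgene carriage is a simplification.
classes = {}
for (sex, p_sex), (geno, p_geno) in product(
        [("male", P_MALE), ("female", 1 - P_MALE)],
        [("transgenic", P_TRANSGENE), ("wild-type", 1 - P_TRANSGENE)]):
    classes[(sex, geno)] = p_sex * p_geno

for (sex, geno), p in classes.items():
    print(f"{sex:6s} {geno:10s}: {p:.3f}")
# -> 47.5% transgenic males, 47.5% wild-type males,
#    2.5% transgenic females, 2.5% wild-type females
```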
As part of the progressive evaluation of a male-bias strain, large-cage studies were performed to determine the effect of regular releases of hemizygous Ag(PMB)1 males on stable age-distribution An. gambiae populations. The resulting data were compared with a model that predicted the outcomes of the large-cage studies. Subsequently, a field-informed model was applied to determine the effect of releases on mosquito populations in a West African village.
Mosquito strains
Two strains of An. gambiae were used for these studies: the transgenic gfp 124L-2 strain and the 'wild-type' G3 strain (MRA-112, Malaria Research and Reference Reagent Resource Center, Manassas, VA, USA). The G3 strain originated in The Gambia in 1975. The life history data reported and cited in this manuscript were all measured in this genetic background. The area where this strain originates is known to contain high levels of hybridization between A. coluzzii and A. gambiae but the MR4 reports that their holding consists only of A. gambiae rDNA. While this is an old laboratory strain, it has been demonstrated to have maintained at least one natural characteristic, male swarming [25]. The transgenic strain has been renamed ' Ag(PMB)1' to reflect the Paternal Male Bias phenotype by the organization that supported its development, Target Malaria [7], and this name is used hereafter in this paper. Ag(PMB)1 was created by genetic transformation of the G3 strain and it was maintained by crossing either transgenic males or females to G3 resulting in the two strains having the same genetic background. The 3XP3-DsRed transgene marker is visible in the thoracic and abdominal ganglia and the optic lobes. Backcrossing is performed to avoid the accumulation of rDNA damage that might occur in inbred strains. The G3 strain was also used as the experimental target population.
Baseline measures of life history and mating competitiveness
In order to parameterize the model, life table studies were performed to compare transgenic with non-transgenic sibling individuals of both sexes for four characteristics: (i) mortality during the larval stage; (ii) duration of the larval stage; (iii) pupal mortality; and (iv) adult survival. Throughout this paper, the term 'non-transgenic' is used interchangeably with 'wild-type' in the context of laboratory studies. This equality reflects in part an assumption made in this design, namely that the extensive backcrossing (> 170 generations) used to maintain this strain has resulted in near genetic and life-history identity between the G3 strain and the non-transgenic progeny of hemizygous individuals.
For life table studies of the immature stages, a 17.5 cm cube cage was populated with 400 virgin adults (200 Ag(PMB)1 males and 200 G3 females). Females were offered a blood meal for 45 min as described below and their eggs were collected, hatched and first stage larvae separated according to the fluorescent marker using a Complex Object Parametric Analyzer and Sorter (COPAS, Union Biometrica, Boston MA, USA) which separates larvae based on the fluorescent transgene marker. Eight trays of 250 larvae were established, four with transgenic larvae and four with non-transgenic larvae. Larval development time, larval survival, pupal survival and eclosion were recorded.
Larval survival was calculated from the starting number of first-stage larvae (L1) and those that pupated. Pupal survival was calculated by the number of pupae that eclosed. Larval and pupal mortality were analysed using quasibinomial generalised linear models with the replicate (four trays for each larval type) fit as a block to account for the within-tray pseudoreplication inherent to these data. Mosquito type (transgenic status) was the main effect during the larval stage and the influence of both sex and transgenic status were evaluated at the pupal stage. Analysis of larval duration also used a quasibinomial generalised linear model to assess the influence of mosquito type and sex on the length in days of the larval stage. In all cases, the influence of main effects was assessed by stepwise deletion testing.
For adult longevity studies, transgenic individuals were distinguished visually in the fourth (final) larval stage using an Olympus BX7 stereomicroscope equipped with DsRed filters (Chroma, Bellows Falls VT, USA) and an X-Cite 120Q illuminator (Excelitas, Waltham MA USA). Transgenic males and females were produced by crossing Ag(PMB)1 transgenic females to G3 males. Twelve cages of the design of Savage & Lowe [26] were populated with 30 male and 30 female pupae that were either transgenic or not in all combinations with three replicates of each. Adults that did not eclose were replaced on the following day with adults from cages set aside for this purpose. A 10% sucrose solution containing 0.1% methylparaben added as a preservative [27] was provided for adults and renewed on a weekly basis. Caged adults were checked daily and dead specimens were counted and removed until almost all adults were dead (42 d) after which those remaining were counted.
The data arising from the experiments are a daily-interval time series for each of 12 cages. To allow for the temporal pseudoreplication arising from repeated measurement of sequentially-linked cohorts (replicates), mixed effects models were used to identify whether there was a significant effect on the proportion surviving over time as a function of mosquito sex, transgenic status and whether the other sex they were caged with was itself transgenic. Random effects were used to represent the pseudoreplication of within-cage trajectories. The survival was compared over the 42 days of the experiment and assessment of the main effects and their interactions was by model simplification using L-Ratio tests at P < 0.01 to avoid over-interpretation, with Akaike information criterion (AIC) comparisons to evaluate model fit. Mixed effects models here and elsewhere used the nlme package [28]. Throughout, statistical analyses were performed using R 3.4.1 [29].
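The analyses here were run in R with the nlme package; the snippet below sketches the equivalent model structure in Python's statsmodels, purely as an illustration. It is not the authors' code: the data are synthetic, the column names are hypothetical, and the random-effects structure is simplified to a random intercept per cage.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the daily survival series (12 cages x 42 days);
# column names and values are hypothetical, not the authors' data.
rng = np.random.default_rng(1)
rows = []
for cage in range(12):
    sex = cage % 2                    # 0 = female cohort, 1 = male cohort
    transgenic = (cage // 2) % 2      # transgenic status of the cohort
    for day in range(1, 43):
        surv = np.exp(-day / 30) + rng.normal(0, 0.02)
        rows.append((cage, day, sex, transgenic, min(max(surv, 0), 1)))
df = pd.DataFrame(rows, columns=["cage", "day", "sex", "transgenic", "prop"])

# Fixed effects for day, sex, transgenic status and their interaction; a
# random intercept per cage absorbs the within-cage pseudoreplication of
# the repeated daily measures (a simplified analogue of the nlme models).
fit = smf.mixedlm("prop ~ day + sex * transgenic", df, groups=df["cage"]).fit()
print(fit.summary())
```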
The ability of similar-aged males to compete for virgin females was determined. Ag(PMB)1 transgenic larvae and non-transgenic siblings were cultured according to a standard procedure [30]. Pupae were separated into three small cages according to sex and fluorescent marker in order to obtain virgin transgenic males, virgin wild-type males and virgin wild-type females. Three- to four-day-old virgin adults (50 transgenic males, 50 wild-type males and 100 wild-type females) were introduced into three 16 m³ cages provided with the visual stimuli inducing An. gambiae males to swarm [25], which are described further below. Adults were provided with a 10% sucrose solution and allowed to mate. After 48 h these were collected, separated by sex and transgenic status, and females were provided with a blood meal and put individually into oviposition cups. Paternity type was identified by offspring analysis for the fluorescent marker. The data were the numbers of females mated by either transgenic or non-transgenic individuals; a χ² proportion test was used to determine whether this differed from an expectation of equal numbers.
Large-cage experimental facilities
Three large cages (each c.16 m³) located in one insectary room were provided with visual stimuli to encourage An. gambiae males to swarm, the typical natural mating behaviour of this species. These cages and the lighting arrangement and cycle have been described in detail previously [25]. Insectary rooms were kept at a stable temperature and relative humidity (RH) of 27 ± 0.5 °C and 70 ± 5% RH. A stack of clay bricks (24 × 24 × 36 cm) in each cage was wetted with water daily, and mosquitoes were seen to use it as a resting shelter. Three cups containing cotton and a 10% sucrose solution were used as sugar feeders in each cage. As sucrose has no fragrance, a spoonful of honey was added to each cup to attract mosquitoes to the sugar feeders. The cups hung from a cord at three distances from the entrance: near, mid and far. A pulley system allowed the feeders to be refreshed on a weekly basis without entering the cages. Other objects and shelters were introduced into each cage to increase environmental heterogeneity: one Correx®-equivalent black tunnel (60 × 60 × 40 cm, W × L × H), and a vertical X-shaped structure consisting of one blue and one black 40 × 100 cm Correx®-equivalent panel, constructed by inserting one panel into the other via half-length slots in the centre of the short sides so that the figure rested on the cage floor.
Releases into the large cages
Three sequential release trials were performed. In each, three cages were used: one control and two in which transgenic males were released at initial ratios of 1:1 or 3:1 (transgenic:wild-type males), ratios which were expected to change as progeny of transgenic individuals appeared. Before starting the releases of transgenic mosquitoes into the experimental cages, G3-strain 'target populations' were established. The aim was to create populations that included individuals of all ages and mating statuses, with a sex composition similar to a wild population. This was achieved by twice-weekly additions of G3 mosquitoes into each cage, allowing mortality and aging to stabilize the population. The population-establishment procedure and the duration of the release observations differed between the first trial and the second and third (Fig. 1). In Trial 1, stable populations in the cages were established initially by adding 300 females and 178 males from a pre-existing stable G3 population (estimated sex ratio based on the model described below). After this, 60 G3 females and 60 males were added twice weekly for 3 weeks; then 50 G3 females and 50 G3 males were added twice weekly for a further 3 weeks. After the experience of the first trial, the procedure for Trials 2 and 3 was altered slightly, as the procedure for Trial 1 was considered overly elaborate: target G3 populations were established by introducing 50 females and 50 males twice weekly for 4 weeks prior to releases. As the longevity studies indicated that adults live less than 6 weeks, these differences had a negligible effect on the size of the populations when experimental releases began. However, to account for these differences when comparing the results to the model predictions, the model (described below) simulated each set of initial conditions separately.
The numbers of transgenic males released after the stable age populations had been established were either equal to (1:1) or three times (3:1) the numbers of G3 males being introduced previously, thus either 50 or 150 transgenic males twice-weekly. Mosquitoes added to the control cages were progeny only of that cage with no additional males added. During Trial 1, treatments were randomly assigned to each cage; during Trials 2 and 3, the treatments were rotated to different cages in order to minimize possible effects due to cage location in the environmental room.
To produce males for releases, Ag(PMB)1 transgenic males were crossed with G3 females. The life-cycles of mosquitoes used for release into the cages and those removed from the cages were maintained in synchrony so that similar-aged adults were available for releases and crossing. Larvae were reared using a slurry diet [31] following the method of Valerio et al. [30]. Ag(PMB)1 males used for cage releases were distinguished from non-transgenic individuals on the basis of the 3XP3-DsRed fluorescent marker, which was selected for using the COPAS in the first larval stage. The remaining c.5% females were removed manually by examining the terminalia under a stereomicroscope. In Trial 1, the introduction of Ag(PMB)1 males began 6 weeks after target population initiation and the experiment was terminated after 2 months. For Trials 2 and 3, releases began after 4 weeks of target population establishment and were terminated after 4 months.
During target population establishment and after transgenic mosquito releases began, females in the large cages were offered a blood meal on Monday and Friday at dusk for 2 h using a Hemotek membrane feeder (Discovery Workshops, Lancashire, England) containing sterile cow blood (Allevamento Blood di Fiastra Maddalena, Teramo, Italy). Parafilm® was used as an artificial blood-feeding membrane and was rubbed on human skin before covering the feeders in order to increase its attractiveness. Eggs were collected on 16 cm diameter polystyrene Petri dishes containing a water-soaked sponge covered by a filter paper disk. The oviposition dish was placed at the entrance of the cage close to the resting shelter 2 days after the blood meal was provided. A dim light was directed on the dish to concentrate oviposition during the dark hours. Previous work established that in the absence of this, eggs were often found on the cage floor (which was reflective aluminium) rather than in the oviposition dish (data not shown). The number of eggs laid was determined the following day by digital analysis of the disks using the Egg-Counter v1.0 software [32]. Egg hatching rate was determined after 3 days by microscopic examination of samples of approximately 200 eggs.
Once transgenic mosquito releases began, mosquitoes came from two sources: (i) the transgenic males released to simulate a suppression programme; and (ii) the progeny of adults of each cage to maintain the effects on the target population. To obtain the latter, eggs collected from each cage were hatched in trays and from these, c.500 larvae were reared in two trays. The L3-L4 stage larvae (approximately 250) from one randomly selected tray were divided between two trays, one of which produced the adults for restocking the experimental cages while the second one was maintained as backup. In the restocking tray, sex and transgenic status were determined in the pupal stage by examination under the Olympus BX7 stereomicroscope equipped with GFP filters and sex-separated to emerge in separate cages. All 1-2 day-old virgin adults from these emergence cages were then introduced twice-weekly into each cage. As the absolute numbers of eggs laid remained high and was variable through the trials, no effort was made to adjust the number of adults returned to the cages to reflect the numbers of eggs oviposited.
Evaluating the effects of the releases

Several outcomes were anticipated to vary as a result of Ag(PMB)1 male releases: the number of eggs produced; the egg-hatching rate; the frequency of transgenic offspring; and the proportion of females among offspring. Variation in these over time as a function of treatment was analysed using linear mixed-effects (lme) models with day within cage fit as random variables to allow for the pseudoreplication created by repeated measures from within each cage. The key explanatory variable was 'Treatment', a factor with three levels (Control, 1:1 and 3:1), the effect of which was also assessed across the sequential trials.

Figure 1. In Trial 1, the target population was established with females and males from a pre-existing stable age distribution population, based on the model predictions of the population structure. In both trial designs, after population establishment, semi-weekly releases of Ag(PMB)1 males were performed at two different ratios; in the control, only progeny were returned to the cage. (1) Mosquitoes were blood-fed using the artificial membrane feeder; (2) eggs were collected 3 days after the blood meal, bleached and incubated for 1 day; (3) 500 larvae were reared in two trays at a density of 250/litre/tray; when pupation started, immature stages from each tray were split into two trays and one of the four trays obtained was selected for restocking; (4) pupae were collected from the selected tray, sexed and screened for fluorescence before being divided according to sex into two small cages where adults could emerge and mature for 1-2 days; (5) twice a week, virgin adults were introduced into the corresponding cage to maintain the population. At the same time, Ag(PMB)1 males of the same age as the restocking adults were introduced into the treatment cages.
Similarly, mixed-effects statistical models were fit to the proportions of females predicted in the computer simulation models and observed in the twice-weekly samples. Both the simulation model predictions and the experimental data were largely sigmoidal as a function of time, and a logistic term was used. Maximum likelihood methods were then used to compare sequential lme models with progressively simplified fixed effects, which allowed assessment of consistency within, and comparison between, model runs and experimental data [33]. As the different initialisations led to slightly different models, Trial 1 was examined separately from Trials 2 and 3. In all cases, as sampling effects were likely to be present and considered to contribute to the variability of the data, a threshold of P < 0.01 was applied to identify systematic effects.
Predicting the effects of releases on the proportion of females
We modelled the effect of releasing Ag(PMB)1 transgenic males into a population using an iterative simulation model of the large-cage experiments, and a related model of a village population. Both models track through time the numbers of juveniles (categorised by age, genotype and sex), unmated adult females (by genotype), adult males (by age and genotype), and mated adult females (by age, their own genotype and their mate's genotype). Juveniles are assumed to emerge as adults 10 days after their oviposition (if they survive this long), and unmated females are assumed to mate with a random male on the day of their emergence. Mated females lay a Poisson-distributed random number of viable eggs per day (with expectation 9), though oviposition timing is restricted in the cage model (see below). The numbers of each possible egg genotype are randomised using a multinomial distribution that depends on the parent genotypes (assuming Mendelian inheritance). The sexes of the eggs are binomially distributed, with male probability 0.95 if the father is hemi- or homozygous Ag(PMB)1 and 0.5 otherwise.
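As an illustration of the stochastic oviposition step described above, the sketch below draws one day of eggs for a mated female; the genotype probabilities for a hemizygous father follow from Mendelian transmission, and everything else (names, structure) is an assumed minimal rendering, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

EGGS_PER_DAY = 9          # Poisson mean for viable eggs per mated female
P_MALE_SHREDDER = 0.95    # male probability if the father carries Ag(PMB)1
P_MALE_WILD = 0.5

def eggs_for_female(father_is_pmb_hemizygous):
    """One day of egg production for a mated (wild-type) female."""
    n = rng.poisson(EGGS_PER_DAY)
    # Mendelian transmission: a hemizygous father passes the construct to
    # half of the offspring; a wild-type father passes it to none.
    p_geno = [0.5, 0.5] if father_is_pmb_hemizygous else [0.0, 1.0]
    n_transgenic, n_wild = rng.multinomial(n, p_geno)
    # Sexes are binomial, with the male bias set by the father's genotype.
    p_male = P_MALE_SHREDDER if father_is_pmb_hemizygous else P_MALE_WILD
    males = rng.binomial(n_transgenic + n_wild, p_male)
    return {"eggs": int(n), "transgenic": int(n_transgenic), "males": int(males)}

print(eggs_for_female(father_is_pmb_hemizygous=True))
print(eggs_for_female(father_is_pmb_hemizygous=False))
```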
Cage model
The cage model simulated oviposition on only 2 days/week (corresponding to Mondays and Thursdays), from which a random sample of 100 eggs was kept to become juveniles. All juveniles survive to be 'added' to the adult population 10 days after their oviposition. To simulate the treatment cages, zero-age Ag(PMB)1 hemizygous males were also added to the adult population on Tuesdays and Fridays after the treatment began. Adult males and mated females have a Weibull-distributed randomised life-span, with Weibull shape and scale parameters fitted from the survival experiments. We simulated this model following the precise initial conditions of each replicate of the cage experiment.
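A sketch of the cage model's bookkeeping follows; the Weibull shape and scale below are placeholders, since the fitted values from the survival experiments are not reproduced here.

```python
# Sketch of the cage model's weekly schedule and lifespan draws; SHAPE and
# SCALE are placeholder values, not the fitted survival parameters.
import numpy as np

rng = np.random.default_rng(2)
SHAPE, SCALE = 2.0, 30.0   # hypothetical Weibull parameters, in days

def adult_lifespan_days() -> float:
    # numpy's Weibull draw has scale 1, so multiply by the scale parameter
    return SCALE * rng.weibull(SHAPE)

def is_oviposition_day(day: int) -> bool:
    return day % 7 in (0, 3)   # day 0 taken as a Monday; Mon and Thu

def is_release_day(day: int) -> bool:
    return day % 7 in (1, 4)   # Ag(PMB)1 males added on Tue and Fri
```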
Village population model
The structure of the village model follows that of North & Godfray [34], except that here we consider only a single population rather than multiple connected populations. While the cage model does not consider juvenile mortality, the village model assumes juveniles suffer mortality both from density-independent causes (with probability 0.05 per day, estimated from measurements of larval survival when larval density is low [35, 36]; see [37] for details) and from competition, which varies with rainfall and local standing water [34]. We suppose the competition mortality risk per day is 1 − [α(t)/(α(t) + J_T)]^(1/10), where J_T is the total number of juveniles in the population and α(t) controls the strength of density-dependent competition, being approximately proportional to the population carrying capacity. Specifically, α(t) is the number of juveniles at which the probability of death from larval competition over the course of development is 0.5, and we assume this variable depends on rainfall and the length of water courses in the vicinity of a population [34]. The time-dependence of α(t) stems from the input (weekly) rainfall data, and results in large seasonal population fluctuations for the West African setting we use (see below). This contrasts with the cage model, for which population size is approximately constant in time. The village population model also differs from the cage model by assuming that adult males and mated females have a constant daily survival probability, reflecting the numerous causes of mortality in an outdoor population which occur largely independently of age. Adult male daily survival was estimated to be in the range 0.69-0.87 (mean 0.77) from four mark-release-recapture experiments that took place in the village of Bana in Houet, South-West Burkina Faso [37], which we use as our study location. We use both the lower and upper of these estimates to investigate this parameter, and we set female survival at the somewhat higher value of 0.875, which is consistent with endemic malaria transmission.
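A small sketch of the competition term as reconstructed above may help: daily survival compounds over the 10-day juvenile period to α(t)/(α(t) + J_T), so the death probability over development is 0.5 exactly when J_T = α(t), matching the stated definition of α(t).

```python
# Sketch of the reconstructed daily competition mortality term.
def competition_death_risk_per_day(alpha_t: float, juveniles: float) -> float:
    daily_survival = (alpha_t / (alpha_t + juveniles)) ** (1 / 10)
    return 1.0 - daily_survival

# Sanity check: J_T = alpha(t) gives ~0.5 death risk over the 10 days.
r = competition_death_risk_per_day(1000.0, 1000.0)
print(1 - (1 - r) ** 10)   # ~0.5
```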
The remaining parameters of this model arise from long-term studies in Bana, and are set to correspond with seasonal variations in population size determined from the same MRR experiments used to estimate mortality [37]. For the purpose of this analysis, we assume there is no migration into or out of Bana. This model uses the rainfall data from the ERA-Interim reanalysis [38], and the water course data from the Digital Chart of the World (DCW) (available from http://www.diva-gis.org/Data), as described by [34].
The village population was simulated for 2 years prior to transgene releases, which was enough time to minimise effects of initialising the model with an arbitrary population structure. Transgene releases were simulated in the third year by adding various numbers of Ag(PMB)1 hemizygous males to the population at regular time intervals. Rather than attempting to simulate fixed release ratios such as were used to initiate the cage populations, these release numbers were based on discussions of potential production levels for existing and possible future insectary infrastructure.
Baseline life history
In order to parametrise the predictive models, life table studies were performed to estimate any differences between Ag(PMB)1 transgenic individuals and non-transgenic siblings that would need to be taken into consideration. Larval mortality was 8.5% and did not vary as a function of transgenic status (F (6,7) = 0.18, P = 0.69).
As all combinations of males and females according to transgenic status were used, it was possible to determine whether there was an effect of the combinations on longevity. There were no interactions between the main effects, nor did the transgenic status of the mosquitoes or that of the accompanying mosquitoes affect their survival. The males and females did have different survival (L.Ratio (6,7) = 25.00, P < 0.001); females lived slightly longer than males (median longevity 30 vs 28 days, Fig. 2).
Mating competitiveness was expected to be a critical factor for predictions of the transgene's behaviour, reflecting the ability of one type of male to compete with another for matings with virgin females. Here, all males were of similar ages; of 176 mated females whose progeny were assessed for the transgene, Ag(PMB)1 males achieved 54% of the matings in competition with G3 males, which did not differ from an assumption of equal competitiveness (χ² = 0.4, df = 1, P = 0.52).
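The competitiveness comparison amounts to a goodness-of-fit test against a 50:50 expectation. The sketch below uses an illustrative split of the 176 females (54% is taken as 95/176), so it need not reproduce the reported χ² = 0.4 exactly.

```python
# Sketch of the equal-competitiveness test; counts are an illustrative
# split (95 of 176) and may not reproduce the reported statistic exactly.
from scipy.stats import chisquare

N, k = 176, 95   # females mated by Ag(PMB)1 vs G3 males (assumed split)
stat, p = chisquare([k, N - k], f_exp=[N / 2, N / 2])
print(f"chi2 = {stat:.2f}, P = {p:.2f}")
```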
The effect of releases

The frequency of transgenic offspring in cage populations
Six weeks after the trials had started (from day 45 of measures), the frequency of transgenic mosquitoes in the target populations reached averages of 0.44 ± 0.02 and 0.56 ± 0.02 in the 1:1 and the 3:1 initial release ratio treatments (transgenic male:wild male), respectively. The frequencies reached did not differ among trials (L.Ratio (7,9) = 2.37, P = 0.30) and did not rise systematically with ongoing releases (L.Ratio (6,7) = 1.14, P = 0.28), but were higher at the higher release rate (L.Ratio (5,6) = 24.39, P < 0.001; Table 1). Although the difference between the two treatment levels is statistically significant, the effect size is small.
Effects on egg production
Because the proportion and number of females in the population were predicted to be reduced by the releases, the numbers of eggs produced as a function of time might also be expected to decline. There was no significant variation in egg production pattern between the three sequential trials (L.Ratio (13,15) = 3.31, P = 0.19). Treatment did have a significant effect (Fig. 3). Egg production remained stable in the controls, but declined in both release treatments (L.Ratio (10,13) = 30.42, P < 0.001). The decline was steeper in the 3:1 treatments than in the 1:1 (L.Ratio (11,13) = 17.58, P < 0.001).
Effects on egg hatching rate
It was also anticipated that the releases could introduce additional mutations to the rDNA that, while not producing broken X chromosomes, could accumulate in the population, possibly causing semi-sterility [7]. This could result from transmission of the possibly damaged X chromosome introduced into the population by the approximately 5% of female progeny produced by transgenic males. Therefore, to estimate this, the egg-hatching rate was determined. An average of 211 ± 3 eggs was assessed in each sample. There was no variation in the proportion of eggs hatching between the trials (L.Ratio (13,15) = 0.07, P = 0.96, Fig. 4). The proportion hatching was the same in the 1:1 and 3:1 treatments (L.Ratio (11,13) = 0.08, P = 0.96), but was slightly lower in these than in the control (0.86 ± 0.01 vs 0.88 ± 0.01) (L.Ratio (9,10) = 8.13, P = 0.004). The hatching rate did not decline further during the experiment in any treatment (L.Ratio (9,10) = 1.75, P = 0.18).
Effects of releases on proportions of females vs model predictions
The results of Trials 1-3 are shown in more detail in Fig. 5. In the 1:1 release ratio, Trials 2 and 3 did not differ from each other in the proportion of females identified in the samples (L.Ratio (7,8) = 0.06, P = 0.79), but both differed from the model predictions; in both of these trials the proportion of females found was often above that predicted by the model. In the 3:1 treatment, the two trials did not differ from each other in the proportion of females found (L.Ratio (7,8) = 0.55, P = 0.49). Both Trials 2 and 3 differed from the model predictions (Trial 2: L.Ratio (7,8) = 35.53, P < 0.001; Trial 3: L.Ratio (7,8) = 31.02, P < 0.001); they were often both higher than the predictions and displayed greater variability.
Effects of hypothetical releases on the mosquito population of a village in West Africa
Regular releases of Ag(PMB)1 males into wild populations are predicted to have significant suppression effects on the female population of the species that is released (Fig. 6a); the model also predicts that large numbers of individuals carrying the transgene will persist in the population at the end of the calendar year, which falls during the dry season (Fig. 6b).
This raises the question of whether release frequency, as distinct from release numbers, is an important operational consideration for the use of this transgenic technology. Releases at quarterly intervals or shorter are predicted by the model to have negligible differences in effect, as long as the cumulative numbers released remain the same (Fig. 7).
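To illustrate what "same cumulative numbers, different frequency" means operationally, a sketch with hypothetical totals follows (the release numbers discussed in the paper were based on insectary production capacity, not these figures).

```python
# Sketch of release schedules with equal cumulative totals; 260,000/year
# is a hypothetical figure chosen only to divide evenly across schedules.
def schedule(interval_days: int, total: int, horizon_days: int = 364):
    events = horizon_days // interval_days
    per_event = total // events
    return {day: per_event for day in range(0, horizon_days, interval_days)}

weekly    = schedule(7, 260_000)    # 52 releases of 5000
monthly   = schedule(28, 260_000)   # 13 releases of 20,000
quarterly = schedule(91, 260_000)   # 4 releases of 65,000
```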
The relationship of increasing release size to the number of females at the end of the rainy season deserves further discussion, since it has a direct impact on the cost-benefit balance and the likelihood of detecting an effect of a suppression program with strains such as this. Considering the example village, Bana, in which the unperturbed population size at the end of the rainy season is in the region of 35,500, weekly releases of 5000 males would reduce the number of females by an estimated 25% to c.27,000 (Fig. 8). Increasing the male-release size 10-fold to 50,000 is estimated to decrease the predicted population by only a further 37% (of the unperturbed size), to c.13,500 females.
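The arithmetic behind these figures, under the reading that "a further 37%" refers to percentage points of the unperturbed population (an interpretive assumption), is:

```python
# Sketch of the suppression arithmetic; "further 37%" is read as 37
# percentage points of the unperturbed end-of-season population.
baseline = 35_500
females_5k_weekly  = baseline * (1 - 0.25)          # ~26,600
females_50k_weekly = baseline * (1 - 0.25 - 0.37)   # ~13,500
print(round(females_5k_weekly), round(females_50k_weekly))
```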
Discussion
The release of transgenic mosquitoes as a novel control method should be developed in a progressive manner, starting with types of transgenic strains whose effects and persistence will be limited [17]. This has been proposed as a means of technology implementation that best results in environmentally safe programs with predictable effects. We propose that Ag(PMB)1 might in part serve this purpose. Ag(PMB)1 has a useful phenotype, and its life-history traits in controlled large-cage trials demonstrate no negative effect of the transgene that would make it a poor candidate for release. Unlike with a transgenic sterile-male strain with which we have similar previous experience [21], few differences between model predictions for cages and the experimental outcomes were observed. In the case of the sterile males, the degree of difference between the model prediction and transgene frequency indicated that there were significant fitness costs associated with the transgene that were not apparent from life-history studies. Even if only modest numbers of Ag(PMB)1 male mosquitoes were released, numbers that did not result in measurable reductions in females, any persistence observed in natural populations would provide a valuable description of the behaviour of this transgene that could, in turn, be compared with model predictions. Identifying discrepancies between the model predictions and observations would provide further insight into life-history parameters that may have been inadequately understood in the previous underpinning studies.
Generally, differences between outcomes of studies conducted at different scales can be attributed to two causes: inaccurate estimates of parameters due to sampling error, and differences in parameters due to differences in stressors, complexity and scale that affect the biology of the organism. The latter are more problematic, since merely increasing the number of trials in the laboratory cannot improve the estimates. In recognition of this, laboratory estimates of adult survival were used for the cage simulation model, whereas field estimates of male daily mortality were used for the population model; extrapolating the laboratory parameter would be highly misleading. Other values, such as mating competitiveness, were not available for natural populations but could reasonably be expected to differ from the laboratory studies. Sometimes there is an experimental basis for doubting extrapolation of small cage studies to natural populations; Facchinelli et al. [25] demonstrated that male competitiveness of a transgenic sterile-male strain was negatively affected by larger cage size, a trend which could reasonably be expected to continue to natural populations. There was no basis on which to expect such a difference for the Ag(PMB)1 studies.
Therefore, an essential value of iterative testing, parameterization and prediction is to arrive at more realistic estimates of the parameters that are finally used to model population effects. Field studies of transgenic mosquito interventions provide the most rigorous test of parameters and models but these can only be approached cautiously based on the best information available. This process also highlights the value of realistic indoor and outdoor contained studies of population simulations.
The large cage data observed here indicated that the village population model might be optimistic in its predictions of the extent of observable suppression, as the proportion of females found in the large cage studies of Ag(PMB)1 was slightly higher than the intervals predicted by the model. Estimating the size of subtle differences between data and model can be challenging in light of the high variance in samples (see below). These observations do largely support the predictions that significant effects might be observed in field populations with the release of only modest numbers of adults, compared to 'sterile insect technique' types of programs in which, e.g., 10-fold inundation is often considered a minimum to obtain an effect [40].
This difference from the release of sterile males is due in part to the fact that selection against the Ag(PMB)1 transgene is much weaker than that against males that are sexually sterile or that confer lethality on their progeny, against which selection is acute and final. The resulting accumulation of transgenic females and males, the persistent phenotypic effect of the transgene in target populations, and the insensitivity to release frequency should permit greater flexibility than SIT, which is sensitive to release frequency [19]. When releases of sterile males stop, the population immediately begins to rebound; that is not anticipated to be the case with male-bias strains such as the example studied here. Therefore, suppression effects should be relatively resilient to interruptions in releases due to, e.g., bad weather, transportation difficulties and production-level fluctuations. Natural Anopheles populations in Burkina Faso are often a seasonally-dependent mixture of three members of the An. gambiae complex [39], and we considered only one species in the simulation village, Bana; therefore, the model demonstrates the effect only on the species that is released, a factor that would need to be considered when predicting possible epidemiological effects.
Experimental design has a decisive effect on the outcome of simulations such as those conducted here. The outcomes of these large cage studies contrast with the results observed on release of a similar male-bias strain in small cages [7], in which extinction of the target populations was usually observed within six generations and egg-hatching rates were only 20% in the terminal generations, even though the transgenic male release rate was also 3:1. These differences can be attributed simply to two changes that were made in the experiments reported here: (i) a stable age distribution population was established before the introduction of transgenic males; and (ii) the populations were continuously breeding rather than consisting of discrete generations. This results in a more realistic simulation of the conditions that occur in nature and reflects the stable equilibrium of the sex ratio that is expected to be reached in such caged populations. A natural effect that our experiments do not reflect is the potential reduction in the reproductive rate of the population that might occur due to a decline in the population rate of increase and the resulting increase in the effective transgenic male release rate. These effects are captured, however, in the village population model.

Another aspect of experimental design, the longer duration of Trials 2 and 3, enabled the identification of effects not apparent during Trial 1. Sampling variability was anticipated in all measures; for example, the egg sample could be affected by the number of ovipositing females, and thus their mating partners, laying eggs in an aggregated manner and affecting the arising measures. This potential 'founder effect' could then influence the measure of hatch rate or the proportion of females identified in that time-step sample. The longer runs thus give a clearer picture of both the sampling variability and the systematic effects, so whereas the shorter Trial 1 does not differ from model predictions, the difference from model predictions in Trials 2 and 3, though slight, is apparent by virtue of the length of the experiment. Identifying these systematic differences is vital to model validation and enables further refinements so that predictions at wider spatial scales will be more realistic. This may well call for the explicit inclusion of an element of sampling variability in predictive models, to reduce the likelihood of variability in data masking general conformity to the predictions. In field trials the sampling effect is likely to be greater still and will depend on the availability and tractability of monitoring techniques. Many that are deployable at scale, such as ovitraps, are known to produce high-deviance, overdispersed data [41,42]. This will require careful thought as the field progresses and will be helped by trials designed to explicitly estimate sample deviance.
Producing male mosquitoes for release can be logistically challenging if the strains are not pure-breeding and there is no genetic method to eliminate females. For example, the ability to suppress an effector that is counter-productive to rearing - bi-sex lethality - has been essential for production of the Aedes aegypti OX513A strain, which can be cultured in a pure-breeding colony in the presence of tetracycline [43]. Females of Ae. aegypti can also be eliminated based on pupa size [44], and the combination of these two methods enables production of tens of millions of males [45]. The Ag(PMB)1 strain of An. gambiae has neither characteristic: it is not pure-breeding and there is no en masse male selection method. However, generic means suitable for routine production of Ag(PMB)1 mosquitoes have been devised. Use of the high-throughput COPAS sorter to segregate transgenic mosquitoes [46] and a male-specific fluorescent marker [47] could provide a larval sorting method both for obtaining transgenic males to release and for selecting non-transgenic females for backcrossing to maintain the stock. Using the COPAS in this way could permit rearing of 50,000 larvae per week in compact facilities, using high-density larval rearing systems (e.g. [48]) and pupa selection [49].
Strains such as the one considered here have potential to contribute towards local suppression of females at the village scale, though they do not appear feasible for area-wide suppression at a country or continental level, because of their limited persistence and the production challenges involved. For wider scales, 'gene-drive' systems have been proposed as potentially effective [4,10]. These would offer the potential to spread female infertility [14] or male bias via 'Y-drive' [47,50], and would need to be developed as technologies that could be deployed at feasible cost [15].
The insertion in the strain tested here is on an autosome [7], but if the transgene could be inserted and expressed from the Y chromosome, it would be inherited by all male progeny, resulting in a 'driving Y chromosome' [47,50] that is expected to increase in frequency and potentially result in population suppression. Reproducible insertion of genes on the An. gambiae Y chromosome has been accomplished [47], so two essential parts of such a system have been realised: the male-biasing transgene and modification of the Y chromosome.
The other aspect of the potential of these strains, one that takes full advantage of the limited persistence and incomplete eradication they offer, is the stepwise development and evaluation of these transgenic technologies [17]. Local reductions would enable monitoring and assessment of non-target organisms (NTOs) potentially affected by changes in the density of An. gambiae. The potentially interacting species are understood [51], and key non-target organisms could be studied more specifically in tandem with releases to evaluate hypothesised effects.
Conclusions
An integrated process of modelling, experimental trials, analysis and reparameterization must be conducted in progressively more realistic settings to arrive at predictions of field behaviour that approximate real field outcomes. The results presented here are an example of such a process and demonstrate that, to the degree tested, Ag(PMB)1 could be considered for field release as a female-suppression technology. The insensitivity of suppression to release frequency makes this strain, and others with high persistence and similarly weak negative selection, attractive for release programs that might be interrupted by production shortfalls and other disruptions. Novel transgenic technologies are likely to require demonstrations of conformity to prediction, and the process and strain used here are a key part of developing confidence in this field.