THE EFFECT OF STEEPING ROBUSTA COFFEE BEANS ON MONOCYTES: EXPRESSION OF IL-1β AND TNF-α AGAINST Streptococcus mutans
Adhesion, IL-1β, and TNF-α are components involved in inflammation. This study examined the effect of steeping green and black Robusta coffee beans on these components during the adhesion of Streptococcus mutans. Monocytes were isolated from healthy human peripheral blood using the Ficoll-Hypaque centrifugation method and divided into eight groups: (i) Control (untreated monocytes); (ii) S. mutans (monocytes + S. mutans); (iii) Black Coffee 2.5 % (monocytes + black coffee beans 2.5 % + S. mutans); (iv) Black Coffee 5 % (monocytes + black coffee beans 5 % + S. mutans); (v) Black Coffee 10 % (monocytes + black coffee beans 10 % + S. mutans); (vi) Green Coffee 2.5 % (monocytes + green coffee beans 2.5 % + S. mutans); (vii) Green Coffee 5 % (monocytes + green coffee beans 5 % + S. mutans); (viii) Green Coffee 10 % (monocytes + green coffee beans 10 % + S. mutans). S. mutans adhesion to monocytes was analyzed histochemically, while immunocytochemical staining was used to analyze IL-1β and TNF-α. Cells were counted per 100 monocytes under a light microscope at 400× magnification. Data were analyzed using ANOVA followed by the LSD test. Results showed that steeping green and black Robusta coffee beans increased the adhesion of S. mutans to monocytes but decreased IL-1β and TNF-α expression (P < 0.05). In conclusion, steeping Robusta coffee beans increased adhesion and decreased IL-1β and TNF-α expression against S. mutans.
Previous studies have shown that Robusta coffee beans have anti-inflammatory activity: they increased cell viability, inhibited the growth of S. mutans, and decreased inflammatory cell counts in vivo. Robusta coffee also increased the number of fibroblast cells and decreased the expression of IL-1α in vitro and in vivo (DEWANTI, 2016).
Among those inflammatory activities is phagocytosis, one of the immune system's defenses against pathogens such as S. mutans. The process of phagocytosis is as follows: (i) recognition, in which foreign microorganisms or particles are detected by phagocytes; (ii) movement (chemotaxis), in which phagocytes move toward the pathogen; (iii) adhesion, in which the pathogen attaches to receptors on the phagocyte cell membrane; (iv) ingestion, in which the pathogen is taken into the cytoplasm inside a vacuole-like vesicle called the phagosome; (v) digestion, in which lysosomes containing destructive enzymes such as acid hydrolase and peroxidase fuse with phagosomes to form phagolysosomes and digest the foreign matter; and (vi) secretion, in which the undigested remnants of the foreign particles are excreted by the phagocyte (ABBAS et al., 2015).
Phagocytosis is closely linked to inflammation, whose important chemical mediators are TNF-α and IL-1β. TNF-α (cachectin) is a strong proinflammatory cytokine that plays a role in the immune system. Inflammation is necessary, but it also damages cells because it releases chemical mediators and phagocytic enzymes (phagocyte oxidase, inducible nitric oxide synthase, and lysosomal protease), free radical compounds, and superoxide (BRADLEY, 2008; HANA & JAN, 2013; CHARLES, 2011). The IL-1β family has been extensively reviewed in the literature and has many roles in acute and chronic inflammation. Like TNF-α, interleukin-1β (IL-1β) is a mediator of the inflammatory response that plays an important role in host defense; however, it also causes damage in chronic disease and acute tissue injury (CHARLES, 2011; GLORIA & DAVID, 2011). The aim of this study was to analyze the effect of steeping black and green Robusta coffee beans on the adhesion of S. mutans to monocytes and on the expression of IL-1β and TNF-α in monocytes.
MATERIAL AND METHODS
Peripheral blood (6 × 10⁻³ L) from healthy people was mixed with anticoagulant (heparin). The blood was then layered on Ficoll-Hypaque and centrifuged (198.968 rad s⁻¹, 30 min, 26 °C). The monocyte layer was taken and mixed with HBSS at a ratio of 1:1. After pipetting, the suspension was centrifuged (178.024 rad s⁻¹, 10 min, 26 °C). The supernatant was discarded and replaced with HBSS, Fungizone 5 × 10⁻⁶ L, and Penstripe 2 × 10⁻⁵ L, then incubated for 24 h at room temperature. Monocytes were layered inside the culture dish and supplemented with RPMI. Cells were then seeded on a 24-well microtiter plate at 8 × 10⁵ cells/well, incubated for 45 min at 37 °C, and washed 4× with HBSS medium. Monocytes were divided into eight groups: (i) Control (untreated monocytes); (ii) S. mutans (monocytes + S. mutans); (iii) Black Coffee 2.5 % (monocytes + black coffee beans 2.5 % + S. mutans); (iv) Black Coffee 5 % (monocytes + black coffee beans 5 % + S. mutans); (v) Black Coffee 10 % (monocytes + black coffee beans 10 % + S. mutans); (vi) Green Coffee 2.5 % (monocytes + green coffee beans 2.5 % + S. mutans); (vii) Green Coffee 5 % (monocytes + green coffee beans 5 % + S. mutans); (viii) Green Coffee 10 % (monocytes + green coffee beans 10 % + S. mutans). All groups were incubated for 24 h at room temperature. Monocyte preparations were made and fixed with methanol. S. mutans adhesion to monocytes was analyzed histochemically (Giemsa staining), while immunocytochemical staining was used to analyze IL-1β and TNF-α. Immunocytochemical analysis was carried out as follows: the preparation was soaked in peroxidase blocking solution at room temperature for 10 min, then incubated in Background Sniper (protein blocking solution) for 10 min at room temperature. Primary antibodies (2 × 10⁻⁵ L) were added, incubated at 25 °C for 60 min, and washed with PBS. Secondary antibodies were added, incubated, and washed with PBS. Trek Avidin-HRP reagent was applied and washed off with PBS; DAB chromogen substrate was applied and washed off with tap water. Mayer's hematoxylin (counterstain) was added, incubated for 1 min to 3 min, washed under tap water, and dried. Cells were counted per 100 monocytes under a light microscope at 400× magnification. Data were analyzed using ANOVA followed by the LSD test.
RESULTS AND DISCUSSION
The results of this study are shown in the figures and tables. Figure 1 shows S. mutans around the monocytes (adhesion; Figure 1 and Figure 2). ANOVA showed a significant difference between groups (P < 0.05) (Table 1). LSD analysis (P < 0.05) showed significant differences between the control group and the black and green coffee groups, and between the S. mutans group and the black and green coffee groups. On the other hand, there was no significant difference between the black coffee and green coffee groups. The higher the concentration of steeped black and green Robusta coffee beans, the greater the number of S. mutans attached to the monocytes.
IL-1β and TNF-α expression appeared as brown staining in the cytoplasm of monocytes; expression was also observed extracellularly, so the levels of these cytokines should be analyzed in further work (Figure 3 and Figure 5). ANOVA showed a significant difference between groups (P < 0.05), while LSD (Table 2 and Table 3) showed no significant difference between the black coffee and green coffee groups (P < 0.05).
FIGURE 1 - Adhesion activities of S. mutans on monocytes after exposure to steeped green and black Robusta coffee beans, analyzed with a light microscope at 1 000× magnification. Adhesion activity is shown by S. mutans surrounding a monocyte (black arrow); lysed monocytes are also visible (red arrow). FIGURE 6 - Diagram of TNF-α of monocytes after exposure to steeped green and black Robusta coffee beans and S. mutans. (Coffee Science, Lavras, v. 14, n. 4, p. 477-483, oct./dec. 2019.)
So, the higher the concentration of steeped green and black Robusta coffee beans, the greater the decrease in IL-1β and TNF-α (Figure 2 and Figure 4); steeping green and black Robusta coffee beans reduced the inflammatory response against S. mutans. The S. mutans group showed less adhesion activity than the coffee groups and many lysed monocytes, suggesting that the monocytes could not respond to S. mutans maximally. Cell damage can be caused by cell-derived NO, resulting in disorders of cellular respiration, cell function, and proliferation (cytostatic effects). NO binds ferrum and prevents it from leaving the cell, causing host cell damage (cytocidal effects) (ALLAIN et al., 2011). In the coffee groups, the higher the concentration, the higher the adhesion activity of S. mutans on the monocytes; moreover, very few monocytes underwent lysis. This is thought to be due to the bioactive components of the coffee beans, which are also antibacterial and could maintain cell survival; Robusta coffee beans can increase cell viability (DEWANTI, 2003). Antioxidants can inhibit the cytokine-induced NO synthase (iNOS) enzyme through control of iNOS mRNA and inhibit the transport of arginine through control of CAT-2 mRNA (cationic amino acid transporter-2 mRNA) (FRANCESCHELLI et al., 2019).
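The statistical pipeline reported above (one-way ANOVA followed by the LSD post-hoc test) can be sketched numerically. The adhesion counts below are made-up illustrative values, not the paper's data, and since SciPy has no built-in LSD routine, Fisher's LSD is implemented by hand from the pooled within-group variance:

```python
import numpy as np
from scipy import stats

# Hypothetical adhesion counts (S. mutans attached per 100 monocytes);
# the paper's raw data are not given, so these numbers are illustrative only.
groups = {
    "control":     [2, 3, 2, 4],
    "s_mutans":    [10, 12, 11, 9],
    "black_10pct": [25, 28, 26, 27],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Fisher's LSD: pairwise t-tests using the pooled within-group variance (MSE).
samples = list(groups.values())
n_total = sum(len(g) for g in samples)
k = len(samples)
mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in samples) / (n_total - k)

def lsd_p(a, b):
    """Two-sided p-value for the difference of two group means (Fisher's LSD)."""
    se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    t = (np.mean(a) - np.mean(b)) / se
    return 2 * stats.t.sf(abs(t), df=n_total - k)

p_ctrl_vs_coffee = lsd_p(groups["control"], groups["black_10pct"])
```

With clearly separated group means like these, both the omnibus ANOVA and the control-versus-coffee LSD comparison come out significant, mirroring the pattern the paper reports.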
The bioactive components of coffee beans are flavonoids, caffeine, chlorogenic acid, and alkaloids (RAMANAVICIENE et al., 2003). These components are thought to act as immunomodulators. Studies of other natural ingredients containing flavonoids have demonstrated the ability to improve the immune system: an in vivo study of cellular immune function in mice proved that flavonoid compounds can stimulate lymphocyte proliferation, increase T-cell counts, and increase IL-2 activity. Flavonoids potentially act on lymphokines produced by T cells, which stimulate phagocytes, including monocytes, to mount phagocytic responses (SHEN & JU-HUA, 2018). Monocytes have receptors that can recognize S. mutans; the major receptors known to act against S. mutans are Dectin-1, TLR2, and TLR4. Dectin-1 induces phagocytosis, whereas TLR2 induces activation of cytokine production (NETEA et al., 2006; DENNEHY et al., 2009). Coffee beans are thought to bind to receptors on monocytes that affect transcription proteins and the cell nucleus, subsequently increasing the activity of the Dectin-1, TLR2, and TLR4 receptors of monocytes and thus their activity in recognizing S. mutans, which increases the number of active monocytes. Monocytes also release cytokines such as IFN-γ, IL-1β, and TNF-α, which are known to trigger adhesion; in particular, IL-1β, known as an immunoregulator, can stimulate the expression of intercellular adhesion molecule-1 (ICAM-1), which allows monocytes to adhere more easily (THICHANPIANG et al., 2014).
CONCLUSION
Steeping of Robusta coffee beans increased adhesion and decreased IL-1β and TNF-α expression against S. mutans; the higher the concentration, the higher the adhesion activity. Thus, steeping of green and black Robusta coffee beans reduces the inflammation caused by S. mutans.
Electrochemotherapy and basal cell carcinomas: First-time appraisal of the efficacy of electrochemotherapy on survivorship using FACE-Q
Summary Introduction The establishment and success of new treatments are significantly influenced by patient satisfaction. Post-operative scarring is an important outcome for patients, and subsequently influences overall satisfaction with treatment. The objective was to measure post-treatment scarring satisfaction using a novel scale, the FACE-Q Skin Cancer Module, to compare electrochemotherapy (ECT) to traditional surgical excision (SE) to demonstrate equivalence of ECT and SE regarding outcome and survivorship. Methods and materials This was a multicentre first-time appraisal study of the efficacy of ECT. All patients with facial BCCs treated with either ECT or SE were deemed eligible and subsequently recruited from either a previous clinical trial or outpatient clinics, respectively. Of the 40 participants invited, 25 responses were received. Patient information recorded included age, gender, location and size of BCCs, and time since treatment. Patient outcomes were measured using the FACE-Q Skin Cancer Module. Results The ECT and SE groups consisted of 14 and 11 patients, respectively. Mean age was 68 years (M:F = 16:9), while mean time since treatment was 4.98 years (range 0.3–9.58 years). Appraisal of scars was significantly higher in the ECT cohort versus SE (p = 0.034). Cancer worry was equivalent across both cohorts (p = 0.804). According to treatment type, no correlation was detected between time since treatment and both appraisal of scars (ECT p = 0.466 and SE p = 0.214) and adverse effects (ECT p = 0.924 and SE p = 0.139). Conclusion Based on this study, ECT has superior scar outcomes and overall equivalence to SE. This demonstrates high patient satisfaction for those treated with ECT without any additional cancer worry.
Introduction
Basal Cell Carcinoma (BCC) is the commonest form of cutaneous malignancy accounting for 62% of all skin cancers. 1 The annual incidence of BCCs is increasing by approximately 2.5% and 3% for female and male subjects, respectively. 1 Both non-melanoma and melanoma skin cancers affect all socioeconomic classes, ethnic groups and age categories. 2 Given the increasing incidence and high rates of synchronous lesions, averaging 1.4 per patient 1 , and recurrent lesions, there is an ever-increasing burden on healthcare systems. 3 Therefore, skin cancer should be considered a global health concern. The destructive nature of BCCs can lead to high rates of physical and psychological morbidity posttreatment as a significant number of lesions occur in both functional and aesthetic areas. 4 , 5 The main risk factor for developing BCCs is intermittent ultraviolet radiation exposure, particularly during childhood and adolescence, 6 explaining why approximately 80% of all BCCs are located on the head and neck. 2 The traditional methods of treating BCCs are standard surgical excision (SE) and Mohs micrographic surgery (MMS). 7 SE is recommended for low-risk lesions and, while SE may be considered in some high-risk cases, MMS is the recommended treatment for high-risk lesions. 7 Five-year recurrence rates are lowest with MMS for both primary and recurrent BCCs, 1% and 5.6%, respectively. 7 , 8 Both standard SE and MMS have been shown in randomised control trials to have similar aesthetic outcomes, despite the fact that MMS conserves more tissue resulting in smaller defects. 7 Other non-surgical treatment options include cryotherapy, topical therapies, such as imiquimod, radiation therapy, photodynamic therapy, 7 , 8 and most recently electrochemotherapy (ECT). 9 ECT is a locally ablative tumour treatment that has recently been approved for many cutaneous tumours and skin metastases. 
10 We have recently published the first prospective randomised control trial comparing ECT against the standard of care SE showing excellent efficacy of the treatment in the control of primary BCC and showing a durable response of this treatment on over 90% of lesions treated after 5 years of follow up. 11 The principle of ECT is based on the local application of electrical pulses to increase cell permeability allowing normally poor or impermeable chemotherapeutic drugs to enter the tumour cells, without affecting healthy tissue surrounding the lesion. 10 , 12 Typically, bleomycin or cisplatin are administered either locally or intravenously prior to the application of the electrode, while bleomycin is by far the most commonly used. 10 ECT is particularly useful for patients with significant co-morbidities who are unsuitable for other treatments, those who have a high burden of disease, or those who have an increased risk of functional or aesthetic impairment due to location. 13 Given the increasing incidence yet low mortality rates of BCCs, survivorship post-treatment is very important. Survivorship includes both physical and psychosocial side effects following treatment, which can be disabling and often permanent. 14 Furthermore, these outcomes can place a heavy burden on healthcare services. 14 Of these post-treatment side-effects, scarring can significantly affect patients' quality of life and self-perception, leading to psychological morbidity 15 and it is often underestimated by medical professionals. 16 In turn, this can impact on overall patient satisfaction regarding treatment and care. 5 Accordingly, improving old and developing acceptable new treatments are essential to improve physical outcomes, which in turn may aid in improving psychosocial outcomes and, therefore, survivorship. 
However, the objective measurement of patient outcomes following BCC treatment has previously been difficult due to a lack of validated patient-reported outcome measures (PROMs). 5 PROMs are essential as patient perception of successful outcomes, including satisfaction with scarring and appearance can often differ from the surgeon's opinion. 5 The FACE-Q Skin Cancer Module, developed from the original FACE-Q, [17][18][19] is a recently validated, novel PROM that determines outcomes specifically important to facial skin cancer surgery, such as cancer worry and overall scarring and appearance. 5 While there has been a significant increase in the number of PROMs, there remains a paucity of outcome data for ECT treatment of BCCs.
The aim of this study was to objectively and comprehensively compare SE and ECT outcomes using this validated, novel scale. The objectives included determining post-treatment scarring satisfaction to allow a comparison between treatments; and demonstrating equivalence between traditional SE and ECT regarding outcome and survivorship using the new FACE-Q Skin Cancer Module.
Study design
This is a first-time appraisal study of the efficacy of ECT that was undertaken in two centres: Cork University Hospital, Cork, Ireland, and The South Infirmary Victoria University Hospital, Cork, Ireland. All patients aged 18 or over with facial BCCs who underwent either surgical excision or ECT from January 2010 to July 2018 were invited to take part. Ethical approval was sought from the Clinical Research Ethics Committee of Cork Teaching Hospitals.
Participants
Two patient groups were identified based on treatment received: SE or ECT. Participants in the SE cohort were identified when attending the plastic surgery outpatients' department and were subsequently recruited. Patients in the ECT cohort were recruited, in part, from a previous clinical trial of ECT on BCCs carried out by the plastic surgeon eight years previously. This cohort comprised 55 patients in total, of whom 30 were eligible to participate.
Study measures
The FACE-Q Skin Cancer Module questionnaire (FACE-Q TM Memorial Sloan Kettering Cancer Centre) was used to objectively measure patient outcome and satisfaction post-operatively. This module is comprised of four individual scales: Satisfaction with Facial Appearance, Cancer Worry, Psychosocial Distress, Appraisal of Scars; and two checklists: Adverse Effects, and Sun Protection Behaviour (Appendix 1). In addition to the FACE-Q scales and checklists, other variables collected from the patients were age, gender, location of and size of lesion, time since treatment, and any history of previous BCCs.
Data analysis
Data were analysed using GraphPad Prism 8 (GraphPad Software, Inc., California). Descriptive statistics were ascertained, while significance was set at p < 0.05. The Shapiro-Wilk test was applied to test for normality. All data sets were non-normally distributed; therefore, the non-parametric Mann-Whitney U test was applied to assess treatment type and outcome. Correlation between outcome variables and patient variables were analysed using the Pearson's correlation coefficient. The relationship between time since treatment and adverse effects, and the appraisal of scars was determined using the linear regression analysis.
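The analysis pipeline described above (Shapiro-Wilk normality check, Mann-Whitney U between cohorts, Pearson correlation and linear regression against time since treatment) can be sketched with SciPy. The scores below are randomly generated stand-ins for the two cohorts' FACE-Q values, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative FACE-Q-style scores for two cohorts (not the study's data):
# 11 SE patients and 14 ECT patients, matching the reported cohort sizes.
se_scores = rng.integers(60, 100, size=11).astype(float)
ect_scores = rng.integers(80, 102, size=14).astype(float)

# Shapiro-Wilk normality check; non-normal data motivate a non-parametric test.
_, p_norm = stats.shapiro(se_scores)

# Mann-Whitney U compares the two independent cohorts.
u_stat, p_mw = stats.mannwhitneyu(se_scores, ect_scores, alternative="two-sided")

# Pearson's r and simple linear regression (r^2) against time since treatment.
time_years = rng.uniform(0.3, 9.6, size=11)
r, p_r = stats.pearsonr(time_years, se_scores)
slope, intercept, r_lin, p_lin, stderr = stats.linregress(time_years, se_scores)
r_squared = r_lin ** 2
```

GraphPad Prism performs the equivalent tests through its GUI; the SciPy calls above are one way to reproduce the same statistics programmatically.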
Patient demographics
Of the 40 patients invited to participate, a total of 25 patients (16 men and nine women) completed the FACE-Q questionnaire. The ECT and SE cohorts consisted of 14 and 11 patients, respectively. The average age of participants was 68.2 years (range, 52-87 years). The overall mean time since treatment was 4.98 years (range, 0.3-9.58 years), while time since treatment was significantly longer in the ECT group than in the SE cohort (p < 0.0001). Patient characteristics are described in detail in Table 1. The Shapiro-Wilk test for normality demonstrated non-normal distribution for all outcome measures; therefore, the non-parametric Mann-Whitney U test and Pearson's correlation coefficient were utilised.
Appraisal of scars
Mean appraisal of scars was 91.45 in the SE cohort (95% CI 82.72 to 100.18) and 98.5 in the ECT cohort (95% CI 95.26 to 101.74), where higher values reflect higher satisfaction with scar outcome. ECT had significantly higher satisfaction with scars than SE (p = 0.043) (Fig. 1A). When divided by treatment type, Pearson's correlation coefficient showed no correlation between appraisal of scars and time since treatment (SE: r = 0.407, p = 0.214; ECT: r = 0.212, p = 0.466) or lesion size (SE: r = 0.04, p = 0.907; ECT: r = −0.002, p = 0.992). Linear regression analysis also showed no significant relationship between time since treatment and appraisal of scars (SE: r² = 0.166; ECT: r² = 0.045) (Fig. 1B and C, respectively).
Adverse effects
The mean score for adverse effects was 13.27 in the SE cohort (95% CI 9.75 to 16.79) and 10.36 in the ECT cohort (95% CI 9.82 to 10.89), where higher values represent more adverse effects experienced in the previous week. SE patients reported a significantly higher number of adverse effects than ECT patients (p = 0.043) (Fig. 2A). When divided by treatment type, no correlation was detected between adverse effects and time since treatment (SE: r = −0.476, p = 0.139; ECT: r = 0.0286, p = 0.923) or lesion size (SE: r = 0.132, p = 0.699; ECT: r = 0.0156, p = 0.9579). Additionally, linear regression failed to show any significant relationship between time since treatment and adverse effects (SE: r² = 0.226; ECT: r² = 0.001) (Fig. 2B and C, respectively).
Cancer worry
The mean score for cancer worry was 33.27 in the SE cohort (95% CI 14.85 to 51.70) and 33.79 in the ECT cohort (95% CI 20.59 to 46.98), where higher values inferred higher levels of worry regarding the cancer. When tested, no significant difference was detected between treatment groups (p = 0.804) (Fig. 3A). Similarly, no correlation was detected between cancer worry and time since treatment in either cohort.
Satisfaction with facial appearance
The mean score for satisfaction with facial appearance was 81.36 in the SE cohort (95% CI 70.34 to 92.38) and 82.14 in the ECT cohort (95% CI 73.33 to 90.95), where higher values show higher satisfaction. Analysis showed no significant difference between the SE and ECT cohorts (p = 0.999) (Fig. 3B). Furthermore, when divided by treatment type, no correlation was observed between satisfaction with facial appearance and either time since treatment (SE: r = 0.154, p = 0.651; ECT: r = 0.349, p = 0.222) or size of lesion (SE: r = 0.212, p = 0.532; ECT: r = −0.471, p = 0.089). Similarly, linear regression detected no significant relationship with time since treatment (SE: r² = 0.024; ECT: r² = 0.122) or size of lesion (SE: r² = 0.045; ECT: r² = 0.222).
Appearance-related psychosocial distress
The mean score for appearance-related psychosocial distress was 18.00 in the SE cohort (95% CI 3.11 to 32.89) and 17.14 in the ECT cohort (95% CI −4.02 to 18.31), where higher scores represent higher levels of distress. The analysis failed to show a significant difference between patient groups (p = 0.164) (Fig. 3C). When analysed according to treatment type, no correlation was detected between appearance-related psychosocial distress and time since treatment in either cohort.
Sun protection behaviour
The mean value for sun protection behaviour was 16.73 in the SE cohort (95% CI 15.11 to 18.35) and 15.5 in the ECT cohort (95% CI 13.48 to 17.52), with higher scores indicating more protective behaviours. No significant difference was detected between patient cohorts (p = 0.304) (Fig. 3D). When divided by treatment type, a correlation was detected in the SE cohort between sun protection behaviour and lesion size (r = −0.711, p = 0.014); however, this was not detected in the ECT cohort (r = 0.175, p = 0.55). Similarly, no correlation was observed in either patient group between sun protection behaviour and time since treatment (SE: r = −0.003, p = 0.993; ECT: r = 0.49, p = 0.074). Linear regression showed a significantly negative relationship between lesion size and sun protection behaviours in the SE group (r² = 0.506) but not in the ECT cohort (r² = 0.031).
Discussion
The importance of survivorship has been brought to the forefront of clinical practice as a result of the increasing understanding of its importance to patients. Scarring and adverse effects following treatment for low-recurrence, low-mortality cancers such as BCCs 1 are important outcomes that can have a significant impact on patient quality of life and satisfaction with treatment. 20 As a result, PROMs, including the FACE-Q Skin Cancer Module, are an important tool in improving treatment of such cancers on cosmetically sensitive areas. The FACE-Q Skin Cancer Module has been validated as a tool to objectively measure survivorship post-treatment by measuring satisfaction with scars and facial appearance, adverse effects, and the feeling of comprehensive treatment in the form of cancer worry. 5, 21 Satisfaction with scarring is an important outcome for patients and can significantly affect patients' overall satisfaction with treatment. 5 Amongst our patients, those treated with ECT reported higher satisfaction with scarring than those treated with SE, while the SE patients reported significantly more adverse effects. Given the large difference in time since treatment between the ECT and SE cohorts (mean difference = 5.19 years), time could account for the significances achieved. However, linear regression suggests that time is not a confounding factor for either treatment, further implying that superior scar outcomes and reduced adverse effects were achieved with ECT.
Given the low mortality rates for BCCs, reducing scarring should be an important priority. 1 However, some lesions, because of size, number or location, are not possible to remove without significant disfigurement. This is where newer methods, in particular ECT, are proving beneficial 11, 13; yet ECT is not widely available for such patients. Increasing ECT accessibility could improve patient outcomes and, therefore, survivorship. This would reduce the burden on healthcare services, including support services, arising not only from functional impairment caused by scarring but also from psychological sequelae, including social withdrawal and psychosocial distress. 17 Similarly, adverse effects such as pain, numbness, tingling or itchiness can seriously affect patients' quality of life for weeks to months post-treatment, increasing both psychological and physical morbidity. 5, 14 ECT showed superiority regarding two physical yet subjective aspects of survivorship, further suggesting that ECT should be increasingly considered as part of the treatment toolbox for BCCs.
Of particular importance amongst the non-significant results, cancer worry was near equivalent amongst the two treatment groups. Post-treatment worry about cancer recurrence is a notable concern for a considerable number of patients. 14 Additionally, patients treated for one cancer often have an increased risk of developing the malignancy elsewhere, because of either a genetic predisposition or due to the same causative environmental exposure, 14 for example, Gorlin's syndrome 22 and ultraviolet radiation exposure, 3 respectively. The equivalence of cancer worry between both groups demonstrates equal patient confidence in treatment efficacy.
Interestingly, the results showed a negative relationship between sun protection behaviours and lesion size (larger lesions were associated with fewer protective behaviours), but only in the SE patient group. One patient in the ECT group had suffered for many years from Gorlin syndrome, also known as naevoid BCC syndrome, and despite having the largest lesion size, scored very highly on the sun protection behaviour checklist. Outliers such as this patient may account for the lack of significance in the ECT cohort. However, given that UV exposure, particularly prolonged intermittent exposure in adolescence, is a well-known risk factor for developing BCCs, 2, 6, 23 it is not surprising that there is a negative relationship between lesion size and sun protection behaviours. In addition, the Irish population in general has several other risk factors, including fair skin, red hair and light eye colour, 2, 6, 23 possibly further accounting for the association detected.
This study is the first of its kind to assess the use of the novel PROM, the FACE-Q Skin Cancer Module for ECT. The results from this study could be generalisable to the Irish population, and given the prevalence of BCCs in Ireland, this information could be very valuable to clinicians when they suggest treatment for BCCs. However, future studies are still required to fully corroborate the results found in this patient group. Another strength lies with the inclusion of two centres for patient recruitment to reduce selection bias.
The first and main limitation of this study is the sample size. Because of the small sample size of each cohort, definitive conclusions cannot be drawn from the data, as they may overestimate the associations detected; a larger sample size would increase the power of the study and strengthen the results. Secondly, there is a possibility of selection bias, as 100% of patients approached in the outpatients' department completed the questionnaire, while the response rate was notably lower for questionnaires issued by post. Thirdly, equal numbers of male and female patients in each treatment arm were not achieved, with more male than female patients recruited. As females are often more affected by changes in facial appearance, 24 the outcomes measuring such changes may underestimate the effect in the female population. Lastly, time since treatment was notably different between treatment groups and should be controlled for in future studies.
To definitively demonstrate equivalence between treatments or, potentially, the superiority of ECT, a large blinded randomised control trial should ideally be undertaken. This could allow for a wider availability of ECT for skin cancer patients, particularly those with significant co-morbidities or increased risk of scarring due to size, location or number of lesions.
Conclusion
This study, however, achieves its two main aims: it shows the benefit and worth of the FACE-Q Skin Cancer Module in assessing outcomes after the treatment of BCCs, while also demonstrating equivalence between ECT and SE regarding outcomes. This PROM shows merit in describing patient satisfaction after treatment by incorporating a score acknowledging the impact of scarring and facial appearance offset against cancer worry. Moreover, this is the first objective PROM that assesses the impact and efficacy of ECT as a treatment for BCCs. BCCs are successfully and durably treated by ECT and here, we show that this results in equivalent cancer worry. This demonstrates patient satisfaction with treatment in addition to improved satisfaction with scarring, suggesting a potential benefit of ECT in aesthetically sensitive locations.
Declaration of Competing Interest
None.
Existence of unstable stationary solutions for nonlinear stochastic differential equations with additive white noise
This paper is concerned with the existence of unstable stationary solutions for nonlinear stochastic differential equations (SDEs) with additive white noise. Assume that the nonlinear term f is monotone (or anti-monotone) and that the global Lipschitz constant of f is smaller than the positive real part of the principal eigenvalue of the competitive matrix A; then the random dynamical system (RDS) generated by the SDEs has an unstable F + -measurable random equilibrium, which produces a stationary solution for the nonlinear SDEs. Here, F + = σ{ω → W t (ω) : t ≥ 0} is the future σ-algebra. In addition, we show that the α-limit set of all pull-back trajectories starting at the initial value x(0) = x ∈ R n is a single point for all ω ∈ Ω, i.e., the unstable F + -measurable random equilibrium. Applications to stochastic neural network models are given.
1. Introduction. During the past decades, stochastic differential equations (SDEs) have been widely used to account for the integrated effects of interior interactions and environmental fluctuations. A fundamental question in the study of SDEs is to consider the existence and global stability of stationary solutions under minimal conditions.
Various mathematical methods exist for verifying the stability of SDEs, including Lyapunov functions [7,10,11] and random dynamical systems (RDSs) [1,2,8]. If we consider the stability of the zero solution for SDEs, the former approach may be more effective, see [9,12]. However, sometimes there are no trivial stationary solutions for SDEs. At the moment, the latter method can be used to investigate the long-term behaviour of SDEs. For example, let us consider the following scalar SDE dx t = µx t dt + ν dW t , where W t is a Wiener process in R and µ, ν are constants. This equation generates an affine RDS (θ, ψ) with the cocycle ψ(t, ω, x) = e µt x + ν ∫ 0 t e µ(t−τ) dW τ for all x ∈ R. It is easy to see that, in the case that µ < 0, the RDS (θ, ψ) possesses an exponentially stable F − -measurable random equilibrium u(ω) = ν ∫ −∞ 0 e −µs dW s (ω), where F − = σ{ω → W t (ω) : t ≤ 0} is the past σ-algebra. In the case that µ > 0, it admits an unstable F + -measurable random equilibrium v(ω) = −ν ∫ 0 ∞ e −µs dW s (ω), where F + = σ{ω → W t (ω) : t ≥ 0} is the future σ-algebra. Motivated by our recent works [5,6], this paper is devoted to the existence of unstable stationary solutions for nonlinear SDEs with additive white noise. To be specific, we will prove that under the condition that the nonlinear function f is monotone (or anti-monotone) and the global Lipschitz constant of f is moderately smaller than the positive real part of the principal eigenvalue of the competitive matrix A, the stochastic flow (θ, ψ) has an unstable F + -measurable random equilibrium, which yields a stationary solution for nonlinear SDEs. In addition, we conclude that the α-limit set of all pull-back trajectories starting from any initial value in R n is a single point for all ω ∈ Ω, i.e., the unstable F + -measurable random equilibrium.
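The pull-back behaviour of this scalar affine SDE can be checked numerically. The sketch below is an illustration with assumed parameters (µ = −1, ν = 0.5), not code from the paper: it integrates dx = µx dt + ν dW with Euler–Maruyama along one fixed noise path, and for µ < 0 the pull-back trajectories from two different initial values collapse onto the same random point, since their difference is exactly e^{µT}(x₁ − x₂).

```python
import numpy as np

def pullback(mu, nu, x0, dt, dW):
    """Euler-Maruyama for dx = mu*x dt + nu dW along a fixed noise path;
    approximates the pull-back value psi(T, theta_{-T} omega, x0)."""
    x = x0
    for inc in dW:
        x = x + mu * x * dt + nu * inc
    return x

rng = np.random.default_rng(0)
T, dt = 30.0, 1e-3
dW = rng.normal(0.0, np.sqrt(dt), int(T / dt))  # one fixed noise path omega

mu, nu = -1.0, 0.5                 # stable case mu < 0
a = pullback(mu, nu, 5.0, dt, dW)
b = pullback(mu, nu, -3.0, dt, dW)
print(abs(a - b))                  # difference decays like e^{mu*T}
```

Running the same experiment with µ > 0 blows the two trajectories apart, which is the numerical face of the instability discussed above.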
Our main result gives a criterion to guarantee the existence of unstable stationary solutions for nonlinear SDEs, which can be applied to stochastic neural network models; see Corollary 1 and Examples 5.1 and 5.2. Our problem, assumptions and definitions of RDSs are stated in Section 2. Some useful lemmas and the long-term behavior of solutions for SDEs are presented in Section 3. Proofs of the existence of unstable stationary solutions are given in Section 4. Finally, we apply our results to several stochastic models from neural networks in Section 5.
2. Preliminaries. In this section, we consider the following n-dimensional SDEs with the initial value x(0) = x ∈ R n : dx(t) = Ax(t)dt + f(x(t))dt + σ dW(t). (2) Moreover, A = (a ij ) n×n is an n × n-dimensional matrix, f : R n → R n and σ = (σ ij ) n×m is an n × m-dimensional matrix. Throughout this paper, we use the maximum norm |x| := max{|x i | : i = 1, . . . , n} and |A| := max{|a ij | : i, j = 1, . . . , n}, where x ∈ R n and A ∈ R n×n . In order to show our result, we will impose some conditions on A and f . (A1) A is competitive, i.e., a ij ≤ 0 for all i, j ∈ {1, . . . , n} and i ≠ j. In addition, we suppose that all real parts of eigenvalues of A are positive. That is, −A is cooperative and there exists a constant λ > 0 such that |Ψ(−t)| ≤ e −λt for all t ≥ 0. Here, Ψ(t) = exp(At) is the fundamental matrix of the following linear ordinary differential equations (ODEs): dx(t)/dt = Ax(t). (A2) f : R n → R n is bounded and satisfies the Lipschitz condition |f(x) − f(y)| ≤ L|x − y| for all x, y ∈ R n , where L > 0 is the Lipschitz constant. Moreover, we suppose that f is monotone, i.e., f(x) ≤ R n + f(y) for all x ≤ R n + y, or anti-monotone, i.e., f(y) ≤ R n + f(x) for all x ≤ R n + y. Here, x ≤ R n + y shows that y − x ∈ R n + , where R n + := {x ∈ R n : x i ≥ 0, i = 1, . . . , n}. The main thought in this paper is to consider the long-term behaviour of stochastic flows generated by (2) and prove that the α-limit set of pull-back trajectories emanating from the initial value x(0) = x ∈ R n is a single point for any ω ∈ Ω. For the convenience of the reader, we will recall some basic notations related to RDSs. For more details, we refer the reader to [1,2].
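Assumption (A1) is what makes Ψ(−t) = exp(−At) order preserving: since −A has nonnegative off-diagonal entries (it is cooperative), exp(−At) is entrywise nonnegative. A quick numerical check in Python, using a made-up 3 × 3 competitive matrix (illustrative values, not from the paper):

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via truncated Taylor series (adequate for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Competitive matrix: nonpositive off-diagonal entries; diagonally
# dominant with positive diagonal, so all eigenvalues have positive real part.
A = np.array([[ 2.0, -0.5, -0.3],
              [-0.4,  3.0, -0.2],
              [-0.1, -0.6,  2.5]])

t = 1.7
Psi = expm(-A * t)    # Psi(-t) = exp(-A t)
print(Psi.min())      # entrywise nonnegative
```

Entrywise nonnegativity of Ψ(−t) gives Ψ(−t)x ≥ Ψ(−t)y componentwise whenever x ≥ y, which is the order-preservation used repeatedly in Section 3.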
Definition 2.1. A metric dynamical system is a quadruple θ ≡ (Ω, F , P, {θ t , t ∈ R}), where (Ω, F , P) is a probability space and θ is a flow: θ 0 = id Ω and θ t+s = θ t ◦ θ s for all t, s ∈ R. In addition, we also assume that θ t P = P for all t ∈ R.
Let X be a separable complete metric space, i.e., a Polish space, which is equipped with the Borel σ-algebra B(X) generated by open sets of X.
Definition 2.2. An RDS with two-sided time R and the phase space X is a couple (θ, ψ) consisting of a metric dynamical system θ ≡ (Ω, F , P, {θ t , t ∈ R}) and a cocycle ψ over θ, i.e., a (B(R) ⊗ F ⊗ B(X), B(X))-measurable mapping By the standard theory of SDEs [10,14], it is easy to obtain the existence and uniqueness of solutions for (2). Let ψ(t, ω, x) = x(t, ω, x) be the unique solution of (2) with the initial value x(0) = x ∈ R n , which generates a two-sided RDS (θ, ψ) in R n , see [1,Chap. 2]. Here, θ is the Wiener shift operator defined by θ t ω(·) = ω(t + ·) − ω(t) for all t ∈ R, which is an ergodic metric dynamical system. Next, using the variation-of-constants formula [10, Theorem 3.1] and the backward Itô integral (see Arnold [1, p.97]), it follows that for all t ≥ 0, which together with the definition of θ implies that In the remainder of this section, motivated by the recent work [5], we need to give a key operator L, which is defined by for all ω ∈ Ω. Here, the random variable g : Ω −→ R n is tempered with respect to the measure preserving flow θ, see Chueshov [2, p.23].
Remark 1. By a similar argument as in [5], it is clear that the operator L is well defined, as are the pull-back trajectories starting at the initial point.

3. Some inequalities with respect to the stochastic flow ψ. In this section, we shall consider the dynamical behavior of the stochastic flow ψ and show some helpful lemmas to prove our main results. First, we start with a lemma for convenience.
Lemma 3.1. Assume that (x t ) t∈Λ is a net in a normed space X endowed with a solid, normal cone X + ⊆ X. Moreover, assume that the net converges to a single point p ∞ ∈ X, and that x − t := inf{x τ : τ ≥ t} and x + t := sup{x τ : τ ≥ t} exist for all t ∈ Λ. Then the nets (x − t ) t∈Λ and (x + t ) t∈Λ also converge to the point p ∞ .
where x ∈ R n and ω ∈ Ω. Here, the supremum (sup) and infimum (inf) are the least upper bound and the greatest lower bound in R n , respectively. Then ξ f t and η f t are two F + -measurable random variables for all t ≥ 0, where F + = σ{ω → W t (ω) : t ≥ 0} is the future σ-algebra.
Proof. By definitions of the metric dynamical system θ and the future σ-algebra F + , it is obvious that the random variable ψ(−τ, θ τ ·, x) is F + -measurable for all τ ≥ 0 and x ∈ R n . The rest of the proof can be followed by the same argument in [5, Proposition 3.2], we omit it here. The proof is complete.
for all ω ∈ Ω. Proof. For convenience, we only show that (10) is correct; the other inequalities in (9) can be proved analogously. By (A2), Remark 1 and Lemma 3.2, we can easily see that lim θ ψ, lim θ ψ, lim θ f (ψ) and lim θ f (ψ) are well defined, and they are all F + -measurable random variables. This together with Proposition 3.3 in [6] and Fubini's theorem gives that the same conclusion is true for [L(lim θ f (ψ))] and [L(lim θ f (ψ))]. In addition, using Lebesgue's dominated convergence theorem, it is evident that Consequently, in order to prove the inequality (10), it only remains to verify for all t ≥ 0 and ω ∈ Ω. To see this, we first observe that f is bounded, so there exists a positive vector b = (b 1 , . . . , b n ) T ∈ int R n + such that f (x) ∈ [−b, b] for all x ∈ R n , where [−b, b] is a conic interval. Furthermore, by the definition of the operator L, for all t ≥ 0 and ω ∈ Ω, we have that where the inequality (3.4) is due to the fact that A is a competitive matrix, which yields that Ψ(−t)x = exp(−At)x ≥ R n + Ψ(−t)y for all x ≥ R n + y and t ≥ 0, i.e., Ψ(−t) is order preserving. The proof is complete.
Remark 2. In this lemma, we do not assume that f is positive, i.e., f : R n → R n + , which is weaker than the assumption in [5].

Lemma 3.4. Assume that (A1) and (A2) hold. It follows that (i) If f is monotone, we have that for all ω ∈ Ω, (ii) If f is anti-monotone, we have that for all ω ∈ Ω, Proof. The proof is similar in spirit to that of Lemma 3.4 in [5]; we omit it here. The proof is complete.
(i) If f is anti-monotone, then for all t ≥ 0, ω ∈ Ω and k ∈ N, Proof. By Lemma 3.2, it is immediate that for all t ≥ 0 and ω ∈ Ω. In addition, A is a competitive matrix, which together with (8) shows that L is anti-monotone with respect to the tempered random variable g. Therefore, we can easily see that (ω) This together with Lemma 3.3 yields that (ω). The rest of the proof can be followed in much the same way as Lemma 3.5 in our previous paper [5], so we omit it here. The proof is complete. Lemma 3.6. Assume that nL/λ < 1 and that (A1) and (A2) hold. In addition, we define is a conic interval. Moreover, we consider a metric on M F+ (Ω; [−b, b]) as follows: Proof. First, it is easy to check that (M F+ , d) is a complete metric space. In order to obtain the result, it is necessary to verify the well-posedness of the operator L f . For any given g ∈ M F+ , using Proposition 3.3 in [6], we see at once that g(θ τ ω) is (B(R + ) ⊗ F + , B(R n ))-measurable. Combining this and Fubini's theorem, we immediately have that L f is well defined. Now, we proceed to show that the operator L f is a contraction. Given any g 1 , g 2 ∈ M F+ , note that |Ψx| ≤ n|Ψ| · |x| for all x ∈ R n and Ψ ∈ R n×n ; it follows that The proof is complete.
4. Main results. In this section, we will state our main results as the following theorem.
Proof. By Lemma 3.5 and Lemma 3.6, analysis similar to that in the proof of Theorem 4.2 in [5] shows that (17) holds, which together with the definition of the cocycle ψ yields that [L(g)](ω) is an unstable F + -measurable random equilibrium. The proof is complete.
5. Applications to stochastic neural networks. In this section, we will present some applications of Theorem 4.1. First, we consider the following stochastic model with additive white noise, which can describe the dynamical behavior of a neural network with n neurons under stochastic noise perturbations. Here, the matrix T = (T ij ) n×n shows the connection strengths between neurons, the transfer function f ij is assumed to be sigmoid and σ i W i is the turbulent noise in the external environment. If we ignore the noise in (18), the resulting deterministic model has been investigated by Hopfield [3,4]. Next, defining g ij = T ij · f ij for all i, j ∈ {1, 2, . . . , n}, we assume that g ij satisfies the following condition: (B1) g ij : R → R is globally Lipschitz continuous with Lipschitz constant L ij ≥ 0, and monotone (or anti-monotone). In addition, there exist some positive constants b ij such that |g ij (x)| ≤ b ij for all x ∈ R and i, j ∈ {1, 2, . . . , n}.
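As a rough numerical companion to this class of models, the sketch below integrates a three-neuron stochastic Hopfield-type network with Euler–Maruyama. The drift form, tanh transfer functions, and all parameter values are illustrative assumptions and are not taken from the paper's examples:

```python
import numpy as np

# Illustrative stochastic Hopfield-type network (all values made up):
#   dx_i = (-a_i x_i + sum_j T_ij f(x_j)) dt + sigma_i dW_i
rng = np.random.default_rng(1)
n = 3
a = np.array([2.0, 3.0, 2.5])            # per-neuron decay rates
T = 0.3 * rng.standard_normal((n, n))    # connection strengths
sigma = 0.1 * np.ones(n)                 # noise intensities
f = np.tanh                              # sigmoid, bounded, Lipschitz

dt, steps = 1e-3, 20000
x = rng.standard_normal(n)               # random initial state
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n)
    x = x + (-a * x + T @ f(x)) * dt + sigma * dW
print(x)
```

With the strong decay and weak coupling chosen here, the state stays in a bounded region; the paper's interest is in the opposite regime, where an unstable random equilibrium organizes the backward-in-time dynamics.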
From now on, for simplicity of notation, we only discuss the case that n = 3.
Example 5.1. First, we consider the following stochastic model with the initial value x = (x 1 , x 2 , x 3 ) T ∈ R 3 . The proofs for the remaining components of Ψ(−t) are similar. Since the matrix A is competitive, it follows that Ψ ij (−t) ≥ 0 for all t ≥ 0 and i, j = 1, 2, 3; see Proposition 3.1.1 in [15]. Hence, we only need to show that
Fully digital problem-based learning for undergraduate medical students during the COVID-19 period: Practical considerations
Digital problem-based learning (PBL) was originally introduced as a means to improve student engagement and increase flexibility. However, its use became mandatory during the coronavirus disease 2019 (COVID-19) period, accelerating changes in medical education. Few studies have elaborated on the implementation details of digital PBL curricula. Technical guidance can be an important but under-recognized prerequisite of a successful digital PBL session. In National Taiwan University College of Medicine, we established a digital PBL curriculum and previously validated a confidence questionnaire for surveying undergraduate students receiving digital PBL sessions. In this opinion piece, we gleaned multiple procedural details from our experiences based on students'/tutors' feedback, which we summarized in 5″W″ recommendations (Who), timing/duration (When), location (Where), software/hardware/topics (What), and evaluation aspects (Why). Suggestions on how to optimally prepare for a digital PBL session are also provided. We believe that these tips can further facilitate the wide adoption of digital PBL.
Effective learning for health professionals is an important task in this era of information explosion. Technology-assisted education provides recipients with better subjective efficacy compared to traditional pedagogy. Students prefer this education strategy due to its flexibility, high accessibility, and ease of tool usage. 1 A literature review summarized that the context and themes involved in internet-based learning and the learning theories should be tailored individually. 2 Among the spectrum of internet-based curricula, digital problem-based learning (PBL) has gained popularity recently. The traditional learning process among medical students hinges on one-way knowledge transmission, but interactive modules such as PBL are better suited to enhance engagement and potentially increase learning effectiveness. 3 Three core components of PBL have been proposed: an initial problem-presentation and analytic phase, a second self-oriented learning phase, and a final result synthesis/reporting phase. 4 However, the coronavirus disease 2019 (COVID-19) pandemic, accompanied by social distancing and city lockdowns, cripples medical students' opportunities to receive in-person lectures and to participate in face-to-face meetings/discussions. In turn, COVID-19 creates unprecedented opportunities for innovation in medical education. Digital PBL, especially fully online formats, has never been more important than during the COVID-19 period. Nonetheless, questions emerge regarding the efficacy and effectiveness of digital PBL during this pandemic, and there are concerns that online courses are in essence "emergency remote teaching" instead of "online teaching". 5 These thoughts reflect the heterogeneity in course quality, which we believe stems from the meager evidence focusing on the practical details of digital PBL curricula. Technical guidance is therefore an under-recognized issue for a successful digital PBL. 6

In response to the COVID-19-related education crisis, the Center of Faculty Development and Curriculum Integration of National Taiwan University College of Medicine implemented a pilot digital PBL project involving undergraduate medical students during 2020–2021 (Supplementary Table). 7 During course design, we involved multiple stakeholder groups, including both junior and senior facilitators, course designers, medical school administrative staff, and experts from the Graduate Institute of Medical Education and Bioethics for course digitalization. The materials and themes were the same as those of face-to-face sessions, but the delivery platform was digitalized. We encountered challenges when implementing the pilot digitalized PBL curriculum in two categories: potentially suboptimal learning efficacy among medical students, and technical issues. For the former category, we assessed medical students' confidence in satisfactorily completing the digital PBL curriculum and compared the results with those for completing traditional face-to-face PBL. 7 In total, 110 medical and pharmacy students voluntarily participated, and a recalibrated confidence questionnaire was administered, followed by exploratory factor analyses and dimension reconstruction. We found that a single session of digital PBL significantly attenuated medical students' confidence in completing the curriculum, while repeated practice up to 3 consecutive sessions might partially restore it. 7 For the latter category, we synthesized technical issues and made recommendations according to a 5″W″ categorization (Supplementary Figure 1), based on feedback from medical students and tutors/facilitators, in Table 1.
Who? In our experience, 7 tutors and students need to be well prepared prior to digital PBL sessions (Table 1). For tutors, we recommend pre-course workshops aiming to elucidate the values, importance and details of digital PBL and to enhance tutors' familiarity and satisfaction. 8 Junior tutors can receive briefing on digital etiquette, and live curricula observation, simulation or didactic lectures prior to sessions are helpful. Interestingly, digital PBL alters the atmosphere of traditional curricula; we implicitly find that introverted students may be more active in digitalized settings, while some become camera-shy. Atmospheric changes influence the learning effectiveness of a digital PBL. For students, we recommend 3 core elements to be borne in mind: stewardship, sequence, and presentation style adaptation (Table 1). Stewardship means streamlining sessions to avoid frequent interruptions. Group members are suggested to designate chairperson(s) or auditor(s) to ensure that each session proceeds smoothly. It is important to address the presentation sequence for debatable points with a pre-registered or real-time updated agenda to place them in a queue. The presentation style also needs to be modified using pre-course slide-sharing with in-course verbal explanation, electronic chalkboards, even social media forums, etc. Digital etiquette is another important element for a successful digital PBL, in which participants assume digital citizenship following two web manner principles: technical details and communication guidance. 9 The former refers to asking participants to turn on cameras/speakers for framing at session beginning, since the transmission of visual/auditory messages may be compromised. Participants should avoid covering their face and look at the camera rather than the images on the device screen. We notice that periodic check-ups with the audience regarding the clarity of words/images are helpful.
Slower movements and wider gestures increase message clarity. The latter refers to content and message organization. Presenters are expected to speak more slowly, with terminology explained in detail. A brief but concise summary at the end of the presentation places the audience in a better position to absorb the content. Both digital PBL tutors and students are expected to be more patient than they would be in face-to-face PBL sessions. Some students are "digital natives" (better adapted to technology-assisted communication), while others are "physical natives" (used to face-to-face interactions). 10 We can enhance students' engagement if communication routes, schedule flexibility, device accessibility and ease of usage, etc. are taken into consideration. In our experience, students of Asian origin may be self-effacing and tend to wait for others to express themselves. Autonomy in learning behavior assumes importance during digital PBL curricula. We encourage students' proactiveness and praise them for such an attitude.
When? Timing and duration should be carefully adjudicated for digital PBL (Table 1). We think that maintaining high concentration on visual/verbal information assumes importance. In our experience, students reported intermittent inattentiveness owing to the monotonous background, while distracting exposures are not uncommon, including environmental noise, side-chat, or connection interruption. Studies from the United Kingdom identified that one-fourth of medical students complained of family distraction during digital curricula. 11 We recommend that the duration of a digital PBL session ideally range from 1 to 2 h. A shorter duration is not recommended, as time-consuming technical issues frequently emerge during online sessions, and sessions shorter than 30 min preclude effective interactions. In contrast, prolonged digital PBL sessions impair students' capacity to concentrate, since sessions longer than 2.5 h decrease students' satisfaction, leading to negative perceptions. 12 Finally, we believe that mid-session breaks similar to those in didactic lectures may sometimes be helpful for restoring students' concentration.
Where? A good location favorably influences learning effectiveness in digital PBL. Prior reports indicated that one-tenth of medical students attending digital courses had difficulty searching for suitable places. 11 Slow internet connections frequently disrupt session flow and can worsen the atmosphere when information-dense topics are involved. We advise students and tutors to secure places with adequate internet connection speed and bandwidth prior to commencement. The presence of a noise-proof room/space in which individuals can talk and hear others' voices is, in our experience, a prerequisite. A good practice can be putting a sign on one's door or behind one's chair stating phrases such as "Do not disturb" or "In lecture". 5

What? We think 3 types of apparatuses need to be prepared beforehand for a digital PBL session. First, the connecting software counts. Virtual meeting/communication software, such as Google Hangouts Meet, Skype, Cisco WebEx, Cyberlink U-meeting, and Zoom, each carries advantages/disadvantages. We recommend that participants run the applications in advance and acquaint themselves with the procedures/functions. Adjunct communication routes incrementally benefit students during digital PBL. Digital natives often have alternative routes for real-time opinion exchange, and students may interact privately with others to avoid disturbing the discussion. Electronic devices for joining digital PBL are also important. We recommend disinfecting device surfaces during the COVID-19 period but recommend against sharing devices, to permit social distancing. The order and topics of digital PBL sessions require careful adjudication; those of the initial 2–3 sessions should be shorter, containing fewer details but allowing greater room for discussion. We can imbue the topics across digital PBL curricula with different levels of uncertainty to keep students focused on the discussion. In addition, we should design topics to encourage teamwork and be of a supportive nature. 13

Why? Evaluating the best format across the entire landscape of digital PBL is difficult. We can extrapolate experiences from other forms of online learning, 14 with 3 domains identified: organizational capacity, learning effectiveness and assessment, and human resources. For organizational capacity, there should be an accountable organization/course leader that is supportive and willing to lead the design process, allocate resources, and implement relevant regulations. 14 For learning effectiveness, components including digital course design, delivery, and student evaluation need to be inspected. 14 For human resources, stakeholders including faculty, students, and administrative staff should all be considered. 14 We next provide period-dependent recommendations for implementing digital PBL, divided according to timing: before, during, and after digital PBL (Table 2). Of the 5″W″ categories, students mostly reported difficulties related to the "Where" and "What" categories; that is, their comments frequently involved how to secure places with an adequate internet connection, the selection of a suitable conference software, the style of presentation, etc. Feedback from the facilitators, on the contrary, spanned "Who", "When", "Where", "What" and "Why". Prior to the first session, we can ask each group to assign a moderator to lead, organize, and streamline the preparation, avoiding blank periods, awkward silences, speaker jamming, or crowded messages. Studies indicate that moderators can guide others to perform better if their ties to the moderators are strong. 15 We recommend allocating questions to participants on a mutual-agreement basis to increase course smoothness. The moderator needs to be careful in leading the course and monitoring the atmosphere of the discussion. We suggest introducing the details of digital etiquette prior to digital PBL commencement, with all utilities and supporting measures tested for functional integrity.
During sessions, we can consider offering multiple communication strategies and encouraging interactions. Tutors can ask participants to serve as questioners. Feedback from tutors should be instantaneous and should not lose track of messages. All participants should turn off or silence other applications, such as email or recreational ones. We identify several options to ensure students' concentration (Table 2), including role playing, real-time polling, and monitoring the timeliness of student feedback. After completion, one or more of the participants should make a succinct summary of the index case and respond to remaining questions. A good practice would be to devote additional time to the management of these questions, to further enhance post-course learning.
In conclusion, digital PBL has become mandatory during the COVID-19 period. There were pilot experiences regarding the performance and learning theory of digital PBL prior to the pandemic, but they may not completely fit the requirements of the post-COVID-19 era. Based on our experiences, we believe that the process-related details for optimizing digital PBL curricula offer important guidance missing from the current literature. A summary of our 5″W″ approach using a strategy activity map is shown in Supplementary Figure 2. Sharing our experiences is expected to further enrich the knowledge base of digital PBL.
Funding
None declared.
Declaration of competing interest
The authors have no conflicts of interest relevant to this article.
Post-treatment curcumin reduced ischemia–reperfusion-induced pulmonary injury via the Notch2/Hes-1 pathway
Objective To investigate the influence of curcumin on the Notch2/Hes-1 pathway after pulmonary injury induction via limb ischemia–reperfusion (I/R). Methods Adult male Sprague–Dawley rats were randomly divided into four groups (n = 30 each): sham, I/R, curcumin post-treatment (I/R+Cur), and inhibitor (I/R+DAPT). Hind-limb ischemia was induced for 4 hours, followed by reperfusion for 4 hours. After ischemia, curcumin (200 mg/kg) or DAPT (0.5 µm) was injected intraperitoneally in the I/R+Cur or I/R+DAPT group, respectively. PaO2 was examined after 4 hours of reperfusion. Pathological changes in the lung and the apoptotic index (AI) were examined. Lung malondialdehyde (MDA), tumor necrosis factor (TNF)-α, and interleukin (IL)-1β levels, the wet/dry (W/D) ratio, semi-quantitative score of lung injury (SSLI), and Notch2 protein and Hes-1 mRNA expression were also examined. Results In the I/R group, inflammatory changes were observed, AI increased, and MDA, SSLI, W/D, TNF-α, IL-1β, Notch2, and Hes1-mRNA expression increased, while PaO2 decreased compared with the Sham group. Pathological changes in the I/R+Cur group were reversed. All indexes in the I/R+DAPT and I/R+Cur group were similar. Conclusion Curcumin post-treatment reduced I/R-induced lung injury in rats. Its mechanism may be related to the inhibition of Notch2/Hes-1 signaling pathway and the release of inflammatory factors.
Introduction
In addition to limb injury, ischemia/reperfusion (I/R) can also cause functional and organic pathological damage to distant organs, especially the blood-rich lung, and it can cause conditions such as acute respiratory distress syndrome (ARDS). 1 Studies have shown that limb I/R is a complex process that involves multiple signaling pathways and inflammatory factors. 2 Curcumin is a phenolic pigment extracted from the rhizome of Curcuma longa, and it is a main active ingredient with strong anti-inflammatory and antioxidant effects. 2,3 The Notch signaling pathway exists widely in vertebrates and invertebrates, and it is highly conserved in evolution. Both humans and rats have four Notch receptors. Previous studies have shown that curcumin pre-treatment can alleviate lung injury induced by limb I/R through anti-inflammatory and antioxidative effects. 4 Other recent studies from our group indicated that curcumin post-conditioning could be an effective method against renal injury induced by limb I/R in rats, acting via the Notch2/Hes-1 pathway. 5 However, it is unclear whether curcumin post-treatment has a protective effect on lung injury induced by limb I/R. We hypothesized that post-conditioning with curcumin may also play a protective role in lung injury induced by limb I/R in rats through the Notch2/Hes-1 signaling pathway. These protective effects on rat lungs may be studied in future clinical trials. This study aims to further explore the effect of curcumin post-treatment on lung injury induced by limb I/R in rats.
Major materials
The following reagents and equipment were purchased from the respective suppliers: Curcumin (batch number: 86M1611V,
Animals and experimental groups
In these experiments, 120 adult male SD rats were used (280-320 g, 6 to 8 months old), and they were randomly divided into the following four groups (n = 30): sham, I/R, curcumin post-conditioning (I/R+Cur), and inhibitor (I/R+DAPT) groups. The protocol was approved by the Ethics Committee of Central Hospital that is affiliated with Shenyang Medical College (approval No: SYXK(liao)201900007).
Ischemia-reperfusion model
The model of lung injury induced by limb I/R in rats was established based on a previously published study. 6 The rats were fasted for 12 hours with free access to water before the experiment was performed. Under 3% sodium pentobarbital (40 mg/kg intraperitoneally) anesthesia, access to the right external jugular vein was established. The skin was cut in the femoral triangle of both hind limbs, and the femoral artery and vein were separated. The femoral artery was clamped near the inguinal ligament using a non-invasive microartery clamp. After ischemia of both hind limbs for 4 hours, the non-invasive microartery clamp was released and reperfusion proceeded for 4 hours. Blood flow was monitored using an ES-1000 SPM ultrasonic blood flow meter (Hayashi Denki, Osaka, Japan). Absence of blood flow was taken as the criterion for successful ischemia, and restoration of blood flow as the criterion for successful reperfusion. During the experiment, saline was infused intravenously (1.5 mL/kg/hour).
The femoral artery and vein were separated without clamping in the Sham group. The limb I/R model was established in the I/R, I/R+Cur, and I/R+DAPT groups. In the I/R+Cur group, curcumin was injected (200 mg/kg, dissolved in 2 mL saline) immediately after ischemia for 4 hours. In the I/R+DAPT group, DAPT (a γ-secretase inhibitor that has an inhibitory effect on the Notch2/Hes-1 signaling pathway) was injected (0.5 µM, dissolved in 2 mL saline) immediately after ischemia for 4 hours. The Sham and I/R groups were administered the same amount of saline.
At 4 hours after reperfusion, 3 mL of arterial blood was collected from the carotid artery. An arterial blood gas analysis was immediately performed using a Gem Premier 3000 blood gas analyzer (Instrumentation Laboratory Co.), and PaO2 was recorded. The rats were then sacrificed by exsanguination. A 1-cm³ piece of the upper lobe of the right lung was taken to determine the wet/dry (W/D) ratio. The residual blood was washed out with saline at 4 °C. The wet weight (W) was measured after blotting with filter paper, and the lung tissue was then dried at 80 °C for 48 hours to obtain the dry weight (D); the W/D ratio of the lung was then calculated. The middle and lower lobes of the right lung (1 cm³) were used to examine Notch2 protein and Hes-1 mRNA expression in the 10% lung tissue homogenate. The upper and middle poles of the left lung (1 cm³) were taken to examine tumor necrosis factor (TNF)-α, interleukin (IL)-1β, and malondialdehyde (MDA) in the 10% tissue homogenate. The lower pole of the left lung was fixed in 10% neutral formalin, embedded in paraffin, and sectioned for hematoxylin and eosin (HE) and Hoechst 33258 staining. Pathological changes in the lung tissue were observed using a BX-41 microscope (Olympus Corporation). The Smith scoring method 6 was used to grade pulmonary edema, alveolar and interstitial inflammation, alveolar and interstitial hemorrhage, atelectasis, and hyaline membrane formation on a 0 to 4-point scale, as follows: 0, no injury; 1, lesion range less than 25%; 2, lesion range 25% to 50%; 3, lesion range 50% to 75%; and 4, lesion range greater than 75%. Ten high-magnification fields of lung tissue slices were observed randomly in each rat. The semi-quantitative score of lung injury (SSLI) was the sum of the above scores.
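The W/D ratio and the Smith-based SSLI described above amount to simple arithmetic; a minimal Python sketch follows (function names and example values are illustrative, not taken from the study):

```python
def wet_dry_ratio(wet_mg, dry_mg):
    """Wet/dry weight ratio of a lung specimen; higher values suggest edema."""
    if dry_mg <= 0:
        raise ValueError("dry weight must be positive")
    return wet_mg / dry_mg


def smith_score(lesion_fraction):
    """Map the lesion extent (0..1) in one field to the 0-4 Smith scale."""
    if not 0.0 <= lesion_fraction <= 1.0:
        raise ValueError("lesion fraction must lie within [0, 1]")
    if lesion_fraction == 0.0:
        return 0
    if lesion_fraction < 0.25:
        return 1
    if lesion_fraction < 0.50:
        return 2
    if lesion_fraction < 0.75:
        return 3
    return 4


def ssli(field_lesion_fractions):
    """Semi-quantitative score of lung injury: sum of per-field Smith scores."""
    return sum(smith_score(f) for f in field_lesion_fractions)
```

In practice one such sum would be computed over the ten high-magnification fields per rat, per scoring criterion.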
Notch2 protein expression
The western blot method was used. Total protein was extracted from 10% lung homogenate by centrifugation and quantified. After SDS polyacrylamide gel electrophoresis, transfer to a membrane, and blocking, rabbit anti-rat Notch2 polyclonal antibody and rabbit anti-rat β-actin polyclonal antibody were added and incubated at 4 °C overnight. An alkaline phosphatase-labeled secondary antibody was then added and incubated at room temperature for 2 hours. Substrate was added for color development, and the blot was scanned. The Scion Image analysis system was used to analyze Notch2 protein expression. The gray value ratio of the target product to β-actin was used to reflect Notch2 protein expression.
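The densitometric read-out above (gray value of the target band normalized to the β-actin loading control) can be sketched as follows; the function names and numbers are hypothetical:

```python
def relative_expression(target_gray, actin_gray):
    """Band gray value of the target (e.g. Notch2) normalized to β-actin."""
    if actin_gray <= 0:
        raise ValueError("loading-control gray value must be positive")
    return target_gray / actin_gray


def fold_change(treated_ratio, control_ratio):
    """Fold change of a normalized ratio relative to the control group."""
    if control_ratio <= 0:
        raise ValueError("control ratio must be positive")
    return treated_ratio / control_ratio
```

The same normalization logic applies to the Hes-1 RT-PCR gel bands described in the next section.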
Hes-1 mRNA expression
Reverse transcription PCR (RT-PCR) was used. Total RNA was extracted from 10% lung tissue homogenate using the TRIzol method, and reverse transcription was used to synthesize the cDNA. DNA amplification was then performed. The primer sequences were as follows: Hes-1 upstream primer 5′-AGAGAGGCGGCTCCGAACGG-3′ and downstream primer 5′-TTGGATGGTGCAGTGGATTCG-3′; and β-actin upstream primer 5′-GCGAAGAGAGTAGG-3′ and downstream primer 5′-GGACAAGAGGAGC-3′. The Hes-1 reaction conditions were as follows: pre-denaturation at 94 °C for 2 minutes; 35 cycles of 95 °C for 45 seconds, 57 °C for 45 seconds, and 72 °C for 60 seconds; followed by a final extension at 72 °C for 5 minutes. The β-actin reaction conditions were as follows: pre-denaturation at 94 °C for 2 minutes; 30 cycles of 94 °C for 40 seconds, 58 °C for 40 seconds, and 72 °C for 60 seconds; followed by a final extension for 5 minutes. The products were analyzed using 2% agarose gel electrophoresis, ethidium bromide staining, and a gel imager. Hes-1 mRNA expression was reflected by the gray value ratio of the target product to β-actin using the Scion Image analysis system.
TNF-α and IL-1β levels
The 10% lung homogenate was analyzed using ELISA. The TNF-α standard was serially diluted and then added into the detection wells (100 µL/well). The supernatant was collected and then added into the wells (100 µL/well). The plates were sealed and then incubated for 2 hours at 37 °C in an incubator. The wells were then washed and biotinylated antibody working solution (100 µL/well) was added. The plates were sealed and incubated at room temperature for 1 hour, followed by washing. HRP solution was added (100 µL/well) and incubated at room temperature for 20 minutes. Each well was then washed and color reagent was added (100 µL/well), followed by incubation for 20 minutes in the dark. Termination solution (50 µL/well) was then added with gentle shaking, and the optical density (OD) value was measured at 450 nm. The OD value of a blank sample was subtracted, and the average value was taken as the measurement value. The standard curve was generated using CurveExpert software, and the corresponding OD value was input to calculate the TNF-α concentration. IL-1β was detected similarly.
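The standard-curve step can be illustrated with a simplified linear least-squares fit. The study used CurveExpert, which may have applied a nonlinear model, so this is only a sketch, and the standard concentrations and OD values below are made up:

```python
def fit_standard_curve(concentrations, ods):
    """Least-squares line OD = slope * conc + intercept through the standards."""
    n = len(concentrations)
    mean_x = sum(concentrations) / n
    mean_y = sum(ods) / n
    sxx = sum((x - mean_x) ** 2 for x in concentrations)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(concentrations, ods))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept


def od_to_concentration(od, slope, intercept):
    """Invert the fitted line to read a sample concentration off the curve."""
    return (od - intercept) / slope
```

A blank-corrected sample OD would be passed through `od_to_concentration` to obtain the cytokine concentration.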
Detection of pulmonary epithelial cell apoptosis
The specimen was routinely sectioned after embedding and washed with phosphate-buffered saline (PBS). The specimen was immersed in Hoechst 33258 staining solution (0.5 mL) for 5 minutes and then washed with PBS solution. The sample was then placed onto a slide, and anti-quenching mounting solution was dropped onto it. The slide was coverslipped and observed under 400× magnification using a fluorescence microscope. Microscopically, the nuclei of normal cells were round and dark blue, whereas the chromatin of apoptotic cells was condensed, the nuclei were densely stained, and the cells appeared bright white. Five visual fields were randomly selected in each slice, and 100 cells were counted in each visual field. The apoptotic index (AI) was expressed as the proportion of apoptotic cells per 100 cells in the lung.
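The AI computation described above (apoptotic cells per 100 counted cells, averaged over five fields per slice) can be sketched as:

```python
def apoptotic_index(apoptotic_counts, cells_per_field=100):
    """AI (%): apoptotic cells per 100 counted cells, averaged over fields."""
    if not apoptotic_counts:
        raise ValueError("at least one field is required")
    per_field = [100.0 * count / cells_per_field for count in apoptotic_counts]
    return sum(per_field) / len(per_field)
```

With 100 cells counted per field, each per-field value is simply the apoptotic count, so the AI reduces to the mean count across the five fields.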
Statistical analysis
SPSS 13.0 software (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Normally distributed data were presented as the mean ± standard deviation (x̄ ± s). A one-way analysis of variance (ANOVA) and the least significant difference (LSD) post-hoc test were used to compare between the groups. Additionally, p < 0.05 was considered to be statistically significant.
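The one-way ANOVA with LSD post-hoc comparisons can be reproduced from first principles; this sketch uses toy group data, not the study's measurements:

```python
import math


def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within, ms_within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    ms_within = ss_within / df_within
    f_stat = (ss_between / df_between) / ms_within
    return f_stat, df_between, df_within, ms_within


def lsd_t(mean_i, mean_j, n_i, n_j, ms_within):
    """LSD post-hoc t statistic for one pair, using the pooled within-group MSE."""
    se = math.sqrt(ms_within * (1.0 / n_i + 1.0 / n_j))
    return abs(mean_i - mean_j) / se
```

The LSD t statistic would then be compared against the t distribution with the within-group degrees of freedom at α = 0.05.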
HE staining
Compared with the Sham group, in the I/R group the number of inflammatory cells in the visual field increased in the samples that were stained with HE, capillaries were congested and dilated, and the alveolar septum and lung interstitium were thickened (p < 0.05). Inflammatory infiltration into the pulmonary interstitium and some localized atelectasis were also observed in the I/R group. Compared with the I/R group, HE staining in the I/R+Cur group showed no localized atelectasis, and only a small amount of inflammatory infiltration in pulmonary alveoli and interstitium. The changes in the I/R+Cur and I/R+DAPT groups were similar compared with the I/R group, as shown in Figure 1.
W/D, SSLI, and PaO2
Compared with the Sham group, W/D and SSLI increased (p < 0.05) and PaO2 decreased (p < 0.05) in the I/R group. Compared with the I/R group, W/D and SSLI decreased (p < 0.05) and PaO2 increased significantly (p < 0.05) in the I/R+Cur group. There was no statistically significant difference between the I/R+Cur and I/R+DAPT groups, as shown in Table 1.
TNF-α, IL-1β, AI, and MDA
Compared with the Sham group, the TNF-α, IL-1β, and MDA levels and the AI increased in the I/R group (p < 0.05). Compared with the I/R group, all of these factors decreased in the I/R+Cur group (p < 0.05). There was no statistically significant difference between the I/R+Cur and I/R+DAPT groups, as shown in Table 2 and Figure 2.
Notch2 protein and Hes-1 mRNA expression
Compared with the Sham group, Notch2 protein expression and Hes-1 mRNA expression increased in the I/R group (p < 0.05). Compared with the I/R group, Notch2 protein expression and Hes-1 mRNA expression decreased in the I/R+Cur group (p < 0.05). There was no statistically significant difference between these expression levels in the I/R+Cur and the I/R+DAPT groups, as shown in Table 3 and Figures 3 and 4.
Discussion
As the main manifestation of the pulmonary gas-exchange function, the partial pressure of oxygen is a sensitive index for evaluating the degree of lung injury and any protective effect. Compared with the Sham group, the I/R group showed inflammatory cell infiltration, alveolar septum thickening, localized atelectasis, and other obvious inflammatory changes in the sections. Additionally, the SSLI and the W/D ratio were significantly increased, and arterial blood gas analysis indicated that PaO2 was decreased in the I/R group compared with the Sham group. These results showed that the model of lung injury induced by limb I/R was successfully established in the rats.
Curcumin is a chemical constituent that is extracted from the rhizomes of some plants, such as the Zingiberaceae family and the Araceae family. 7 Curcumin is a phenolic compound that is the basic pigment of the Curcuma longa plant, which has anti-inflammatory, antioxidant, and anti-carcinogenic effects. 8 Studies have shown that curcumin has a wide range of pharmacological activities such as reducing oxidative stress, inhibiting the release of inflammatory cytokines, and anti-apoptosis. 9 Among them, the anti-inflammatory and anti-oxidative effects of curcumin have been widely recognized by researchers worldwide.
Notch genes encode a highly conserved class of cell surface receptors that regulate the development of many biological cells. Notch signaling affects many processes of normal cell morphogenesis, including cell apoptosis, cell proliferation, and cell boundary formation. The Notch signaling pathway exists widely in vertebrates and invertebrates and it is highly conserved in evolution. Notch, a transmembrane protein that was first found in flies, is ideally suited to precisely regulate cell-to-cell communication during development of complex tissues such as the lung. 10 Previous studies have confirmed that curcumin preconditioning has a protective effect against lung injury that is induced by limb I/R in rats, but the effect of administering curcumin post-treatment on this lung injury in rats is still uncertain. Our previous studies indicated curcumin post-conditioning could play a protective role in the lung injuries induced by limb I/R via inhibiting the TLR4/NF-κB p65 pathway. 11 Our previous research results also showed similar protective effects in the rat kidney, as assessed using the matrix metalloproteinase (MMP)-9/tissue inhibitors of metalloproteinase (TIMP)-1 ratio during the process of limb I/R, 12 but the protective effects in the rat lung via the Notch2/Hes-1 signal pathway were not reported. In our experiment, inflammatory cell infiltration, alveolar septum thickening, alveolar interstitial congestion and edema, and occasional localized atelectasis were observed in the lung HE pathology section in the I/R group.
Note (Table 1): compared with the Sham group, a p < 0.05; compared with the I/R group, b p < 0.05; compared with the I/R+Cur group, c p > 0.05. I/R, ischemia-reperfusion; Cur, curcumin; DAPT, γ-secretase inhibitor that has an inhibitory effect on the Notch2/Hes-1 signaling pathway; W/D, wet weight-to-dry weight ratio; SSLI, semi-quantitative score of lung injury; PaO2, partial pressure of oxygen in arterial blood.
Further studies showed that Notch2 protein and Hes-1 mRNA expression increased in the I/R group. Inflammatory cytokine release was detected by ELISA. Studies have shown that most organs may be protected from the damaging effects of reactive oxygen species by enzymatic and non-enzymatic antioxidant defense mechanisms. [13][14][15] MDA is a good indicator of free radical activity and the increasing presence of lipid peroxidation. 16 The lung is susceptible to systemic inflammatory responses. 17 A large number of oxygen free radicals can activate inflammatory factors, and TNF-α and IL-1β are the main cytokines that initiate the inflammatory response. 18 The above experimental results showed that limb I/R activated the Notch2/Hes-1 signaling pathway. After Notch2 activation, Hes-1, which is located downstream, was up-regulated. Hes-1 then stimulated the release of inflammatory cytokines, which eventually led to lung injury. However, curcumin post-treatment significantly reduced alveolar injury, such as infiltration of inflammatory cells, congestion of the alveolar septum, and alveolar exudation, and localized atelectasis disappeared. Western blot and RT-PCR were used to detect Notch2 protein expression and Hes-1 mRNA expression, respectively, in lung tissue. The experimental results also showed that TNF-α, IL-1β, and MDA levels and the AI, W/D, and SSLI were much lower in the I/R+Cur group than in the I/R group. These results suggest that curcumin post-treatment might play a protective role in the induced lung injury in rats by down-regulating the Notch2/Hes-1 signaling pathway. Based on the above experimental results, limb I/R injury activates the Notch2/Hes-1 signaling pathway and leads to distant lung injury, but curcumin post-conditioning can significantly down-regulate this pathway.
We also used DAPT, a specific inhibitor of Notch2/Hes-1 signaling, which is the gold standard for inhibition, to further evaluate the extent of these protective effects.
Conclusion
Curcumin post-treatment can reduce the lung injury that is caused by hind-limb I/R injury in rats. Its mechanism may be related to inhibition of the Notch2/Hes-1 signaling pathway and to reductions in the inflammatory factors TNF-α and IL-1β, in MDA levels, and in the AI. The potency of curcumin post-treatment is almost equivalent to that of DAPT (0.5 µM).
Metagenomic Characterization of Multiple Genetically Modified Bacillus Contaminations in Commercial Microbial Fermentation Products
Genetically modified microorganisms (GMM) are frequently employed for manufacturing microbial fermentation products such as food enzymes or vitamins. Although the fermentation product is required to be pure, GMM contaminations have repeatedly been reported in numerous commercial microbial fermentation product types, leading to several rapid alerts at the European level. The aim of this study was to investigate the added value of shotgun metagenomic high-throughput sequencing to confirm and extend the results of classical analysis methods for the genomic characterization of unauthorized GMM. By combining short- and long-read metagenomic sequencing, two transgenic constructs were characterized, with insertions of alpha-amylase genes originating from B. amyloliquefaciens and B. licheniformis, respectively, and a transgenic construct with a protease gene insertion originating from B. velezensis, which were all present in all four investigated samples. Additionally, the samples were contaminated with up to three unculturable Bacillus strains, carrying genetic modifications that may hamper their ability to sporulate. Moreover, several samples contained viable Bacillus strains. Altogether these contaminations constitute a considerable load of antimicrobial resistance genes, which may represent a potential public health risk. In conclusion, our study showcases the added value of metagenomics to investigate the quality and safety of complex commercial microbial fermentation products.
Introduction
Genetically modified microorganisms (GMM) are frequently employed for manufacturing food and feed microbial fermentation products, such as vitamins, additives, flavors, supplements and enzymes, because of the increase in microbial enzyme production efficiency and/or yield [1]. However, their presence is unauthorized in the final products commercialized in the European Union (EU) food and feed chain (Regulation (EC) 1830/2003). Contaminations with unauthorized GMM may raise serious public health concerns, especially since GMM often carry antimicrobial resistance (AMR) genes, and the ingestion of such contaminated products carries a risk of AMR horizontal gene transfer to pathogens and other gut microbiota.
However, development and implementation of detection methods for unauthorized GMM is problematic, since the dossiers with details concerning their properties and design are confidential, and not available to enforcement laboratories. In previous studies, PCR-based methods, including quantitative PCR (qPCR), were developed to screen samples for the presence of GMM contaminations, based on markers known to be often used in the construction of GMM, such as certain antimicrobial resistance (AMR) genes [2][3][4][5][6] and the shuttle vector pUB110 [7]. Using these methods, up until now three different transgenic constructs with insertions of protease (GMM protease1 and protease2) and alpha-amylase (GMM alpha-amylase1) encoding genes were found in food enzyme (FE) products from different brands, leading to 15 RASFF notifications (https://ec.europa.eu/food/safety/rasff-food-and-feed-safety-alerts/rasff-portal_en (accessed on 12 September 2022)). From some of the FE preparations previously collected on the EU market, Bacillus velezensis isolates corresponding to the GMM protease1 could be obtained through microbial isolation experiments, which were subsequently further characterized by whole-genome sequencing (WGS) [8,9]. Apart from this GMM protease1, examples of other unauthorized GMM for which whole genomic characterization was performed remain very limited. To our knowledge, the only other reports of interest within this scope focused on the isolation and characterization of a vitamin B2-producing GM Bacillus subtilis strain (RASFF 2014.1249) in feed additives [10,11].
In both cases, i.e., the protease1-producing B. velezensis [9] and the vitamin B2-producing B. subtilis [10], the isolates were initially studied by short-read WGS, resulting in raw reads of 50-600 bp in length. Since one of the main limitations of short reads is that they cannot resolve repetitive regions in the genome, this approach did not allow to completely characterize the nature and location of the genetic modifications. In particular, it could not be unambiguously established whether the transgenic constructs were integrated into the host chromosome, or whether they were present as free plasmids. During follow-up studies [8,11], Illumina short-read and Oxford Nanopore Technologies (ONT) long-read WGS were combined using a hybrid assembly strategy, allowing for complete characterization of both GMM. Hybrid assembly methods leverage the strengths of both sequencing technologies by combining the highly accurate short reads with the long reads that are able to bridge repetitive regions, often resulting in a more complete, reliable, and accurate assembly than can be obtained by employing either one of the sequencing technologies alone. In particular, D'aes et al. [8] demonstrated that the GMM protease1 construct in the B. velezensis strain is harbored on a high-copy episomal plasmid derived from shuttle vector pUB110 that carries two AMR genes and an insert with a protease encoding gene originating from the B. velezensis host strain. The AMR genes, ant(4′)-Ia and bleO, conferring kanamycin and bleomycin resistance, respectively, were a full-length match to known AMR reference sequences, indicating their completeness and therefore potential functionality. Since the inherent risk of the spreading of AMR genes increases when they are carried on mobile genetic elements such as plasmids, this knowledge is important for the assessment of the potential public health risk associated with a GMM contamination.
These examples showcase the added value of a hybrid assembly approach for isolated GMM strains. However, no isolate carrying either the GM amylase1 or the GM protease2 constructs could be obtained from the FE products, highlighting one of the main bottlenecks of the aforementioned strategies for GMM characterization, namely the required isolation step preceding WGS. Because of the confidentiality of the dossiers describing GMM used to manufacture microbial fermentation products, no prior knowledge is available to enforcement laboratories concerning the required growth conditions to culture the GMM of interest. Even if this information were available, other factors can hamper successful isolation, e.g., microbial competition for growth if several species are present. Alternatively, the GMM may have been genetically altered to render it auxotrophic or impair its ability to persist as viable spores. In some cases, DNA walking allows to investigate transgenic constructs of GMM if no isolates are available, but a minimum of prior information about the DNA walk anchor area is still required, while the size range of the characterized unknown regions close to the DNA walk anchor area is generally limited to a few hundred base pairs [5][6][7]. Shotgun metagenomics enables direct sequencing and analysis of all DNA present in a sample, bypassing the need for isolation and cultivation. Based on a previously characterized vitamin B2-producing GM B. subtilis strain, Buytaers et al. [12] delivered a proof-of-concept for the potential of metagenomics using both short- and long-read sequencing for the detection and identification of GMM without performing a prior isolation step. This study also highlighted that this promising approach requires optimization of suitable methods for DNA extraction from a complex matrix, as well as advanced bioinformatics methods for the analysis of the metagenomic data.
The aim of the current study was to investigate the added value of shotgun metagenomic sequencing, using both short-read Illumina sequencing and long-read ONT sequencing, to confirm and extend the analysis results of the classical characterization methods, i.e., qPCR and microbial isolation, for complex samples, e.g., those contaminated with more than one GMM. Our case study consisted of the complete genomic characterization of four commercial FE products from different brands: three alpha-amylase samples and one protease sample. All four samples were contaminated with both GMM protease1, which was isolated and characterized previously [8], as well as with the unculturable GMM alpha-amylase1. Using hybrid assembly, the GMM alpha-amylase1 construct could be completely characterized. Moreover, a previously undetected novel GMM and transgenic construct was identified in the samples, carrying another alpha-amylase encoding gene, which was designated GMM alpha-amylase2. Additionally, three different unculturable Bacillus strains were discovered that all carried signs of genetic modifications affecting their sporulation ability, supporting that they are GMM and not incidental natural contaminations. The substantial novel findings of this study highlight the potential of metagenomics for the detection and genomic characterization of both known and novel transgenic constructs and their hosts.
DNA Extraction from FE Matrix
Four FE products from different brands were selected from previous studies [2,4,[6][7][8]13], based on their level of contamination with GMM alpha-amylase1 observed with qPCR (Table 1). Genomic DNA was extracted using the Quick-DNA™ HMW MagBead Kit (ZymoResearch) according to the manufacturer's instructions. Per extract, 200 mg of the FE product was used. Following a centrifugation of 1 min at 5000× g, the supernatant was transferred to a new microcentrifuge tube (mix A) while the pellet was suspended in 100 µL of PBS (Gibco). The latter was centrifuged for 1 min at 5000× g and the supernatant was combined with mix A. The pellet was suspended in 1 mL of PBS. After a centrifugation of 1 min at 5000× g, the supernatant was discarded and the pellet was suspended in 100 µL of TE buffer 1X (IDTE) and 20 µL of MetaPolyzyme (5 mg/mL; Sigma) for an incubation of 60 min at 37 °C. The digested sample was then added to mix A. After adding 20 µL of 10% SDS (Fisher) and 10 µL of Proteinase K (20 mg/mL), the sample was incubated at 55 °C for 30 min. The sample was then centrifuged for 1 min at 5000× g. The supernatant was mixed for 20 min with 800 µL of the Quick-DNA™ MagBinding Buffer and 33 µL of the MagBinding Beads. Following a magnetic bead separation, the supernatant was discarded. The sample was gently mixed for 5 min with 500 µL of the Quick-DNA™ MagBinding Buffer. After a magnetic bead separation, the supernatant was discarded and the sample was mixed with 500 µL of the DNA Pre-Wash Buffer. A magnetic bead separation was applied, the supernatant was discarded and the sample was washed by adding 900 µL of the g-DNA Wash Buffer. Following a magnetic bead separation, the supernatant was discarded and the sample was then air dried for 20 min. Finally, the sample was mixed with 50 µL of the DNA Elution Buffer for 10 min at 55 °C and the eluted DNA was then obtained after a magnetic bead separation step.
Extracted DNA was visualized by capillary electrophoresis using the TapeStation 4200 device with the associated genomic DNA ScreenTape and reagents (Agilent). Each DNA concentration was measured by spectrophotometry using the NanoDrop® 2000 (ThermoFisher, Waltham, MA, USA) and each DNA purity was evaluated using the A260/A280 and A260/A230 ratios.
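Interpreting the A260/A280 and A260/A230 ratios is commonly done against rule-of-thumb windows; the thresholds below are general spectrophotometry conventions, not values stated in this study:

```python
def dna_purity_flags(a260_a280, a260_a230):
    """Rule-of-thumb purity check for a DNA extract.

    ~1.8-2.0 for A260/A280 suggests little protein carryover, and
    ~2.0-2.2 for A260/A230 suggests little salt/organic contamination.
    Both windows are common conventions, not study-specific cutoffs.
    """
    protein_ok = 1.8 <= a260_a280 <= 2.0
    salt_ok = 2.0 <= a260_a230 <= 2.2
    return protein_ok, salt_ok
```

Extracts failing either flag would typically be re-purified before library preparation.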
Real-Time PCR Assays
DNA from FE products was analyzed using real-time PCR methods specific to a genetically modified (GM) B. velezensis producing protease (GMM protease1), a second GMM with a transgenic construct encoding a protease (GMM protease2), and a GMM producing alpha-amylase (GMM alpha-amylase1), developed and published previously [5,9].
Each real-time PCR assay was performed in a standard 25 µL reaction volume containing 1X TaqMan® PCR Mastermix (Diagenode), 400 nM of each primer (Eurogentec), 200 nM of the probe (Eurogentec) and 10 ng of DNA. The real-time PCR program consisted of a single cycle of DNA polymerase activation for 10 min at 95 °C followed by 45 amplification cycles of 15 s at 95 °C (denaturing step) and 1 min at 60 °C (annealing-extension step). All runs were performed on a CFX96 Touch Real-Time PCR Detection System (Bio-Rad). For each assay, an NTC (no template control) was included.
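The 25 µL reaction described above can be turned into a small master-mix calculator. The stock concentrations assumed here (2X mastermix, 10 µM primer and probe stocks, DNA at 10 ng/µL) are hypothetical, chosen only so the per-reaction volumes reproduce the stated final amounts:

```python
def reaction_mix(n_reactions, overage=1.1):
    """Volumes (µL) to pipette for n qPCR reactions of 25 µL each.

    Per-reaction volumes assume hypothetical stocks: a 2X mastermix,
    10 µM primers/probe, and template DNA at 10 ng/µL. `overage` adds
    a pipetting surplus (10% by default).
    """
    per_rxn = {
        "2X mastermix": 12.5,        # 1X final in 25 µL
        "fwd primer (10 µM)": 1.0,   # 400 nM final
        "rev primer (10 µM)": 1.0,   # 400 nM final
        "probe (10 µM)": 0.5,        # 200 nM final
        "template DNA (10 ng/µL)": 1.0,  # 10 ng per reaction
    }
    per_rxn["water"] = 25.0 - sum(per_rxn.values())
    return {name: round(vol * n_reactions * overage, 2)
            for name, vol in per_rxn.items()}
```

For example, 1.0 µL of a 10 µM primer stock in 25 µL gives 10 × 1/25 = 0.4 µM = 400 nM, matching the assay description.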
Bacterial Isolation, DNA Extraction and Isolate WGS
Culturing experiments were performed to characterize potential viable Bacillus contaminations in the samples, in addition to the GMM protease1 that was isolated previously [7]. One gram of the FE product was added to 250 mL of Brain-Heart Infusion broth (Sigma-Aldrich) for an overnight incubation at 30 °C. Then, 100 µL of the culture was plated on nutrient agar (Sigma-Aldrich) without antibiotics for an overnight incubation at 30 °C.
DNA extracted from isolated bacteria was analyzed by the GMM protease1 qPCR method as described in Section 2.1.2, and the BSG qPCR method specific to the Bacillus subtilis group developed previously [13]. DNA from isolates that were both positive for the BSG marker and negative for the GMM protease1 marker was extracted as described previously [8,9], to avoid selecting GMM protease1 isolates, which were already extensively characterized [8]. Short-read DNA libraries were prepared using the Nextera XT DNA library preparation kit (Illumina) according to the manufacturer's instructions. Sequencing was carried out on an Illumina MiSeq system with the V3 chemistry, obtaining 250 bp paired-end reads. The amount of genetic material to load was determined by aiming for a theoretical coverage of 60x per sample, based on the average Bacillus genome size of ~4 Mbp.
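The 60x loading target follows from the usual depth formula (coverage = total sequenced bases / genome size); a sketch using the run's 2 × 250 bp paired-end layout:

```python
import math


def reads_for_coverage(target_cov, genome_bp, read_len=250, paired=True):
    """Number of read pairs (or single reads) needed for a target depth.

    coverage = total sequenced bases / genome size, so the required
    bases are target_cov * genome_bp, spread over read_len bases per
    read (x2 for a pair).
    """
    total_bases = target_cov * genome_bp
    bases_per_unit = read_len * (2 if paired else 1)
    return math.ceil(total_bases / bases_per_unit)
```

For a 4 Mbp Bacillus genome at 60x with 2 × 250 bp reads, this works out to 480,000 read pairs per isolate.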
DNA Library Preparation and Sequencing
Short-read DNA libraries were prepared using the Nextera XT DNA library preparation kit (Illumina) according to the manufacturer's instructions. Sequencing was carried out on an Illumina MiSeq system with the V3 chemistry, obtaining 250 bp paired-end reads. The 4 FE sample libraries were analyzed on a MiSeq run together with 3 libraries belonging to another study, amounting to 7 sample libraries in total, in equimolar quantities. Additionally, an entire independent MiSeq run was devoted to sequencing the Coobra sample library to obtain a super-high depth sequencing coverage.
Long-read DNA libraries were prepared using the ligation sequencing kit (SQK-LSK109; Oxford Nanopore Technologies, Oxford, UK) according to the manufacturer's instructions. Each FE sample library was loaded on an individual R9 MinION flow cell to be sequenced for 48 h.
Raw Read Preprocessing and Analysis
Raw short reads were preprocessed with Trimmomatic and quality of raw and preprocessed data was evaluated with FastQC as described in Section 2.1.4. Raw long reads were basecalled with Guppy 5.0.7 in GPU mode, with a super accuracy model, and with q-score based filtering disabled. Filtlong 0.2.0 [31] was applied to raw fastq data to remove reads with an average quality score below 7 and read lengths below 1000 bp. Quality statistics on raw and filtered data were collected with NanoPlot 1.33.0 [32] with default settings.
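The Filtlong thresholds above (mean read quality ≥ 7, length ≥ 1000 bp) can be mimicked with a simple per-read filter. Note that this reproduces only the cutoffs, not Filtlong's actual scoring; the read-level Q score here is averaged in error-probability space, the usual convention for nanopore reads:

```python
import math


def mean_qscore(phred_quals):
    """Read-level Q score: Phred values averaged in error-probability space."""
    errors = [10 ** (-q / 10) for q in phred_quals]
    mean_error = sum(errors) / len(errors)
    return -10 * math.log10(mean_error)


def keep_read(seq, phred_quals, min_q=7.0, min_len=1000):
    """Apply the length and mean-quality cutoffs used for the long reads."""
    return len(seq) >= min_len and mean_qscore(phred_quals) >= min_q
```

Averaging in error space means a few very low-quality stretches pull the read-level Q down more than a plain arithmetic mean of Phred values would.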
Exploratory taxonomic classification and visualization of the raw short-read data was performed with Kraken2 2.1.1 [33], and Krona 2.7 [34], respectively. Genotypic AMR detection with KMA [35] on raw short and long reads was performed as described by Bogaerts et al. [28], with one modification, i.e., instead of the ResFinder database, the National Database of Antibiotic Resistant Organisms (NDARO) (retrieved on 2021-01-12) was used, complemented with an in-house database with a Bacillus-specific AMR gene (catA, CP023729.1:2725109-2725759), which was not present in NDARO.
Metagenome Assembled Genome (MAG) Assembly and Characterization
Metagenomic hybrid assembly was carried out with OPERA-MS 0.9.0 [36] with the --genome-db argument to provide a custom database, with SPAdes 3.13.0 as short-read assembler, and default settings otherwise. The custom database contained all publicly available nucleotide sequences from the NCBI nucleotide database (August 2021) belonging to the genus Bacillus that were circular and/or larger than 3 Mbp, to include a wide range of plasmids and genome assemblies. Apart from SPAdes, the OPERA-MS pipeline was run with its standard dependencies, including Samtools. The clusters produced by OPERA-MS correspond to high-quality conservative metagenome assembled genomes (MAGs) and were used for further analysis. As an alternative approach to obtain MAGs, binning was carried out with MetaBAT2 2.15 [39] with default settings, using as input the metagenomic OPERA-MS assembly, and the short and long reads of the samples, mapped to the metagenomic OPERA-MS assembly. Short reads were mapped end-to-end with Bowtie2 2.3.4.3 with the '--sensitive' preset, while the long reads were mapped with Minimap2 2.17 with the 'map-ont' preset. Completeness and contamination rates of both the OPERA-MS and MetaBAT2 MAGs were estimated with CheckM 1.1.3 [40], with default settings, and with Prodigal 2.6.3 and pplacer 1.1.alpha19 as dependencies. For metagenomic long-read-only assembly, Canu 2.1.1 [41] was employed, with the following settings: genomeSize = 12,000,000, useGrid = false, corMinCoverage = 0, corOutCoverage = 999, correctedErrorRate = 0.105, corMaxEvidenceCoverageLocal = 10, corMaxEvidenceCoverageGlobal = 10, oeaMemory = 32, redMemory = 32, batMemory = 200, maxThreads = 50, and stopOnLowCoverage = 5. The Canu assemblies were afterwards binned with MetaBAT2, as described above.
Taxonomic classification and annotation of the MAGs was performed with GTDB-Tk as described in Section 2.1.4. Additional ANI values were calculated with FastANI 1.33.
Whole Genome Alignment-Based Comparisons
Multiple genome alignments were made for the annotated B. licheniformis and B. amyloliquefaciens MAGs and the B. licheniformis isolates (see Section 3) with progressiveMauve 20150213 [42] with default settings. The included assemblies were the MAGs (Table 2), the isolate assemblies in the case of B. licheniformis, and a number of assemblies from reference strains from the NCBI RefSeq database, selected based on their similarity to the MAGs according to the output of OPERA-MS and on web-based blastn analysis of selected contigs of the MAGs. The B. licheniformis alignment included the following reference strains: ATCC9789 (Accession CP023729), SCDB34 (Accession CP014793), MBGJa67 (Accession CP026522), and YNP1-TSU (Accession CM007615). For the B. amyloliquefaciens alignment, the MAGs were complemented with reference strains DSM7 (Accession FN597644, B. amyloliquefaciens type strain), HK1 (Accession CP018902), 205 (Accession NZ_CP054415), CC178 (Accession NC_022653), and Y2 (Accession NC_017912).
Estimation of Depth and Breadth of Coverage of Bacillus spp. Chromosomes and Extrachromosomal Elements in the Samples
A pipeline for the calculation of the read depth and breadth of coverage for the Bacillus species chromosomes and associated extrachromosomal elements detected in the samples was designed to obtain an estimate of the reads that map uniquely, thereby excluding reads multi-mapping to similar regions in the transgenic constructs or Bacillus chromosomes. The reference consisted of B. licheniformis ATCC9789, B. amyloliquefaciens DSM7, B. velezensis 10075, the transgenic constructs of GMM alpha-amylase1 (this study), GMM protease1 (Accession OU015425.1), and GMM alpha-amylase2 (this study), and the sequences of plasmid pFL7 (Accession AJ577855) and the putative extrachromosomal linear prophage of the GMM protease1 (Accession OU015426). Short reads of the metagenomic samples were trimmed and filtered with Trimmomatic as described previously, and mapped end-to-end with Bowtie2 2.3.4.3 with the '--sensitive' preset. Raw long reads were mapped with Minimap2 with the 'map-ont' preset. The alignments were filtered with Samtools 1.9 to remove alignments with MAPQ values below 2 or below 60 for the short and long reads, respectively, followed by splitting the alignment file according to the reference with Bamtools 2.5.15. Depth of coverage was calculated with Samtools depth with default settings for each resulting alignment file, after which the mean depth and the breadth of coverage were calculated for each reference with an in-house script. The mean depth of coverage only considered sites with a non-zero depth, i.e., all sites of the reference that were not covered by any uniquely mapping reads were excluded from the calculation. To calculate the breadth of coverage for short reads, only sites with a depth of coverage >2 were taken into account, to avoid counting sites with only one or two potentially spuriously mapped reads.
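The in-house depth/breadth script is not published; a minimal sketch of the calculation described above, consuming `samtools depth` output, could look as follows (function and argument names are ours):

```python
def coverage_stats(depth_lines, ref_len, min_breadth_depth=3):
    # depth_lines: rows of "ref<TAB>pos<TAB>depth" as produced by
    # `samtools depth`, which with default settings only reports covered sites.
    depths = [int(line.rstrip().split("\t")[2]) for line in depth_lines]
    covered = [d for d in depths if d > 0]
    # Mean depth over non-zero sites only, as described in the text.
    mean_depth = sum(covered) / len(covered) if covered else 0.0
    # Breadth: fraction of the reference covered at >= min_breadth_depth;
    # use 3 (i.e., depth > 2) for short reads and 1 for long reads.
    breadth = sum(1 for d in depths if d >= min_breadth_depth) / ref_len
    return mean_depth, breadth
```

For example, a 10 bp reference with four sites at depth 5 and one site at depth 1 yields a mean depth of 4.2 and, for short reads, a breadth of 0.4.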
For long reads, this cutoff was set to >0 because the reads are longer and were already filtered very strictly on their MAPQ scores, so all remaining reads were assumed to map correctly. For manual inspection, the long-read alignments were sorted and indexed with Samtools, after which they were visualized with Integrative Genomics Viewer 2.4.10 [43]. The alignments were checked manually for the presence of macroscopic deletions. For each sample, the percentage of long reads supporting a certain deletion was calculated by subtracting the estimated coverage at the site of the deletion from the average coverage of the 1000 bp regions on either side of the deletion, followed by dividing this number by the latter coverage. To confirm some of the metagenomic results, PCR assays targeting the areas of interest, followed by Sanger sequencing, were performed for the samples Coobra and Pureferm. Primers were designed using the software Primer3 [44], resulting in the SigF-F (ATGCAGCCGATTTGAAAGAG) and SigF-R (AAAACTCAGGGCAGGGAAAC) primers for the sigF deletion, and in the yqfD-F (CTTCTGCTTTTTCGCCATCTT) and yqfD-R (CCTTTCCTCGTGCAGAAGTC) primers for the yqfD deletion (Figures S8 and S9). For the chromosomal insertion of the GMM alpha-amylase2 transgenic construct in B. licheniformis, several regions (A-D) were targeted (Figure S10), using (i) the A-F (GCGGGACTATGGATGTTTGT) and A-R (GAGACTGTTGCCTGGACCTC) primers for region A, (ii) the B-F (GGCAGAATACATCCTGCA) and B-R (CAAAGTGTCATCAGCCCTCA) primers for region B, (iii) the C-F (CTGCGGACGTTGCATAAATA) and C-R (ATGCAGTGTGTGACGGCTAT) primers for region C, and (iv) the D-F (GGCAGAATACATCCTGCAG) and D-R (TTGATTCCATCC-CCCTGTAA) primers for region D.
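The deletion-support estimate described above (the coverage drop at the deletion site relative to the mean coverage of the 1000 bp flanking regions) reduces to a short function; the names are ours:

```python
def deletion_support_pct(cov_at_deletion, cov_left_flank, cov_right_flank):
    # Percentage of long reads supporting a deletion: subtract the coverage at
    # the deletion site from the mean coverage of the 1000 bp flanking regions,
    # then divide by that flank coverage, as described in the text.
    flank = (cov_left_flank + cov_right_flank) / 2
    return 100.0 * (flank - cov_at_deletion) / flank
```

For instance, a drop from an average flank coverage of 40x to 10x at the deletion site corresponds to 75% of the long reads supporting the deletion.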
For each PCR assay, a standard 25 µL reaction volume was applied containing 1X Green DreamTaq PCR Master Mix (ThermoFisher Scientific), 400 nM of each primer (Eurogentec) and 10 ng of DNA. The PCR program consisted of a single cycle of 1 min at 95 °C (initial denaturation), followed by 35 amplification cycles of 30 s at 95 °C (denaturation), 30 s at 55 °C (annealing) and 1 min at 72 °C (extension), and finishing with a single cycle of 5 min at 72 °C (final extension). The run was performed on a Swift MaxPro Thermal Cycler (Esco). The PCR products were visualized by electrophoresis on a 1% agarose gel (Invitrogen, CA, USA) (100 V, 400 mA, 50 min). The sequencing of the PCR products, purified from agarose gel using the QIAEX II Gel Extraction Kit (QIAGEN), was performed on a Genetic Sequencer 3500 using the BigDye Terminator Kit v3.1 (Applied Biosystems) according to the manufacturer's instructions. The generated sequences were analyzed using the Clustal Omega software [45] through the web interface of EBI with default parameters (Figures S6, S7 and S10).
Assembly of Mock Metagenomic Datasets with B. velezensis and B. amyloliquefaciens
To investigate a putative metagenomic hybrid assembly collapse of B. velezensis and B. amyloliquefaciens into a single MAG for B. amyloliquefaciens or B. velezensis (see Section 3), mock Illumina and ONT sequencing datasets were constructed, consisting of publicly available data from a B. amyloliquefaciens strain (EA19, Accession Bioproject PRJNA744208), mixed with reads from GMM protease1 isolates [8]. The B. amyloliquefaciens Illumina reads were 150 bp in length, as opposed to the 250 bp reads of B. velezensis, but this was the best available dataset, since there were no publicly available B. amyloliquefaciens datasets for a single strain that comprised both ONT reads as well as Illumina reads of 250 bp.
The first dataset was composed of B. amyloliquefaciens and B. velezensis Pilsner1-2 (Accession Biosample SAMEA8478143) reads in a 10/1 ratio to mimic the proportions of the read abundance of both strains as estimated for the Coobra sample. In addition, the datasets were subsampled with seqtk 1.3, prior to mixing them, to approximate the absolute read depth of both strains in the Coobra sample. For B. amyloliquefaciens, Illumina and ONT reads were subsampled to 250× and 50×, respectively, based on a genome size of 4.0 Mbp, while for B. velezensis, Illumina and ONT reads were subsampled to 25× and 5×, respectively, based on a genome size of 4.35 Mbp.
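The subsampling targets above follow from simple coverage arithmetic (total sequenced bases ≈ target depth × genome size). A sketch of this calculation, with our own function names and the read lengths taken from the text:

```python
def reads_for_depth(target_depth, genome_size_bp, read_length_bp, paired=True):
    # Number of reads (read pairs, if paired) to subsample so that the total
    # sequenced bases approximate target_depth * genome size. For long reads,
    # the mean read length would be used in place of a fixed read length.
    bases_needed = target_depth * genome_size_bp
    bases_per_unit = read_length_bp * (2 if paired else 1)
    return round(bases_needed / bases_per_unit)
```

For example, 250x coverage over the 4.0 Mbp B. amyloliquefaciens genome with 150 bp paired-end reads corresponds to roughly 3.33 million read pairs.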
The resulting mock datasets were subjected to metagenomic hybrid assembly with the OPERA-MS pipeline, followed by downstream analysis as described in Section 2.2.3.
Characterization of Samples by Classical Methods: qPCR, Microbial Isolation, and WGS
For the four food enzyme (FE) products used in this study, Table 1 lists the results of their characterization with classical methods, including qPCR on the FE matrix, and microbial isolation from the FE matrix, followed by WGS-based analysis. qPCR assays were performed for three previously characterized transgenic constructs with insertions of protease (GMM protease1 and GMM protease2) and alpha-amylase (GMM alpha-amylase1) encoding genes. Based on these results, a cross-contamination of food enzyme products with two different GMM, namely GMM protease1 and GMM alpha-amylase1, was demonstrated.
Microbial isolation experiments were performed to characterize any viable Bacillus strains contaminating the samples (see Supplementary text S1 and Table S1 for analysis metrics and a more detailed description of the results). This yielded isolates for samples Coobra and Pureferm, while from samples Stillspirits and Browin no viable strains could be retrieved under the tested conditions. For the Pureferm sample, all 3 isolates obtained in this study corresponded to the GM B. velezensis protease1 host strain. However, no sequence related to the GMM protease1 construct (pUB110 shuttle vector and associated AMR genes) was detected in the assemblies, which could likely be explained by the loss of the plasmid carrying the GMM protease1 construct due to the absence of antibiotic selection pressure during the microbial isolation experiment. For the Coobra samples, all 10 isolates obtained in this study were identified as clones of a single Bacillus licheniformis strain. No elements associated with the presence of a transgenic construct were identified in the assemblies of these isolates, indicating that it is either not a GMM or alternatively also might have lost the construct due to the absence of a suitable antibiotic selection pressure during the isolation.
The Metagenomic Approach Confirms the Presence of All GMM Contaminations Observed by qPCR
Metagenomic sequencing was carried out to obtain both short- and long-read data, for which key metrics are listed in Table S2, while Figure S1 shows the taxonomic classification results for the raw short reads. Table 2 shows the main metrics of the hybrid metagenomic assemblies and derived MAGs for the four samples. An overview of the extrachromosomal elements, e.g., plasmids, detected in the hybrid metagenomic assemblies is provided in Table S3.
The presence of contaminations related to known GMM was investigated and found to be in line with the qPCR analysis (Section 3.1). In the metagenomic assemblies of the three alpha-amylase FE products, i.e., Coobra, Stillspirits, and Browin, contigs covering the complete GMM alpha-amylase1 construct were detected (Table S3). Additionally, in the protease FE sample Pureferm, a contig partially covering the GMM alpha-amylase1 construct was present, supporting the qPCR result and confirming that Pureferm is cross-contaminated with GMM alpha-amylase1. Conversely, all alpha-amylase sample assemblies displayed contigs with at least a partial GMM protease1 construct (Table S3), confirming the qPCR result and the cross-contamination of these samples with the protease-producing GMM. The GMM alpha-amylase1 construct has previously been partially characterized using DNA walking [7], but as no isolate could be obtained for this GMM, complete sequencing and characterization of the construct, and determination of its location (chromosomal or plasmidic) had remained elusive.
Metagenomic Analysis
With the metagenomic approach, genomic material covering the entire construct and its genomic context could be obtained through metagenomic hybrid assembly. The metagenomic assemblies presented contigs representing at least a partial (in the case of the Pureferm sample) or the complete GMM alpha-amylase1 construct (samples Coobra, Stillspirits, and Browin), allowing for a complete characterization (Figure 1). The complete construct was 6814 bp in length, and derived from shuttle vector pUB110 (Accession M19465, 4548 bp), with a recombinant insert of 2265 bp in length. This insert carried amyA, encoding alpha-amylase, and was a nearly 100% identical match to amyA of B. amyloliquefaciens DSM7 (Accession FN597644). The GMM alpha-amylase1 construct carried two AMR genes: ant(4′)-Ia, encoding an aminoglycoside O-nucleotidyltransferase conferring kanamycin and neomycin resistance, and bleO, conferring bleomycin resistance. Both AMR genes were a full-length 100% identical match to the reference AMR genes, indicating that they were complete and potentially functional (Tables S4 and S5). The upstream junction of pUB110 and the insert displayed an MboI restriction site (GATC), while the downstream junction showed a hybrid BamHI/MboI restriction site (GGATCC) (Figure 1). The recombinant insert disrupted only the mob gene, leaving all elements required for normal replication intact [46]. Although this could not be unequivocally established, both the available experimental evidence and literature reports indicated that the GMM alpha-amylase1 construct is most likely harbored on a free high-copy plasmid (Supplementary text S2).
The Unauthorized GMM Contaminations in the FE Samples Are Associated with a Considerable Load of AMR Genes
AMR gene detection analysis based on the complete metagenomic short-read and long-read datasets, which both show the same trends, as well as on the metagenomic assemblies and MAGs, highlighted that the microbial contamination of the FE samples is associated with a significant presence of AMR genes, both on plasmids as well as of chromosomal origin (Tables S4 and S5). These include the AMR genes associated with the transgenic constructs (Section 3.2.2 and Section 3.2.3.3), but also a number of additional AMR genes, associated with the Bacillus host chromosomes and likely of natural origin (Supplementary text S3).
Metagenomic Analysis Reveals the Presence of Novel Unculturable Genetically Modified Bacillus strains and of a Novel Transgenic Construct
In addition to the confirmation of the qPCR results targeting known GM constructs, and the complete characterization of the GMM alpha-amylase1 construct reported in the previous sections, the metagenomic hybrid assembly approach revealed that all samples were contaminated with multiple different Bacillus strains, several of which were not previously detected by microbial isolation experiments (Table 2). Moreover, the metagenomic analysis facilitated the discovery and complete characterization of a previously unknown transgenic construct.
Unlike the B. velezensis GMM protease1 host strain, the B. licheniformis and B. amyloliquefaciens strains are unculturable strains that could not be detected with classical (culturing based) analysis methods. In these cases, the culturing conditions may not have been suitable to obtain isolates, or the contaminations may have been solely represented by dead vegetative cells, or even only by free DNA that was released from dead cells. Irrespective of whether viable cells were still present, if the organism could not be cultured, it was designated as 'unculturable' for the purpose of this study.
Two Unculturable Bacillus licheniformis Strains Are Likely Asporogenic GMM
A single metagenome assembled genome (MAG) for B. licheniformis was obtained for all four samples, and in samples Coobra, Browin, and Stillspirits, it was the dominant contamination in terms of read abundance, as indicated by the read depth reported by OPERA-MS for the different MAGs (Table 2). The B. licheniformis OPERA-MS MAGs were 4.05-4.16 Mbp in length, and deemed of high quality, being at least 96% complete. Whole-genome comparison of the B. licheniformis MAGs with selected B. licheniformis reference genomes (see Section 2) indicated that the unculturable B. licheniformis is closely related to B. licheniformis ATCC9789 (Accession CP023729). B. licheniformis ATCC9789 is a non-auxotrophic, wild-type strain, which is available for purchase from a number of culture collections. The B. licheniformis MAGs and the genome of strain ATCC9789 share a number of genomic islands that are absent from the other strains included in the whole-genome comparison (Figure S2), supporting their close relatedness. Additionally, average nucleotide identity (ANI) estimations between the B. licheniformis MAGs and strain ATCC9789 were >99.97% in all cases.
Moreover, in-depth analysis based on inspection of long-read alignments (Supplementary text S4) indicated that in samples Coobra, Stillspirits and Browin, the B. licheniformis MAG does not represent one, but two closely related strains, only distinguishable by the presence of a different set of genomic deletions (Table S6). Sample Pureferm, on the other hand, appeared to be contaminated with only one of the unculturable B. licheniformis strains. Additionally, evidence was found, supported by PCR, that the two unculturable B. licheniformis strains were genetically modified to impair their ability to sporulate (Supplementary text S4). More specifically, the B. licheniformis strain that was found in all four FE samples carried a deletion affecting the sporulation genes sigF and spoIIAB (Figures S3, S4 and S7). The other strain, detected in the alpha-amylase FE samples Coobra, Browin, and Stillspirits, but not Pureferm, harbored a deletion in the yqfD sporulation gene (Figures S5 and S6).
Finally, whole-genome comparison clearly demonstrated that the viable B. licheniformis strain that was isolated from the Coobra sample (Section 3.1 and Supplementary text S1) is distinct from the unculturable B. licheniformis strains, as illustrated in Figure S2. Furthermore, none of the deletions found in the unculturable B. licheniformis strains (Table S6) were detected in the isolate assemblies, underpinning their difference.
An Unculturable Bacillus amyloliquefaciens Strain Is Potentially an Asporogenic GMM
In the samples Coobra and Stillspirits, two incomplete, distinct OPERA-MS MAGs per sample were classified as B. amyloliquefaciens, while MetaBAT2 outputted a single B. amyloliquefaciens MAG for Coobra, Stillspirits, as well as for Browin (Table 2), albeit a highly incomplete one. For Pureferm, no B. amyloliquefaciens MAG was generated at all, although read-mapping analysis suggested that B. amyloliquefaciens is present at a low abundance (Table S7). A potential explanation for these inconsistent results is the occurrence of an assembly collapse of the highly similar genomes of the B. velezensis strain (GMM protease 1) with that of the B. amyloliquefaciens strain, as supported by assembly of mock metagenomic datasets containing both B. velezensis and B. amyloliquefaciens reads with uneven relative abundances. Overall, the analysis indicated that only one B. amyloliquefaciens strain was present in the samples, despite the output of two separate MAGs by OPERA-MS (Supplementary text S6).
The MetaBAT2 MAGs of Coobra and Stillspirits were included in a whole-genome comparison with a selection of B. amyloliquefaciens reference strains (see Section 2). This revealed the presence of a 6 bp insertion in sigK, also known as spoIIIC, encoding a sigma factor responsible for the expression of sporulation specific genes, in the B. amyloliquefaciens MAGs of Coobra and Stillspirits, compared to the reference strains. The insertion is not present in the B. velezensis GMM isolate genome, confirming that it is not an assembly artefact resulting from the presence of the two similar strains (see Supplementary text S6). The insertion might therefore represent a genuine and unique genetic modification to impair the sporulation ability of the strain, similar to the unculturable B. licheniformis strains described in Section 3.2.3.1. Analysis of the predicted protein sequence of the gene showed that it constitutes an in-frame mutation, resulting in the insertion of 'NA' in the primary sequence of the protein. The possibility that this mutation occurred naturally cannot be excluded, although further investigation indicated it was never present in any of the publicly available B. amyloliquefaciens genomes in NCBI. Apart from the sigK insertion, no other conspicuous putative modifications were found that could indicate this strain potentially being genetically modified.
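The effect of an in-frame 6 bp insertion can be illustrated with a toy translation. The codon table subset and the flanking sequence below are hypothetical (the actual sigK hexamer is not reproduced here), but any in-frame 6 bp insert encoding Asn-Ala leaves the downstream reading frame intact while adding 'NA' to the protein:

```python
# Minimal codon table subset, sufficient for this toy example only.
CODONS = {"ATG": "M", "AAA": "K", "AAT": "N", "GCC": "A"}

def translate(dna):
    return "".join(CODONS[dna[i:i + 3]] for i in range(0, len(dna), 3))

wild_type = "ATGAAA"   # hypothetical context: Met-Lys
insert = "AATGCC"      # a 6 bp in-frame insert encoding Asn-Ala ("NA")
mutant = wild_type[:3] + insert + wild_type[3:]
```

Because the insert length is a multiple of three, the codons downstream of the insertion are unchanged, which is why the mutation is described as in-frame.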
A Novel GMM Alpha-Amylase2 Construct Is Integrated into the Genome of the Unculturable B. licheniformis
Our investigation (Supplementary text S5, Figures S8-S10, Table S8) revealed the presence of an additional transgenic construct in all four samples, which was not previously detected using the classical qPCR-or isolation-based methods. This construct (Figure 2), designated GMM alpha-amylase2, carried the catA AMR gene, flanked by an amylase encoding gene (amyS) originating from B. licheniformis, and not from B. amyloliquefaciens as is the case for the GMM alpha-amylase1 construct. The B. licheniformis amyS gene shares only 74% nucleotide sequence identity with its alpha-amylase encoding counterpart amyA from B. amyloliquefaciens.
A single copy of the construct is 3606 bp in length. catA is an AMR gene, encoding a type A chloramphenicol O-acetyltransferase that has recently been described in the literature as being common in B. (para)licheniformis [48], and is phylogenetically distinct from previously described catA genes of other bacterial species. The catA gene in the novel construct was a full-length 100% identical match to the reference from B. licheniformis ATCC9789 (Tables S4 and S5), indicating that it is complete and potentially functional.
The results proved (Supplementary text S5) that the GMM alpha-amylase2 construct is integrated into the genome of at least one and potentially both unculturable B. licheniformis strains (Figure 2). The available evidence (Supplementary text S5) indicates that the copy number of the construct is at least two, and probably more.
High-Depth Metagenomic Sequencing and Hybrid Assembly Highlights the Presence of GMM Protease1 Host Strain in the Coobra Sample
Despite the positive qPCR signal for GMM protease1 in the alpha-amylase FE samples, a B. velezensis MAG, representing the GMM protease1 host strain, was not detected in the assemblies of these samples. To assess the added value of very high depth sequencing, an additional independent MiSeq run was carried out, dedicating its full capacity to the Coobra sample to obtain super-high (short-read) coverage. The data was analyzed with the same approach as for the smaller datasets. For the hybrid assembly, the data was combined with the same long-read dataset for Coobra as described above. In addition to the unculturable B. licheniformis and B. amyloliquefaciens MAGs that were also assembled with the lower-depth data, this assembly (Table 3) additionally showed two MAGs classified as B. velezensis, i.e., the host species of the GMM protease1 construct. However, even at this high depth, the B. velezensis MAGs were of low quality. This may be explained by assembly collapse of the closely related B. amyloliquefaciens and B. velezensis strains in the samples (Supplementary text S6, Table S9). Furthermore, the high-depth Coobra assembly contained a contig displaying the completely assembled extrachromosomal prophage of the GMM protease1, which in a previous study was shown to be a characteristic element of the genome of this GMM [8], while the lower-depth alpha-amylase datasets of Coobra, Stillspirits and Browin only allowed assembly of small fragments of this prophage (Table S3). These findings provided strong support for the presence of the GMM protease1 host strain in the sample. Table 3. Metrics of the metagenomic assembly generated with OPERA-MS based on the super-high depth short-read dataset and the long-read dataset described previously, and derived metagenome assembled genomes (MAGs) in the Coobra sample.
1 The MAGs directly outputted by OPERA-MS by a reference-based clustering (i.e., supervised) approach are shown, together with the average short-read and long-read coverage that was obtained for each MAG. MAGs obtained by an alternative unsupervised binning tool, MetaBAT2, are presented as well. Taxonomic classification was done with GTDB-Tk. 2 GC%, completeness, and taxonomic classification (done with GTDB-Tk) are only relevant for the MAGs and are therefore not indicated for the metagenomes. 3 GTDB-Tk did not assign a taxonomic label to this MAG (because it was too incomplete); blastn was used to get an indication of the taxonomic classification.

Overall, the analysis of the Coobra sample, with a combination of classical analysis methods and in-depth metagenomic analysis, provided a thorough insight into the GMM contaminations in the sample, clearly highlighting the added value and potential of this approach for the investigation of unauthorized GMM contaminations (Figure 3).

Figure 3. Overview of the contribution of different analysis approaches to the elucidation of the genomic composition of the FE sample Coobra. Metagenomic analysis confirmed the presence of the GMM protease1 and GMM alpha-amylase1 constructs and allowed for complete characterization of the latter. Additionally, metagenomics revealed the presence of three unculturable Bacillus strains with genetic modifications affecting their sporulation ability, and one novel transgenic construct/GMM (GMM alpha-amylase2). The viable GMM protease1 (B. velezensis) strain was characterized previously [8]. The transgenic construct of GMM alpha-amylase1 (pUB110-amyA), for which the association with its host strain could not be established with full certainty (see Section 4), is indicated with its most likely host.
Discussion

In this case study, the characterization of GMM contaminations in FE products by classical methods, i.e., qPCR and microbial isolation followed by WGS, was compared with and complemented by an approach using shotgun metagenomic sequencing with both short- and long-read technologies. Table 4 shows an overview of the most important findings for the Coobra sample, which was studied the most extensively.
The qPCR assays demonstrated the presence of a cross-contamination of the four investigated samples with two previously described known GMM: GMM protease1 and GMM alpha-amylase1. For GMM protease1, viable isolates could be obtained from some of the samples, which were characterized in a previous study using WGS [8]. Microbial isolation experiments were also pivotal to the detection of a viable B. licheniformis strain in the Coobra sample, which constitutes a significant unauthorized contamination, even if no signs of genetic modification were observed.
With the metagenomic approach, the presence of GMM contaminations related to the known GMM protease1 and GMM alpha-amylase1 was confirmed, in agreement with the qPCR analysis. Without any prior microbial strain isolation, the transgenic GMM alpha-amylase1 construct could be completely characterized. The genetic make-up of this construct is consistent with that of pKTH10, a recombinant plasmid generated by cloning a MboI-restriction fragment of approximately 2.3 kb into BamHI-restricted pUB110 [49]. Transformation of a B. subtilis host with this plasmid led to a 2500-fold increase of the alpha-amylase activity, according to Palva [49]. To our knowledge, the sequence of pKTH10 was never published, but the close resemblance nevertheless indicates that the design of GMM alpha-amylase1 could potentially be inspired by that of pKTH10.

'x' indicates that the approach was able to detect the strain/construct. 1 The viable GMM protease1 (B. velezensis) strain and the transgenic construct it carries were characterized previously [8]. 2 A potential explanation for the absence of the viable B. licheniformis strain from the metagenomic assembly is given in Supplementary text S6.
While the classical approach with qPCR can detect specific AMR genes for which an assay is available, metagenomics allowed a complete characterization of the AMR genes in the samples. With this open approach, not only were the AMR genes associated with the known GMM constructs retrieved, but also the Bacillus-specific catA gene associated with the novel GMM alpha-amylase2 construct (see Section 3.2.3.3), as well as several AMR genes associated with the unculturable Bacillus strains that contaminated the samples. Notably, the catA gene was not detected by our previously developed qPCR assay targeting a cat gene commonly present in vectors, which was found in an unauthorized GMM on at least one occasion [3]. The cat gene targeted in this qPCR assay originates from S. aureus and shows only 42.8% sequence similarity at the nucleotide level with the cat gene indigenous to Bacillus, explaining why the latter did not produce a positive signal with this assay.
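As an illustration of how a nucleotide percent-identity figure such as the 42.8% reported above might be computed, the sketch below implements a simple Needleman-Wunsch global alignment and reports the fraction of identical aligned columns. The scoring values and sequences are assumptions for demonstration, not the scoring scheme used in the study.

```python
def global_identity(a: str, b: str, match=1, mismatch=-1, gap=-2) -> float:
    """Percent identity over a Needleman-Wunsch global alignment of a and b."""
    n, m = len(a), len(b)
    # Fill the dynamic-programming score matrix.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Trace back to count identical columns over the alignment length.
    i, j, matches, cols = n, m, 0, 0
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)):
            matches += a[i - 1] == b[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
        cols += 1
    return 100.0 * matches / cols

# Toy example: one substitution in four bases gives 75% identity.
print(global_identity("AAAA", "AAAT"))  # 75.0
```

In practice a tool such as BLAST or a dedicated aligner would be used; the point is only that "sequence similarity at the nucleotide level" is a matches-per-aligned-column ratio.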
In addition to the more complete characterization of known GMM strains and constructs, the metagenomic approach also revealed the presence of several previously undetected Bacillus strains, and allowed for the discovery and characterization of a novel transgenic construct, GMM alpha-amylase2, which was shown to be integrated into the chromosome of its host B. licheniformis. The FE products contained up to three unculturable Bacillus strains: two B. licheniformis strains and one B. amyloliquefaciens strain that were likely deliberately engineered to impair their ability to sporulate. Concerning the suspected artificial nature of the genetic modifications, the sigF-spoIIAB deletion is especially noteworthy.
At the site of the deleted region, a short foreign sequence was detected (GACTCTAGAGGATCCCC, Figure S7), which was not present in strain ATCC9789. This 17 bp sequence is an exact match to the multiple cloning site (MCS) of plasmid pWH1520. In a recent study [50], this plasmid was employed as a vector for a CRISPR/Cas9 editing system for B. licheniformis. Zhou et al. cloned a CRISPR/Cas9 construct into the MCS of pWH1520 (Accession JC210951), resulting in the MCS ending up flanking the homologous repair template (HRT) of the CRISPR/Cas9 construct. This or a similar vector might therefore have been used to construct the sigF-spoIIAB deletion by CRISPR/Cas9 editing, whereby a part of the flanking sequence of the HRT may have ended up in the genome of the B. licheniformis strain by accident, leading to the presence of a 'trace' sequence that could be detected in the resulting GMM strain. However, it should be emphasized that a 17 bp sequence is too short to unequivocally determine its origin, and whether the deletion was created with CRISPR/Cas9 or with another genetic engineering technique.
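Detecting such a short 'trace' sequence in assembled contigs amounts to an exact motif scan on both strands. The sketch below is illustrative only: the toy contig is invented, and real pipelines would scan full assemblies (and, as noted above, a 17 bp hit alone cannot prove origin).

```python
MOTIF = "GACTCTAGAGGATCCCC"  # 17 bp MCS-derived trace sequence reported in the study

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (uppercase ACGT)."""
    return seq.translate(COMPLEMENT)[::-1]

def find_motif(contig: str, motif: str = MOTIF):
    """Return (strand, position) for every exact occurrence of motif on either strand."""
    hits = []
    for strand, m in (("+", motif), ("-", revcomp(motif))):
        start = contig.find(m)
        while start != -1:
            hits.append((strand, start))
            start = contig.find(m, start + 1)
    return hits

# Toy contig containing the motif once on the forward strand, at offset 4.
contig = "TTTT" + MOTIF + "AAAA"
print(find_motif(contig))  # [('+', 4)]
```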
Knock-out of sporulation genes is an established strategy in Bacillus producer strains, because it facilitates sterilization of the fermentation equipment, while it can also increase enzyme production yield [51]. A Bacillus strain unable to produce spores is unable to survive during long-term storage under unsuitable conditions for vegetative growth. Therefore, the presence of genetic modifications rendering the strains asporogenic could explain why they could not be isolated as viable strains, despite their high read abundance in some of the samples.
With the aid of a high-depth short-read sequencing dataset for the Coobra sample, the GMM protease1 host strain could additionally be detected and partially characterized. This strain was not detected with the lower-depth datasets for the amylase samples Coobra, Stillspirits and Browin. This can be attributed, on the one hand, to its low read abundance, which may in turn be associated with its presence as spores, potentially reducing the efficiency of the DNA extraction, and, on the other hand, to the close relationship between B. amyloliquefaciens and B. velezensis, which likely caused the assemblies for both species to collapse and hide the presence of the strain present at the lowest abundance. With the continuing decrease in sequencing cost, this level of sequencing depth will become routinely feasible, allowing the full power of metagenomics to be exploited when in-depth characterization of this type of complex dataset is envisaged.
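The read-abundance reasoning above rests on two standard mapping metrics, depth and breadth of coverage (the kind of values reported in Table S7). A minimal sketch of how they are computed from mapped read intervals follows; the interval data and function are illustrative assumptions, not the study's pipeline.

```python
def coverage_stats(ref_len, reads):
    """Breadth and mean depth of coverage for a reference of length ref_len.

    reads: list of (start, end) half-open intervals of uniquely mapped reads.
    """
    depth = [0] * ref_len
    for start, end in reads:
        for pos in range(max(0, start), min(ref_len, end)):
            depth[pos] += 1
    covered = sum(1 for d in depth if d > 0)
    breadth = covered / ref_len        # fraction of reference bases covered at least once
    mean_depth = sum(depth) / ref_len  # average number of reads over each base
    return breadth, mean_depth

# Toy example: two 50 bp reads on a 100 bp reference, overlapping by 25 bp.
breadth, mean_depth = coverage_stats(100, [(0, 50), (25, 75)])
print(breadth, mean_depth)  # 0.75 1.0
```

A low-abundance strain shows up as low mean depth even when breadth is moderate, which is why it can vanish from a collapsed assembly while still being detectable in the raw reads.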
Together, these results confirm that metagenomic analysis can partly bypass the need for cumbersome and often problematic isolation experiments, while additionally allowing previously undetected constructs and strains to be detected and characterized, highlighting the potential of metagenomics and a hybrid assembly approach for the analysis of GMM-based products.
A major obstacle for the detection of GMM by enforcement laboratories is that the dossiers submitted to EFSA by the manufacturers, providing detailed information concerning producer organisms and genetic modifications for the different FE products, are confidential. Therefore, even when a GMM is detected, the confidentiality of the data in the dossier does not allow enforcement laboratories to verify that the GMM described in the dossier is effectively the one present in the product sold on the market. Moreover, for one of the samples, the information that is publicly available was shown to be incorrect, i.e., the Pureferm FE is labeled as being produced with B. subtilis (Table 1), while our analysis demonstrated that it is in fact a B. velezensis strain.
Due to this confidentiality and lack of information, it is difficult for enforcement laboratories to develop routine, targeted detection methods. Even if an open approach such as metagenomics was used, it is still difficult to draw definitive conclusions concerning the potential risks that are associated with these contaminations. The potential risk for spreading of AMR through horizontal gene transfer increases if AMR genes are located on mobile genetic elements, such as plasmids [52]. Although the GMM alpha-amylase1 construct most likely exists as a free high-copy plasmid, this could not be unequivocally established. Moreover, it was not possible to identify the host of this construct with full certainty, based on the available results. However, the amylase encoding gene in this construct originates from B. amyloliquefaciens, for which an unculturable strain was detected in the metagenomic data. The amylase encoding gene from the GMM alpha-amylase2 construct on the other hand was derived from B. licheniformis and was also shown to be associated with an unculturable B. licheniformis strain in the samples. Therefore, it could be deduced that the most likely host for GMM alpha-amylase1 is the unculturable B. amyloliquefaciens strain. To confirm this, prior isolation of the host strain would still be required, or alternatively the use of advanced analysis methods such as Hi-C, which relies on a sample pretreatment to cross-link genomic DNA regions in close proximity to one another, followed by NGS of linked DNA segments [53].
One of the most noteworthy findings from this study is that the samples were cross-contaminated with three different transgenic constructs. The cross-contaminations may have been caused by a common downstream processing line for both amylase and protease FE production, which is not sufficiently decontaminated between batches. Alternatively, the contaminations may have originated from different manufacturers and ended up together as a consequence of batch mixing.
The use of GMM in food and other industries has some undeniable advantages, and since microbial fermentation takes place in an enclosed environment, potential risks associated with the use of GMM can, at least in theory, be perfectly mitigated. However, these commercially available FE products contained a plethora of microbial contaminations, including, e.g., for the Coobra sample, a viable GMM, a natural viable contamination, and DNA from three unculturable GMM, resulting in a combined significant AMR gene load. This signals a significant problem with the implementation of suitable containment procedures at the production facilities and poses a substantial potential public health risk, as the AMR genes could potentially spread into the environment, e.g., by horizontal transfer to gut microbiota and/or to pathogens after ingestion. In turn, this emphasizes the need for more structural control procedures to ensure the quality and safety of microbial fermentation products. The availability of detailed information concerning species, strain and genetic modifications of registered GMM to control enforcement laboratories would enable the development of targeted detection methods. In particular, the implementation of a GMM reference database, analogous to, e.g., the GMO database Nexplorer [54] or JRC GMO-Amplicons [55], would allow for the development of much more efficient NGS analysis pipelines.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/life12121971/s1. Figure S1: Visualization of taxonomic classification of metagenomic data (short reads). Figure S2: Part of the whole-genome alignment of the B. licheniformis OPERA-MS MAGs, a number of the B. licheniformis isolates from Coobra, and a selection of reference strains, centered on a genomic island, indicated in blue, that is shared by B. licheniformis ATCC 9789 and the B. licheniformis MAGs, but is absent from the reference strains and the B. licheniformis isolates. Figure S3: Part of the whole-genome alignment of B. licheniformis OPERA-MS MAGs, B. licheniformis isolate (no. 9), and a selection of reference strains, centered on the sigF and spoIIAB genes. Figure S4: Alignment of raw long reads of A. the Coobra and B. the Pureferm sample to reference B. licheniformis ATCC 9789, centered on the sigF and spoIIAB genes. Figure S5: Whole-genome alignment of B. licheniformis OPERA-MS MAGs, B. licheniformis isolates (only no. 9 shown), and a selection of reference strains, centered on the yqfD gene. Figure S6 Figure S8: Alignment of raw long reads of the Coobra sample to reference B. licheniformis ATCC 9789, centered on the amyS gene, visualized with IGV. Figure S9: Alignment of raw long reads of the Coobra sample to reference B. licheniformis ATCC 9789, centered on the catA gene (cds 2,725,109-2,725,759), visualized with IGV. Figure S10: Multiple sequence alignments (ClustalO 1.2.4) of Sanger sequencing results of PCR products targeting the unnatural associations due to the insertion of the GMM alpha-amylase2 construct (catA-amyS) in the B. licheniformis host genome. Table S1: Metrics of assemblies for the Bacillus isolates from the Coobra and Pureferm FE samples, together with the SNP addresses 1 obtained with B. licheniformis ATCC 9789 and B. velezensis Pilsner1-2 as reference genomes for the B. licheniformis and B. velezensis isolates, respectively. Table S2: Key metrics for Illumina and ONT raw data. Table S3: Overview of contigs with (putative) extrachromosomal elements detected in the metagenomic hybrid assemblies. Table S4: Result of AMR gene detection on raw short-read data. Table S5: Result of AMR gene detection on raw long-read data. Table S6: Overview of deletions supported by the long reads as compared to the reference B. licheniformis ATCC 9789, with gene name and annotation of strain ATCC 9789. Table S7: Depth and breadth of coverage of short and long reads that map uniquely against reference genomes of the Bacillus species found in the metagenomic samples, as well as the two extrachromosomal elements (plasmid pFL7 of B. licheniformis and the putative prophage of B. velezensis) and the three transgenic constructs GMM protease1, pUB110-amylase, and GMM alpha-amylase2. Table S8: Metrics of metagenomic long-read assemblies generated with Canu, and derived metagenome-assembled genomes (MAGs), obtained with Metabat2. Table S9
Morbidity and Mortality From COVID‐19 Are Not Increased Among Children or Patients With Autoimmune Rheumatic Disease—Possible Immunologic Rationale: Comment on the Article by Henderson et al
We read with great interest the Viewpoint by L. H. Henderson et al (1) on the therapeutic approach with glucocorticoids (GC) to the inflammation and cytokine storm phases of SARS-CoV-2 infection. We would like to expand their analysis and discuss the data reported so far in children and autoimmune patients (rheumatoid arthritis [RA], systemic lupus erythematosus [SLE]) regarding the chance of undergoing a "severe" infection. So far, children (and autoimmune patients), who should be extremely fragile, have rarely entered the third phase of COVID-19, the cytokine release syndrome (CRS), which has led only some patients to intensive care units (ICUs).
receiving biologic DMARDs is well known, and patients have probably been informed of this risk at the time of treatment initiation (6,7).
Reduced physical activity resulting from home confinement could be another explanation for worsening symptoms. In SpA patients, exercise can reduce disease activity and, consequently, is recommended for optimal treatment (8).
In this patient population, COVID-19 occurrence was associated with SpA treatment modification. We did not find a link between NSAID or biologic treatment and COVID-19. When considering both the confirmed and the clinically suspected cases of COVID-19, we found 31 cases (13 clinically suspected and 18 self-reported as being confirmed), which is more substantial than the 8 cases in a cohort of 320 patients with chronic arthritis (4 confirmed and 4 highly suggestive) reported by Monti et al (9). However, it is impossible to compare prevalence as the population, methodology, and period are different (9). It is important to emphasize that a majority of our patients were treated with NSAIDs. Our results are interesting because they provide data from a real-life setting.
Our findings should be interpreted within the limitations of the study. The most important limitation is that our results are based on self-reported data. For patients who reported having confirmed COVID-19, we could not verify that this was in fact confirmed via a positive test result. However, this is the first study providing information on therapy compliance during home confinement and reporting the frequency of COVID-19 in SpA patients. The size of our cohort reinforces the importance of our results.
Thus, our survey results show that in SpA patients, home confinement linked to the COVID-19 pandemic is associated with worsening of the disease and reduction or suspension of medication intake, in particular NSAIDs. These findings have considerable clinical implications, given that home confinement is likely to recur in the future. Patients need to be educated about the current evidence regarding NSAID treatment and ways to stay physically active at home.

Based on reports to date from China, Europe, and the US, patients with rheumatic diseases are not necessarily at increased risk for severe outcomes. The latest report from the COVID-19 Global Rheumatology Alliance (2) shows that of 334 registered patients infected with COVID-19 (of whom 74.25% were women and 25.75% were men), only 38 were hospitalized and 19 (5.69%) died. Within this population of patients with COVID-19, there were 121 patients with RA, 33 with psoriatic arthritis, 58 with SLE, 28 with axial spondyloarthritis, 27 with vasculitis, and 19 with primary Sjögren's syndrome. In China, of 171 children with COVID-19 infection, only 12 experienced radiologic pneumonitis and only 1 died in the pediatric ICU (3). This percentage does not appear to be higher than the percentage in the general population, despite the fact that >65% of adult patients with rheumatic diseases were treated with disease-modifying antirheumatic drugs (DMARDs): 5.39% with JAK inhibitors, 36.5% with biologic DMARDs, and 30.2% with glucocorticoids. Only 25.7% were treated with hydroxychloroquine (HCQ) (2). The Global Rheumatology Alliance data on patients with rheumatic diseases showed that the number of women infected with COVID-19 was higher than the number of men. This is expected, based on data from the general population showing that women with rheumatic diseases are more often hospitalized and more often have a worse disease course.
Meanwhile, no proven cases of macrophage activation syndrome or hemophagocytic lymphohistiocytosis in children with COVID-19 infection have been described to date. The recently described Kawasaki-like illness (4) still needs to be well defined.
In addition to the potential role of sex, we would like to speculate on the immunologic basis for rheumatic disease patients not having more severe outcomes, as might have been expected at the onset of this pandemic. In the early phases of infection, the lungs of patients with COVID-19 exhibit edema, a patchy inflammatory infiltrate, and multinucleated giant cells (MGCs), with lymphopenia in the peripheral blood. Evidence from animal models demonstrates that macrophage colony-stimulating factor and granulocyte-macrophage colony-stimulating factor stimulate the differentiation of rat alveolar macrophages into MGCs with distinct phenotypes (type 1 and type 2 MGC) and that neutralization of endogenous interleukin-6 (IL-6) during alveolar macrophage differentiation into MGCs significantly inhibits the formation of type 2 MGCs (up to 50%) (5). Of interest, another type of key immune cell, type 2 innate lymphoid cells (ILC2), which are important for maintaining lung integrity, do not efficiently migrate from the bone marrow to the lungs with aging. In mice, transfer of young ILC2 to the old lung enhances resistance to infection. In addition, levels of tumor necrosis factor and IL-6 and numbers of neutrophils increase with age and may contribute to increased inflammation in the lungs. Furthermore, it is well known that IL-6 inhibits natural killer (NK) cell cytotoxicity. Therefore, IL-6 appears to be a key molecule.
Numbers of NK cells are higher in infancy and decrease progressively with aging. Lymphocyte number and function also decline with age, and CD8+ T cells decline in number and weaken with aging, a feature of immunosenescence (6). We do not yet know if vaccinations administered in early childhood play a role in stimulating the immune system. This is an area of intense clinical research. It has been suggested that SARS-CoV-2 may enter type II alveolar epithelial cells (AECs) through angiotensin-converting enzyme 2 (ACE2) present on the membrane of type II AECs and employs serine protease TMPRSS2 for priming (7). Once type II AECs are infected, they provoke an innate immune response and synthesize type I interferon (IFNα/β), type II IFN (IFNγ), IL-6, and IL-8 (8). In the majority of patients with this response, the infection clears. Once the SARS-CoV-2 single-stranded RNA is released inside the type II AECs, it is recognized by Toll-like receptor 7 (TLR-7) and TLR-8. TLR-7 ligation induces signal transduction via the adaptor protein myeloid differentiation factor 88, and the activation leads to synthesis and release of cytokines and chemokines. This may explain the inflammatory and lung symptoms (Figure 1).

Figure 1. Initial phase of viral infection. In this phase, nonspecific antiviral agents (while specific agents are awaited), antimalarials, or anti-interleukin-6 (anti-IL-6) (or other anticytokine) agents may be used to shut down the inflammatory process before it evolves into acute respiratory distress syndrome-induced lung failure. SARS-CoV-2 = severe acute respiratory syndrome coronavirus 2; ACE2 = angiotensin-converting enzyme 2; IFNα/β = interferon α/β; TLR-7 = Toll-like receptor 7; ssRNA = single-stranded RNA; AT2 = type II alveolar epithelial cells; Myd88 = myeloid differentiation factor 88; TNF = tumor necrosis factor; MCP-1 = monocyte chemotactic protein 1.
Treatment with camostat mesylate, which targets the TMPRSS2 protease and thus inhibits priming of the SARS-CoV-2 spike S1 protein and its binding to ACE2, may protect against severe outcomes (9). If ACE2 expression is shut down by COVID-19, the inflammation progresses, with release of IL-6 and other cytokines and chemokines. The inflammation may progress to acute respiratory distress syndrome (ARDS) and cytokine storm (1).
RA and SLE have a type I IFN signature (10), which might explain why children and patients with autoimmune diseases may not be affected more severely than the general population. This is why scientific societies support the idea of continuing treatments (IL-6 inhibitors for juvenile idiopathic arthritis, DMARDs or JAK inhibitors for adult RA, and mycophenolate mofetil or HCQ for SLE). While specific antivirals are awaited, these drugs may help in the hyperinflammatory phase of the infection, and, in fact, several trials are underway using anti-IL-6, other cytokines, or JAK1/2 inhibitors (ClinicalTrials.gov).
The key question is whether, or when, to prescribe glucocorticoids, since the American Thoracic Society and Infectious Diseases Society of America (11) did not strongly support the use of glucocorticoids once the hyperinflammatory phase progresses to the cytokine release syndrome and ARDS-like phase. Data from clinical trials and the real world are badly needed to support these theories.

We read with interest the article by Duarte-García et al (1), in which they reported that the estimated prevalence of antiphospholipid syndrome (APS) was 50 per 100,000 population. APS is an autoimmune disorder characterized by thrombotic events, pregnancy morbidity, or both, in the presence of antiphospholipid antibodies (aPLs) (2). While APS is often thought to be the most common thrombophilia, its global incidence and prevalence in the general population still need to be fully elucidated. Some reports describe an incidence of 5 cases per 100,000 population per year and a prevalence of 40-50 per 100,000 population (1,3-6). In several recent studies, investigators attempted to estimate the prevalence of aPLs in different cohorts, such as young patients with stroke (7); patients with pregnancy morbidity, stroke, myocardial infarction, and deep vein thrombosis (4); and patients with a first unprovoked thrombosis (8). To date, APS meets the definition of a rare disease as described by Holué (prevalence ≤5 per 10,000 population) (9).
In order to better estimate the epidemiology of APS, we performed an analysis using a population-based approach, investigating clinicoepidemiologic data on patients with APS in northwest Italy. We collected data from the Piedmont and Aosta Valley Rare Disease Registry, part of the National Registry of Rare Diseases (10). The registry includes demographic, socioeconomic, and disease data, as detailed elsewhere (11), and currently includes 740 patients with a definite diagnosis of APS. The locations of the centers reporting APS diagnoses, by relative number of diagnoses, are depicted in Figure 1. The median age at diagnosis was 45 years (interquartile range 23); 63% of patients were diagnosed at age ≤50 years, 39% at ≤40 years, and 18% at ≤30 years. Taking into account that the population of the Piedmont and Aosta Valley regions is ~4.4 million (12), the estimated prevalence of APS in the region is 1.68 per 10,000 population. The annual incidence from 2010 through 2019 was 1.1 per 100,000 population. APS is considered to be a rare disease according to the Rare Disease Registry of Piedmont and Aosta Valley. Despite the fact that the numbers are relatively small, an accurate estimation of the epidemiology of rare diseases is crucial in order to: 1) plan adequate strategies to maximize
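The reported prevalence can be checked directly from the stated inputs, assuming 740 registered patients and a catchment population of ~4.4 million:

```python
# Arithmetic check of the reported APS prevalence, using the stated inputs.
patients = 740
population = 4_400_000

prevalence_per_10k = patients / population * 10_000
print(round(prevalence_per_10k, 2))  # 1.68
```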
Plasma Extracellular Vesicle miRNAs Can Identify Lung Cancer, Current Smoking Status, and Stable COPD
Lung cancer remains the leading cause of cancer related mortality worldwide. We aimed to test whether a simple blood biomarker (extracellular vesicle miRNAs) can discriminate between cases with and without lung cancer. Methods: plasma extracellular vesicles (EVs) were isolated from four cohorts (n = 20 in each): healthy non-smokers, healthy smokers, lung cancer, and stable COPD participants. EV miRNA expression was evaluated using the miRCURY LNA miRNA Serum/Plasma assay for 179 specific targets. Significantly dysregulated miRNAs were assessed for discriminatory power using ROC curve analysis. Results: 15 miRNAs were differentially expressed between lung cancer and healthy non-smoking participants, with the greatest single miRNA being miR-205-5p (AUC 0.850), improving to AUC 0.993 in combination with miR-199a-5p. Moreover, 26 miRNAs were significantly dysregulated between lung cancer and healthy smoking participants, with the greatest single miRNA being miR-497-5p (AUC 0.873), improving to AUC 0.953 in combination with miR-22-5p; 14 miRNAs were significantly dysregulated between lung cancer and stable COPD participants, with the greatest single miRNA being miR-27a-3p (AUC 0.803), with two other miRNAs (miR-106b-3p and miR-361-5p) further improving discriminatory power (AUC 0.870). Conclusion: this case control study suggests miRNAs in EVs from plasma holds key biological information specific for lung cancer and warrants further prospective assessment.
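The AUC values quoted above measure how well a single miRNA's expression separates two groups. As a hedged sketch (with invented toy values, not the study's measurements), the area under the ROC curve can be computed directly as the probability that a randomly chosen case scores higher than a randomly chosen control:

```python
def auc(cases, controls):
    """Area under the ROC curve: P(random case > random control), ties count 0.5."""
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Toy example: perfectly separated expression values give AUC 1.0.
print(auc([5.0, 6.0, 7.0], [1.0, 2.0, 3.0]))  # 1.0
```

This pairwise formulation is equivalent to the Mann-Whitney U statistic normalized by the number of case-control pairs; combining markers (as with miR-205-5p plus miR-199a-5p) would instead score each participant with a fitted classifier before applying the same calculation.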
Introduction
Due to the lack of early diagnosis and effective treatments, lung cancer remains one of the most fatal forms of cancer; therefore, diagnosis at an earlier, more curable stage of disease has been the focus of worldwide efforts to reduce associated mortality. There is a large body of evidence describing the pathological link between chronic obstructive pulmonary disease (COPD)-the third leading cause of death worldwide [1,2]-and lung cancer-the leading cause of cancer-related mortality [3]-beyond their common etiologies. While tobacco smoking is the main risk factor for both lung cancer and COPD, only 10-20% of smokers develop COPD [4], and conversely 10-15% of individuals with lung cancer are non-smokers [5]. Additionally, lung cancer is five times more likely to occur in smokers with airflow obstruction than in those with normal lung function [6], with the annual incidence of lung cancer arising in individuals with COPD being reported as 0.8-1.2% [7]. Screening and monitoring of individuals with COPD to identify early stage lung cancer has been suggested as a potential strategy to reduce lung cancer mortality [8]. As part of this approach, the search for novel diagnostic biomarkers of serious lung diseases has been under intense investigation. microRNAs (miRNAs) are short non-coding RNAs (19-22 nucleotides) that regulate gene expression post-transcriptionally through either degradation of target mRNA or translation inhibition [9]. Due to the broad range of target genes, miRNAs are involved in regulating a number of key physiological processes including apoptosis, DNA repair, cell metabolism, and the initiation and progression of pathogenic processes, leading to further inflammation and tumourigenesis [10-12]. miRNAs have been shown to be highly stable in a variety of body fluids, particularly plasma [13-15].
Studies have shown that, in cells, miRNAs have an estimated half-life of 8 hours [15], unlike mRNAs, which have a half-life of only several minutes [16].
Nanosized lipid bilayer membrane vesicles, known as extracellular vesicles (EVs), have been identified as an attractive source of biomarkers for lung disease. This is due to their key role in intercellular communication, through bioactive cargo (e.g., DNA, RNA, miRNA, proteins) exchange with recipient cells [17]. This bioactive cargo can contain disease specific molecular information, reflecting their cellular origin. Specific miRNAs have been demonstrated to be selectively exported into EVs, while others are excluded [18], altering recipient cells biological processes and overall disease pathophysiology [19]. Given the stability of plasma miRNAs and transportability of vesicles, their accessibility through minimally invasive methods (i.e., blood tests) [17,20,21] makes them ideal for biomarker studies.
In this study, we aimed to identify potential diagnostic EV miRNA biomarkers that can discriminate between cases of lung cancer and COPD, as well as separate these disease states from healthy smokers and healthy non-smoking participants. Plasma EV miRNAs with discriminatory power to distinguish lung cancer from COPD and from smokers without lung disease could have significant translational application as a screening tool for lung cancer case finding in populations at risk.
Clinical Characteristics of the Four Case Groups
EVs were isolated from plasma, extracted for RNA, and assessed for miRNA expression from 20 healthy participants who had never smoked, 20 healthy smokers, 20 participants with a smoking history and a diagnosis of non-small cell lung cancer, and 20 participants with clinically stable COPD. Clinical characteristics are shown in Table 1. Each of the four groups contained similar numbers of males and females, although the healthy controls were on average 10 years younger than cases in the other three groups (p < 0.0001). The majority of the "healthy" smoker group were current smokers, whereas most of the cases of diagnosed lung disease (lung cancer or COPD) were former smokers (see Table 1), but there was no significant difference in cumulative pack years between the three groups. The majority of lung cancer cases were TNM stage I or II. Most of the COPD cases had GOLD-classified moderate to severe disease.

Table 1 footnotes: categorical data represented as n (population %); continuous data represented as mean (standard deviation); * p-value < 0.05; " non-normal distribution; 1 Chi-square test; 2 Kruskal-Wallis test.
Characterization of EVs from Healthy Non-Smoking, Healthy Smoking, Lung Cancer, and Stable COPD Participants' Plasma
An aliquot of EVs isolated from plasma of two healthy non-smokers, two healthy smokers and two lung cancer participants were characterized using western blotting, while one healthy non-smoker case underwent nanoparticle tracking analysis (NTA).
Established "EV-markers" Flotillin-1 and CD9 were identified by western blotting in plasma EVs from the cases, but were absent from the corresponding raw plasma samples (Figure 1). An exosome standard from human plasma used as a positive control was positive for Flotillin-1 (but not CD9) in the EV samples. Albumin, the most abundant protein in plasma [22], was present in the raw plasma samples on western blots as expected, and showed a faint band in the exosome standard, but was also identified in plasma EVs, so may not be a suitable "negative" control marker.

Figure 1. (a) Flotillin-1 and CD9 presence or absence in participant groups (healthy smokers, lung cancer, healthy non-smokers), as well as lyophilized exosome standard from plasma (EXS STD) and raw plasma (RAW). (b) Plasma EV particle concentration and size from a healthy non-smoking participant using nanoparticle tracking analysis (NTA).
Plasma-Derived EV miRNA Profiling Identified Significantly Dysregulated miRNAs Specific for Select Cohorts
Candidate plasma-derived EV miRNAs were investigated by qPCR to identify those that could discriminate between lung disease cohorts (lung cancer and stable COPD), healthy smokers, and healthy non-smokers. Testing for 179 miRNAs specific for human serum and plasma (including internal controls) was performed using QIAGEN's miRCURY LNA miRNA Serum/Plasma Focus PCR panels. A clustergram of miRNA expression across all cohorts is shown in Figure 2.
As detailed in Table 2, profiling of plasma EV miRNA expression identified several miRNAs whose expression differed between the four cohorts.
Two plasma EV miRNAs were expressed at significantly higher levels in lung cancer participants than in healthy non-smokers, and 13 miRNAs were expressed at significantly lower levels in the lung cancer participants.
Additionally, comparison of lung cancer participants with the healthy smokers' cohort identified 14 significantly over-expressed and 12 significantly under-expressed miRNAs.
Further, comparison of lung cancer participants with stable COPD patients identified six significantly under-expressed and eight significantly over-expressed miRNAs.
Figure 2. Clustergram indicating the magnitude of miRNA expression (green = minimal, red = maximal). Clustering is grouped by cohort (control group = healthy non-smokers, group 1 = healthy smokers, group 2 = lung cancer, and group 3 = stable COPD). Table 2. Over- and under-expressed miRNA targets identified as significantly (p < 0.05) dysregulated between lung cancer participants and comparator cohorts (healthy non-smokers, healthy smokers, and stable COPD participants).
Candidate miRNA-regulated genes were identified based on the miRNA expression in lung cancer participants compared to healthy non-smokers, healthy smokers, and stable COPD patients (full list detailed in Table S1).
miRNA-regulated gene ZXDC was unique to lung cancer participants compared to healthy non-smokers, targeted by two over-expressed miRNAs. NFIB was unique to lung cancer participants compared to healthy smokers, targeted by five over-expressed miRNAs. CLCN5 was unique to lung cancer participants compared to stable COPD participants, targeted by four under-expressed miRNAs.
Correlations Identified between Significantly Dysregulated miRNAs and Clinical Characteristics
Significant differences and correlations were assessed between the identified miRNAs and relevant clinical characteristics for lung cancer participants versus the comparator groups (healthy non-smokers, healthy smokers, and stable COPD).
Two plasma EV miRNAs differentially expressed between lung cancer participants and healthy non-smokers were significantly associated with age (Table S2), while four plasma EV miRNAs were significantly associated with gender (Table S3).
For lung cancer participants, compared to healthy smokers, five plasma EV miRNAs were significantly associated with age (Table S4), while seven plasma EV miRNAs were significantly associated with gender and three miRNAs with smoking history (Table S3).
For lung cancer participants compared to stable COPD participants, five plasma EV miRNAs were significantly associated with age, two plasma EV miRNAs were associated with pack years (Table S5), while three plasma EV miRNAs were significantly associated with gender (Table S3).
Biomarker Potential of Plasma EV miRNAs Assessed by ROC Curve Analyses
To evaluate the discriminatory power of plasma EV miRNAs for distinguishing lung cancer from healthy non-smoking, healthy smoking and stable COPD groups, ROC curve analysis was performed on select miRNAs that did not have significant associations with clinical characteristics.
Lung Cancer Participants vs. Healthy Non-Smokers
A total of 15 miRNAs identified in the primary analyses were significantly dysregulated between lung cancer participants and healthy non-smoking participants. From Table S2, the expression of six miRNAs was significantly associated with age and/or gender (Table S3). ROC curve analysis was performed individually for the remaining nine significantly over- or under-expressed miRNAs (Figure 4a). The highest AUC for a single miRNA was achieved by miR-205-5p (AUC 0.850; standard error 0.061; 95% CI 0.731-0.969; p = 0.0002) (Figure 4b). The miRNAs with the highest AUC values that were not significantly correlated with each other or associated with demographic parameters were miR-205-5p and miR-199a-5p. These were included in a logistic regression model (Figure 4c), for which ROC curve analysis showed the AUC improved to 0.993 (standard error 0.009; 95% CI 0.974-1.000; p < 0.0001) (Figure 4d). Figure 4. Discriminatory power assessed using ROC curves for the miRNAs that were significantly dysregulated in lung cancer participants compared to healthy non-smoker participants. (a) Individual ROC curves for under- and over-expressed miRNAs; (b) AUC values, including standard error (Std. Error), significance and 95% confidence intervals; (c) binary logistic regression model for miR-205-5p, miR-199a-5p, and significant clinical characteristic (age); (d) ROC curve for the logistic regression model.
Lung Cancer Participants vs. Healthy Smokers
A total of 17 of the 26 miRNAs identified in the primary analyses as significantly differentially expressed in plasma EVs between lung cancer participants and healthy smokers were associated with age (Table S4), gender and/or smoking history (Table S3). Individual ROC curve analyses of the remaining nine significantly over- or under-expressed miRNAs are shown in Figure 5a. The highest AUC for a single miRNA was achieved by miR-497-5p (AUC 0.873; standard error 0.057; 95% CI 0.761-0.984; p = 0.0001) (Figure 5b). Based on the highest individual AUC values and the lack of associations with potential confounding factors, the combination of miR-497-5p and miR-22-5p was assessed in a logistic regression model including age (Figure 5c). ROC curve analysis of the model (Figure 5d) showed that the AUC improved to 0.953 (standard error 0.031; CI 0.892-1.000; p < 0.0001).
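The single-miRNA AUC values reported in these analyses can be computed directly from expression values and case/control labels: the AUC equals the fraction of case-control pairs ranked correctly (the normalized Mann-Whitney U statistic). A minimal sketch with toy values (not study data); for an under-expressed miRNA, the scores would simply be negated first:

```python
def auc_from_scores(case_scores, control_scores):
    """AUC for 'higher score indicates case', via pairwise comparison.
    Equivalent to the Mann-Whitney U statistic / (n_case * n_control)."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5  # ties count half
    return wins / (len(case_scores) * len(control_scores))

# Toy normalized expression values (hypothetical, not the study's data)
lung_cancer = [0.9, 1.4, 1.1, 2.0, 1.7]
healthy     = [0.5, 0.8, 1.0, 0.6, 1.2]
print(auc_from_scores(lung_cancer, healthy))  # 0.88
```

An AUC of 0.5 indicates no discrimination; values approaching 1.0 (such as those reported above) indicate strong separation between the two cohorts.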
Lung Cancer Participants vs. Stable COPD Participants
A total of 14 miRNAs identified in the primary analyses were significantly dysregulated between lung cancer patients and stable COPD participants. From the correlation analysis with clinical characteristics, seven miRNAs were significantly correlated with age or significantly associated with gender, smoking history and/or pack years (Tables S3 and S5). ROC curve analysis was performed for the remaining seven significantly dysregulated over- and under-expressed miRNAs (Figure 6a). The highest AUC for a single miRNA was achieved with miR-27a-3p (AUC 0.803; standard error 0.071; 95% CI 0.664-0.941; p = 0.001) (Figure 6b). Combining miR-27a-3p with the miRNA holding the second-highest individual AUC was not possible, as miR-27a-3p significantly correlated with all of the remaining miRNAs. The second-highest AUC was achieved by miR-106b-3p, which could be combined with miR-361-5p, as no significant correlations were identified with each other or with clinical characteristics (with the exception of pack years, which differed significantly between the two patient cohorts); these were therefore further assessed in a logistic regression model (Figure 6c). The model was then analyzed using ROC curve analysis (Figure 6d), with the AUC improving to 0.870 (standard error 0.064; CI 0.744-0.996; p = 0.0002). Figure 6. Discriminatory power assessed using ROC curves for the miRNAs that were significantly dysregulated in lung cancer participants compared to stable COPD participants. (a) Individual ROC curves for under- and over-expressed miRNAs; (b) AUC values, including standard error (Std. Error), significance, and 95% confidence intervals; (c) binary logistic regression model for miR-106b-3p and miR-361-5p, as well as significant clinical characteristic (pack years); and (d) ROC curve for the logistic regression model.
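A prerequisite for combining two miRNAs in one model, as done here, was that the candidates not be significantly correlated with each other. A correlation screen of this kind can be sketched with a pure-Python Pearson coefficient (toy expression vectors, not study data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical normalized expression for three candidate miRNAs
mir_a = [1.0, 2.0, 3.0, 4.0, 5.0]
mir_b = [1.1, 2.1, 2.9, 4.2, 5.0]   # tracks mir_a closely -> exclude pair
mir_c = [2.0, 1.0, 3.5, 0.5, 2.5]   # weakly related -> eligible pair

print(round(pearson_r(mir_a, mir_b), 3))  # 0.997
print(round(pearson_r(mir_a, mir_c), 3))  # 0.066
```

Highly correlated candidates carry redundant information, so only weakly correlated pairs (like `mir_a` and `mir_c` above) are worth combining in a joint regression model.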
Biological Pathways Associated with miRNAs Differentially Expressed between Lung Cancer Participants and Healthy Non-Smoking Participants
KEGG pathway analysis was used to identify significantly affected pathways from the candidate miRNAs that were significantly dysregulated in the primary analysis in lung cancer participants compared to healthy non-smoking participants (Table 3). The 15 significantly dysregulated miRNAs identified in the lung cancer cohort compared to the healthy non-smokers' cohort were enriched in three different KEGG pathways. The 'proteoglycans in cancer' pathway (hsa05205) was the most significant KEGG pathway and was enriched for the most miRNAs, with 32 enriched target genes, as well as the most genes validated by the miRNA-regulated gene targets GeneGlobe analysis.
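Over-representation analyses such as this KEGG enrichment are typically scored with a hypergeometric test: with N background genes, K genes in a pathway, and n miRNA target genes of which k fall in the pathway, the p-value is the upper tail P(X >= k). A sketch with illustrative counts (not the study's actual numbers):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail p-value P(X >= k), X ~ Hypergeometric(N, K, n).
    N: background genes, K: pathway genes, n: target genes, k: overlap."""
    total = comb(N, n)
    tail = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1))
    return tail / total

# Illustrative numbers only: 20,000 background genes, 200 genes in the
# 'proteoglycans in cancer' pathway, 500 predicted miRNA target genes,
# 32 of which fall in the pathway (expected overlap by chance: ~5)
p = hypergeom_enrichment_p(20000, 200, 500, 32)
print(p < 0.05)  # True
```

Observing 32 pathway genes where roughly 5 would be expected by chance yields a vanishingly small p-value, which is the kind of signal the pathway analysis above reports.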
Main Results
In this study, we verified the presence of EVs isolated from plasma and identified significantly dysregulated EV miRNAs that can discriminate between lung cancer cases and healthy non-smokers, healthy smokers, and stable COPD cases. These EV miRNAs and/or signatures specific to disease states have translational application as potential biomarkers, with strong diagnostic discriminatory power as evaluated using ROC curve analysis. Further, KEGG pathway analysis based on the EV miRNAs with discriminatory power between case groups indicates their involvement in disease-specific and biologically relevant pathways.
Lung Disease Biomarker Potential of Plasma EV miRNAs
There is increasing evidence to support the biomarker potential of miRNAs for diagnosis of disease. It has previously been reported that in plasma, miRNAs are concentrated in the bioactive cargo of EVs [24], which (due to their phospholipid bilayer) are highly stable in circulating bodily fluids. EV cargo reflects the physiological state and microenvironment of the cells of origin and in disease states circulating EVs contain an array of disease associated biomolecules [25].
Recent studies in various lung diseases show that certain miRNAs are differentially enriched in EVs, and that these can alter biological processes in recipient cells, thereby reflecting disease pathophysiology [19].
It is known that tobacco smoking is one of the main etiological risk factors for lung cancer and COPD development [26]. Therefore, when identifying possible diagnostic biomarkers, clinically relevant controls need to be evaluated, which in the case of lung cancer and COPD, includes smokers without these diseases.
Lung Cancer Participants vs. Healthy Non-Smokers
Our results identified miR-205-5p as the top under-expressed plasma EV miRNA, with no association with age or gender and solid discriminatory power (AUC 0.850) in distinguishing lung cancer from healthy non-smokers. This result is not concordant with previous reports; in fact, miR-205-5p has been extensively reported as significantly overexpressed in lung cancer [27], promoting metastasis and cellular invasion through an epithelial phenotype, along with increased E-cadherin and reduced fibronectin [28].
Explanations for this anomaly may include the small sample sizes of both cohorts; validation of this result is therefore required in larger, independent cohorts. Additionally, this result may reflect mechanisms involved in selective miRNA packaging into EVs, a process that is not random, as certain miRNAs can be exported into EVs while others are excluded [29], which suggests that their effect can be pathogenic for specific lung diseases [30] and not just a 'bystander'. The mechanisms behind EV miRNA packaging are complex and still under investigation. Previous reports have suggested that specific miRNA-enriched EVs can exert anti-tumorigenic effects on nearby cells [31]. A cell-surface heparan sulfate proteoglycan known as syndecan-1 has been reported to function in cancer cell signaling, and exosomes from cells expressing this proteoglycan have been shown to contain a miRNA (miR-485) that was upregulated in A549 cells [32], while another study reported this miRNA as downregulated in breast cancer tissue [33]. Additionally, miRNA profiling of plasma fractions revealed that miR-205 expression increased in tumor-specific EVs of patients with squamous cell carcinoma [27,34]; therefore, the decreased expression observed in our primarily adenocarcinoma NSCLC lung cancer participants may be expected [35]. Another alternative explanation may be that signaling molecules produced by tumor cells downregulate the expression of miRNA from normal tissue, suggesting that miRNAs from non-tumor cells may have diagnostic significance [36].
The discriminatory power for lung cancer participants compared to healthy non-smokers improved with the combination of miR-205-5p and miR-199a-5p (AUC 0.993). Previous studies have reported miR-205-5p, in combination with other miRNAs, as a validated plasma miRNA signature for early detection of lung cancer [37], as well as for differentiating squamous cell carcinoma from adenocarcinoma [38]. The dysregulation of miR-199a-5p is concordant with previous studies, being significantly under-expressed in lung cancer patients' plasma and tissue [39,40]. The under-expression of miR-199a-5p has also been demonstrated in adenocarcinoma patients and associated with a high risk for disease progression [41].
Overall, these identified plasma EV miRNAs showed strong discriminatory power for lung cancer patients and warrant further validation in a larger independent cohort.
Lung Disease Participant Cohorts
From our results, the significantly dysregulated miRNA miR-497-5p may be a potential biomarker candidate for the early detection of lung cancer, as it was identified as significantly under-expressed in lung cancer participants compared to healthy non-smokers (AUC 0.813), healthy smokers (AUC 0.873), and stable COPD participants (significantly dysregulated, but not assessable for discriminatory power due to correlations with clinical characteristics). This supports previous studies showing that, in tissue and plasma, miR-497-5p is downregulated in a number of different cancers [42][43][44][45][46]; in NSCLC, this miRNA has been suggested to function as a tumor suppressor [47,48] and has been related to disease progression, TNM stage, and distant metastases [49].
The top miRNAs significantly dysregulated in the lung cancer cohort compared to stable COPD participants included over-expressed miR-27a-3p and miR-106b-3p. These findings are concordant with previous studies, with miR-27a-3p reported as an oncogene with high expression in a number of different cancers, including lung cancer [50][51][52]. Over-expression of this miRNA has also been suggested to be involved in chemotherapy resistance, as well as disruption of the TP53/miR-27a/EGFR pathway, promoting increased cell proliferation and tumorigenesis [53]. Further, plasma exosomal miR-27a has been reported as a novel diagnostic and prognostic biomarker for colorectal cancer [54], highlighting this plasma EV miRNA's translational application as a potential biomarker for lung cancer.
In relation to miR-106b, a recent study by Sun et al. reported that serum exosomal miR-106b was significantly higher in lung cancer participants, promoting metastasis through targeting phosphatase and tensin homolog (PTEN) [55]. With regard to COPD, miR-106b has been reported as significantly under-expressed [56] and as negatively correlating with disease severity [57]. Overall, results from this study support that these identified plasma EV miRNAs have discriminatory potential to distinguish between lung cancer and COPD and therefore warrant further investigation as clinically applicable diagnostic biomarkers.
Identified miRNA-Regulated Target Genes in Each Patient Cohort and Their Involvement in Relevant Biological Pathways
Identifying which miRNAs regulate a given gene set or pathway is a key question to address in functional miRNA studies. In this study, analysis using DIANA-miRpath (v.3.0) identified specific biological pathways from the combinations of miRNAs significantly dysregulated between lung cancer and healthy non-smoker participants.
KEGG pathway analysis identified that the function of these miRNAs was enriched in the proteoglycans in cancer pathway. This is concordant with a recent study by Wu et al., who investigated EV miRNA expression in NSCLC patients and non-smoking controls and identified that the most prominent pathway enriched in NSCLC EV miRNA signatures was also the proteoglycans pathway, along with fatty acid biosynthesis [58]. Further, it has been reported that heparan sulfate proteoglycans are a functionally relevant and targetable entry pathway for cancer cell exosomes [59].
Our results further support that the dysregulated plasma EV miRNAs identified in lung cancer participants target specific genes involved in significant lung cancer biological pathways and, therefore, make strong biomarker candidates for lung cancer diagnosis and disease differentiation.
Limitations
The limitations of this study arise from its retrospective case-control design, which introduces difficulties with confounding biases. While case-control studies for novel biomarkers allow for the comparison of individuals with the outcome of interest (lung cancer) versus those without it (no lung cancer), unbalanced confounders, such as age and gender, may be encountered [60,61]. In this study, a significant difference in age between the lung disease groups and the controls was observed. A larger study with age- and gender-matched selection of cases and controls would possibly overcome these confounding biases. To adjust for this in the analyses, demographic and other possible confounding factors were included in the logistic regression models.
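The adjustment described above, fitting a logistic regression that includes both a candidate miRNA and a confounder such as age, can be sketched in pure Python. This is a toy gradient-descent implementation with hypothetical data; a real analysis would use an established statistics package:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression (illustrative only)."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            for j in range(n_feat):
                gw[j] += (p - yi) * xi[j]   # gradient of log-loss
            gb += p - yi
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_prob(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# Hypothetical rows: [normalized miRNA expression, standardized age];
# label 1 = lung cancer, 0 = comparator. Not the study's data.
X = [[2.1, 0.8], [1.9, 0.5], [2.4, 1.0], [0.6, -0.2], [0.4, -0.9], [0.8, 0.1]]
y = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(X, y)
print([round(predict_prob(w, b, xi), 2) for xi in X])
```

Including the confounder as a covariate lets the model attribute to the miRNA only the discriminatory signal that age does not already explain, which is the intent of the adjustment described above.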
Secondly, the relatively small number of cases in groups introduces multiple comparison issues and statistical bias with limitations of overfitting and over-calling significant results. Further validation of the predictive models is required as well as assessment of whether these models are well calibrated. Additionally, a larger study design would have allowed for implementation of training and test sets. Independent external dataset validation of the significant EV miRNAs and signatures would also have strengthened confidence in the reported results.
In this study, a precipitation-based method for EV isolation was used, with the advantages of being cost effective, fast, and yielding high volumes of EVs suitable for downstream miRNA analyses. However, it has been shown that precipitation methods can also co-isolate contaminating proteins, which interfere with downstream EV characterization [62]. This was observed in the western blot, with the presence of the plasma contaminating protein albumin in the EV samples. One recently reported way to overcome this issue and improve EV purity from precipitation methods is the use of proteinase K and acidification, which preserves the advantages of precipitation-based EV isolation and minimizes contamination with non-vesicle miRNA [63].
Other studies have reported different unique EV miRNAs and highlighted their potential as novel biomarker signatures for lung disease development and diagnosis. Discrepancies between these and the miRNAs identified in this study may be due to factors such as different cases, comparator groups, and circulating EV compartment (serum vs. plasma vs. whole blood derivation), as well as differing EV isolation and nucleic acid extraction methods, miRNA technology platforms and bioinformatics analyses. For this study, plasma was selected as it was known to yield EVs reliably and a method of miRNA analysis that could be implemented in a clinical setting was chosen.
Statement of Ethics Approval
Protocols and participant recruitment were approved by the Human Research Ethics Committee for the Metro North Hospital and Health Service (HREC/17/QPCH/54 and LNR/2019/QPCH/52409) and The University of Queensland (2019001147). All participants provided written informed consent. Demographics and clinical data were obtained from medical records at the time of sample collection. Clinical data included smoking history, lung cancer staging using the tumor, node, metastasis (TNM) number staging system, and COPD severity using the global initiative for chronic obstructive lung disease (GOLD) airflow limitation severity classification. A summary of the cohort's clinical characteristics can be found in Table 1.
Blood Plasma Sample Collection and Processing
Full details of blood processing and downstream methods are provided in the online supplement. Briefly, peripheral blood obtained from 80 participants (20 healthy nonsmokers, 20 healthy smokers, 20 participants with tissue diagnosed non-small cell lung cancer, and 20 participants with stable COPD) was processed to separate plasma from the blood cell fraction.
Plasma EV Isolation
Frozen plasma samples were thawed and EVs were isolated using the commercially available miRCURY Exosome Serum/Plasma Kit (QIAGEN, Hilden, Germany) according to the manufacturer's instructions. Briefly, 600 µL of thawed plasma was incubated with thrombin for defibrination, followed by EV precipitation with 500 µL of the thrombin-treated plasma and 0.4 volumes of Precipitation Buffer A. Samples were then incubated at 4 °C for 1 h and then centrifuged at 500× g for 5 min at 20 °C. The supernatant was then separated and the remaining EV pellet was either re-suspended in 200 µL 1X PBS and stored at −80 °C for downstream characterization, or re-suspended in 700 µL QIAZOL as per the manufacturer's protocol for RNA purification using the miRNeasy Mini Kit (QIAGEN, Hilden, Germany).
Nanoparticle Tracking Analysis
EVs isolated from plasma and re-suspended in 1X PBS were analyzed by nanoparticle tracking analysis (NTA) using the NanoSight NS300 instrument (Malvern Instruments, Amesbury, UK).
Western Blot of EV Markers
Protein concentration of re-suspended EVs in 1X PBS was assessed using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, USA) as per the manufacturer's instructions, followed by SDS-PAGE and western blotting for albumin, Flotillin-1, and CD9.
Plasma EV RNA Extraction and Purification
Total RNA was extracted using the miRNeasy Mini Kit (QIAGEN, Hilden, Germany) as per the manufacturer's instructions. Samples from the RT reaction were prepared with the miRCURY SYBR Green PCR Kit (QIAGEN, Hilden, Germany) and assessed for miRNA gene expression using the miRCURY LNA miRNA Serum/Plasma Focus PCR Panels (QIAGEN, Hilden, Germany) as per the manufacturer's protocol. Raw Ct values were uploaded onto the QIAGEN data analysis web portal at http://www.qiagen.com/geneglobe and normalized using the NormFinder algorithm [64][65][66], with fold change expression calculated using the ∆∆Ct method.
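The ∆∆Ct fold-change step can be made concrete: each target Ct is normalized to a reference miRNA (∆Ct), the comparator group's ∆Ct is subtracted (∆∆Ct), and fold change is 2^−∆∆Ct. A minimal sketch with hypothetical mean Ct values:

```python
def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt fold change of a target miRNA in cases vs. controls."""
    d_case = ct_target_case - ct_ref_case   # delta-Ct in cases
    d_ctrl = ct_target_ctrl - ct_ref_ctrl   # delta-Ct in controls
    ddct = d_case - d_ctrl                  # delta-delta-Ct
    return 2.0 ** (-ddct)

# Hypothetical mean Ct values (target miRNA and reference miRNA)
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
print(fc)  # 4.0: target is 4-fold higher in cases
```

Note that lower Ct means higher abundance, so a case ∆Ct two cycles below the control ∆Ct corresponds to a 2² = 4-fold over-expression.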
miRNA Function Enrichment Analysis
Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis was performed using the online software DIANA-miRPath v.3.0, and miRNA targets were also identified using TargetScan [67].
Conclusions
In conclusion, our results highlight that the bioactive cargo (miRNAs) in plasma-derived EVs holds key biological information specific to lung cancer, with diagnostic biomarker potential that warrants further investigation for translational application.
Fifteen miRNAs were significantly dysregulated between lung cancer participants and healthy non-smokers, with miR-205-5p resulting in the highest AUC (0.850) for a single miRNA, with a combination of two miRNAs (miR-205-5p and miR-199a-5p) further improving discriminatory power (AUC 0.993).
Twenty-six miRNAs were significantly dysregulated between lung cancer and healthy smoking participants, with miR-497-5p resulting in the greatest AUC (0.873) for a single miRNA, improving to AUC 0.953 in combination with miR-22-5p.
Fourteen miRNAs were significantly discriminatory between lung cancer and COPD participants, of which miR-27a-3p had the highest AUC (0.803) for a single miRNA, with a combination of two miRNAs (miR-106b-3p and miR-361-5p) further improving discriminatory power (AUC 0.870).
Future studies are needed in larger patient cohorts to validate the application of these identified plasma EV miRNAs for lung disease differentiation and prediction, as well as explore their potential mechanisms in lung disease progression.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22115803/s1. Table S1: The top miRNA-regulated genes targeting miRNAs that were identified as significantly over- or under-expressed in lung disease groups (healthy smokers, lung cancer, and stable COPD) compared to healthy controls and between different lung disease groups; Table S2: Correlation analysis between clinical characteristics and identified significantly dysregulated miRNAs for lung cancer participants compared to healthy non-smokers; Table S3: Significantly associated miRNAs with categorical clinical characteristics, including gender and smoking history; Table S4: Correlation analysis between clinical characteristics and identified significantly dysregulated miRNAs for lung cancer participants compared to healthy smokers; Table S5: Correlation analysis between clinical characteristics and identified significantly dysregulated miRNAs for lung cancer participants compared to stable COPD participants.
|
2021-06-03T06:17:22.181Z
|
2021-05-28T00:00:00.000
|
{
"year": 2021,
"sha1": "1045a8f627496a3e8113a83c7111e38fbb169baa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/11/5803/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ae28fad0b453aa8c744ccc281906ae4abbf2431",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
240258403
|
pes2o/s2orc
|
v3-fos-license
|
Fatigue Life Analysis of Aluminum Alloy Notched Specimens under Non-Gaussian Excitation based on Fatigue Damage Spectrum
In this study, a non-Gaussian excitation acceleration method is proposed, using aluminum alloy notched specimens as a research object and measured acceleration signal of a certain airborne bracket, during aircraft flight as input excitations, based on the fatigue damage spectrum (FDS) theory. .e kurtosis and skewness of the input signal are calculated and the non-Gaussian characteristics and amplitude distribution are evaluated. Five task segments obey a non-Gaussian distribution, while one task segment obeys a Gaussian distribution. .e fatigue damage spectrum calculation method of non-Gaussian excitation is derived. .e appropriate FDS calculation method is selected for each task segment and the acceleration parameters are set to construct the acceleration power spectral density, which is equivalent to the pseudo-acceleration damage. A finite-element model is established, the notch stress concentration factor of the specimen is calculated, the large mass point method is used to simulate the shaking table excitation, and a random vibration analysis is carried out to calculate the accelerated fatigue life. .e simulation results show that the relative error between the original cumulative damage and test original fatigue life is 15.7%. .e shaking table test results show that the relative error of fatigue life before and after acceleration is less than 16.95%, and the relative error of test and simulation is 24.27%..e failure time of the specimen is accelerated from approximately 12 h to 1 h, the acceleration ratio reaches 12, and the average acceleration ideal factor is 1.125, which verifies the effectiveness of the acceleration method. It provides a reference for the compilation of the load spectrum and vibration endurance acceleration test of other airborne aircraft equipment.
Introduction
The bench tests of airborne equipment are divided mainly into functional vibration tests and vibration endurance acceleration tests. The vibration endurance acceleration test can reproduce the fatigue damage of the whole service life in a relatively short time, which is of practical significance. Compared to the outfield environment test, the continuous acceleration bench test is repeatable, which can largely reduce labor and time costs. Therefore, to carry out the accelerated fatigue life bench test of airborne equipment, compilation of the frequency-domain synthetic acceleration spectrum has become a key step in the vibration endurance bench test. The selection of acceleration parameters and the determination of the equivalent acceleration time directly affect the rationality of the accelerated fatigue life. In engineering problems, many loads exhibit obvious non-Gaussian characteristics, particularly under harsh working conditions, where the non-Gaussian characteristics of the excitation are especially pronounced [1]. However, in the analysis, we usually assume that the random load on the structure obeys a stationary Gaussian distribution, which often underestimates the fatigue damage and implies a hidden danger to the equipment in service [2]. The most commonly used acceleration method is the frequency-domain synthesis method based on the fatigue damage spectrum (FDS), proposed in French military standards to evaluate the potential damage of different components under dynamic excitation [3]. Since then, extensive related studies have been carried out. Liu [4] applied super-Gaussian base excitation to a cantilever beam, studied its dynamic response and structural fatigue life, and derived Gaussian and super-Gaussian vibration acceleration models.
Cheng [5] established an analytical expression for a non-Gaussian probability density function, combined a probabilistic power spectrum with the Dirlik formula, proposed a formula for the calculation of non-Gaussian wide- and narrow-band fatigue lives, and provided a random vibration acceleration test scheme based on the failure mechanism. According to the actual operation of subway vehicles, Wang [6] carried out road spectrum simulation tests, simulated long-life acceleration spectrum tests, and standard spectrum tests based on the FDS synthetic acceleration power spectral density (PSD). The critical surface method based on the maximum principal stress and shear stress criterion was used to predict the multiaxial accelerated fatigue life. Based on the equivalence principle of fatigue damage, Qin et al. [7] converted non-Gaussian excitation into a frequency-domain PSD for the shaking table test, so that the vibration environment of the traction converter was simulated more accurately. In 2014, Lalanne [8] studied the different calculation methods and influence parameters of the extreme response spectrum (ERS) and FDS based on a single-degree-of-freedom (SDOF) system. Wolfsteiner [9] employed the higher-order spectrum to calculate the FDS of a multi-degree-of-freedom (MDOF) system to estimate the damage of the system under non-Gaussian excitation. Wen [10] employed the load signal of the front axle of an 88-kW tractor, considering the effects of load amplitude and material fatigue, and proposed an accelerated durability testing method based on PSD FDS editing (PD-LSD). This ensured that the accelerated endurance test could reproduce the fatigue load characteristics of the tractor assembly. Based on the FDS and test synthesis method, Cianetti [11] compared the fatigue damage caused by acceleration excitation under different conditions and verified the proposed acceleration method with a durability test case.
Conventional vibration endurance acceleration tests usually ignore the transfer process from excitation to response and the non-Gaussian characteristics of excitation signals [12]. At present, there is no good acceleration method for mixed non-Gaussian and Gaussian excitation, and the selection of acceleration parameters for non-Gaussian excitation is based only on experience. It is therefore impossible to obtain an accurate accelerated excitation PSD for the shaking table test. In this study, considering the measured non-Gaussian excitation of an airborne equipment mounting bracket during aircraft flight as an input, a non-Gaussian excitation acceleration method based on the FDS is proposed. The calculation methods of the fatigue damage spectrum are summarized, the advantages and disadvantages of each method are compared, the influence of the acceleration parameters on the accelerated PSD is explored, and appropriate acceleration parameters are selected to deal with non-Gaussian excitation. Finally, simulations and experiments are carried out to verify the effectiveness of the acceleration method and the accuracy of the finite-element model.
Theoretical analysis and accelerated fatigue model
In order to edit the input load spectrum in the frequency domain or time domain, it is necessary to distinguish the non-Gaussian or Gaussian characteristics of the input excitation. Then the fatigue damage spectrum of each working condition or task segment is calculated using the appropriate FDS calculation method and synthesized by linear superposition. Finally, the accelerated PSD used in the bench vibration test is calculated with the acceleration formula based on the principle of damage equivalence.
Discriminant principle of non-Gaussian characteristics.
The PSD is a second-order statistic. For a stationary Gaussian vibration with a mean value of 0, the PSD can capture the statistical characteristics of the random vibration [13]. For non-Gaussian random processes, the information contained in the PSD is insufficient. The load on equipment in aircraft operation is generally a non-Gaussian excitation; therefore, it is necessary to distinguish the non-Gaussian parts of an input excitation. For a random process X(t), the skewness S and kurtosis K are commonly used in engineering as third-order and fourth-order random vibration statistics. With mean μ and standard deviation σ, they are calculated as

S = E[(X(t) − μ)^3] / σ^3, (1)

K = E[(X(t) − μ)^4] / σ^4. (2)

In engineering, when the kurtosis of a random signal is K = 3 and S = 0, the random excitation is a Gaussian process; when K > 3, the random excitation is super-Gaussian; and when K < 3, it is sub-Gaussian [14]. Additionally, to obtain the amplitude distribution properties of the input excitations, the cycle counts in each excitation amplitude interval are tallied and contrasted with the Gaussian distribution to see whether the excitation amplitudes obey a Gaussian distribution.
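As a hedged illustration of equations (1)-(2), the short Python sketch below (the function names are ours, not from the paper's code) estimates S and K from a sampled signal and applies the K > 3 / K < 3 classification rule described above:

```python
import numpy as np

def skewness_kurtosis(x):
    """Estimate skewness S and kurtosis K per equations (1)-(2)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()  # population standard deviation
    S = np.mean((x - mu) ** 3) / sigma ** 3
    K = np.mean((x - mu) ** 4) / sigma ** 4
    return S, K

def classify(K):
    """Gaussian if K ~ 3, super-Gaussian if K > 3, sub-Gaussian if K < 3."""
    if K > 3.0:
        return "super-Gaussian"
    if K < 3.0:
        return "sub-Gaussian"
    return "Gaussian"
```

For example, a square-wave-like signal alternating between +1 and −1 has S = 0 and K = 1, so it is classified as sub-Gaussian; a practical classifier would add a tolerance band (e.g., the 3-3.4 kurtosis band used later in this paper).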
Gaussian frequency domain fatigue damage models.
The estimation methods for the fatigue life of random vibration are generally divided into time-domain and frequency-domain methods. In the frequency-domain method, the response stress power spectral density (PSD) is used to predict the fatigue life of engineering structures through its spectral parameters, the S-N curve of the material, and an appropriate damage theory. In this process, only the spectral parameters describing the statistical characteristics of the random process need to be calculated, which avoids directly dealing with the complex stress time history and greatly shortens the time and workload.
Shock and Vibration
For Gaussian stochastic processes, the power spectral density function G(f) is used to describe the frequency-domain characteristics of the excitation, and the statistical characteristics of the PSD can be expressed by its spectral moments [15]. Given a random signal X(t) with one-sided PSD G(f), the spectral moment of order n is

m_n = ∫₀^∞ f^n G(f) df. (3)

The moment of order zero is none other than the square of the RMS value X_rms(t):

m₀ = X_rms². (4)

The zero-crossing frequency E(0) (the number of times zero is crossed from below per unit time) and the peak frequency E(p) (the number of peaks in the sample per unit time) can be approximately estimated from the moments as

E(0) = sqrt(m₂/m₀), E(p) = sqrt(m₄/m₂). (5)

Random vibration theory introduces the irregularity factor γ and the spectral width coefficient ε,

γ = E(0)/E(p) = m₂ / sqrt(m₀ m₄), ε = sqrt(1 − γ²), (6)

to distinguish whether a stationary random process is a narrow-band process or a broad-band process.
When ε → 0 or γ → 1, the stochastic process is a narrow-band process; when ε → 1 or γ → 0, the stochastic process is a broad-band process; in particular, when γ = 0, the stochastic process can be regarded as white noise. In engineering, it is generally accepted that the amplitude probability density function of a narrow-band random process tends to the Rayleigh distribution when 0 ≤ ε ≤ 0.3 or 0.7 ≤ γ ≤ 1. When 0.7 ≤ ε ≤ 1 or 0 ≤ γ ≤ 0.3, the excitation can be regarded as a broad-band stochastic process, for which many models are available in fatigue theory [16]. The amplitude time histories of narrow-band and broad-band random vibration are shown in Figure 1.
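To make the narrow-band/broad-band test concrete, the sketch below (helper names are ours; it assumes the spectral-moment definition m_n = ∫ f^n G(f) df used above) computes E(0), E(p), γ, and ε for a tabulated one-sided PSD:

```python
import numpy as np

def moment(f, G, n):
    """Spectral moment m_n = integral of f^n * G(f) df (trapezoidal rule)."""
    y = f ** n * G
    return float(np.sum((y[:-1] + y[1:]) * np.diff(f)) / 2.0)

def bandwidth_params(f, G):
    m0, m2, m4 = (moment(f, G, n) for n in (0, 2, 4))
    E0 = np.sqrt(m2 / m0)            # zero up-crossing frequency
    Ep = np.sqrt(m4 / m2)            # peak frequency
    gamma = m2 / np.sqrt(m0 * m4)    # irregularity factor
    eps = np.sqrt(1.0 - gamma ** 2)  # spectral width coefficient
    return E0, Ep, gamma, eps

# A flat one-sided PSD confined to 9-11 Hz behaves as a narrow-band process:
f = np.linspace(9.0, 11.0, 2001)
E0, Ep, gamma, eps = bandwidth_params(f, np.ones_like(f))
```

For this narrow band, γ is close to 1 and ε is below 0.3, so the Rayleigh amplitude assumption of the next subsection applies.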
For continuous time histories of random stress, the general expression of fatigue damage can be written in the form of the following integral [17]:

D = (N₀ T / C) ∫₀^∞ σ^b ρ(σ) dσ, (9)

where ρ(σ) denotes the probability density function of the random stress response on the specimen, T is the total time of exposure to the random vibration excitation, N₀ is the average number of zero up-crossings per unit time in the stress time history, and b and C are the parameters of the S-N curve (N σ^b = C). If the stress response obeys a narrow-band distribution, i.e., the irregularity factor γ tends to 1 [18], the probability density function is the Rayleigh distribution:

ρ(σ) = (σ / σ_rms²) exp(−σ² / (2σ_rms²)), (10)

where σ_rms is the root mean square of the stress response. Substituting (10) into (9) gives

D = (N₀ T / C) (√2 σ_rms)^b Γ(1 + b/2). (11)

The above equation uses the Rayleigh distribution as the amplitude probability density function. For an SDOF system, the stress is proportional to the relative displacement (σ = K z), so the damage can be expressed through the RMS relative displacement response z_rms; when the input is an acceleration PSD G(f), z_rms can be simplified using the Miles approximation

z_rms² ≈ Q G(f_n) / (64 π³ f_n³), Q = 1/(2ζ). (13)

Under the assumption of small damping, N₀ ≈ f_n. Substituting into (11) yields the FDS expression [8]:

FDS(f_n) = (K^b / C) f_n T (√2 z_rms)^b Γ(1 + b/2), (14)

where f_n is the natural frequency of the SDOF system, K is the stiffness (stress-to-displacement factor), b and C are material parameters, and Γ is the gamma function, Γ(g) = ∫₀^∞ x^(g−1) e^(−x) dx. If the stress response obeys a broad-band distribution, i.e., the irregularity factor γ tends to 0, the Wirsching and Dirlik probability density functions can be used to calculate the broad-band stress FDS [17].
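A minimal sketch of the narrow-band FDS evaluation (K and C default to 1; all names are ours and the RMS relative displacement is taken as a given input rather than derived from a PSD):

```python
import math

def fds_narrowband(fn, z_rms, T, b, K=1.0, C=1.0):
    """Narrow-band FDS: (K^b / C) * fn * T * (sqrt(2)*z_rms)^b * Gamma(1 + b/2)."""
    return (K ** b / C) * fn * T * (math.sqrt(2.0) * z_rms) ** b \
        * math.gamma(1.0 + b / 2.0)
```

As a sanity check, for b = 2 the gamma factor Γ(2) = 1 and the expression reduces to 2 · fn · T · z_rms², so fn = 10 Hz, T = 100 s, z_rms = 0.5 gives a damage value of 500 (in the normalized units K = C = 1).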
The Wirsching correction to the narrow-band fatigue damage is expressed as (15) [19]:

FDS_W(f_n) = λ_R · FDS(f_n), (15)

where FDS(f_n) is the narrow-band damage and λ_R is the correction coefficient for the rainflow counting method:

λ_R = a(b) + [1 − a(b)] (1 − ε)^c(b), a(b) = 0.926 − 0.033b, c(b) = 1.587b − 2.323. (16)

Dirlik [20] proposed an empirical expression for the probability density of rainflow half-ranges as a mixture of one exponential and two Rayleigh densities, whose weights are functions of the spectral moments m₀, m₁, m₂, and m₄. Substituting the Dirlik probability density (18) into the original fatigue damage expression (9), the broad-band FDS calculation formula can be derived [8], where z_rms is the RMS value of the displacement response and T is the duration.
Non-Gaussian time domain fatigue damage models.
If the excitation cannot be characterized by a PSD, such as a nonstationary or non-Gaussian signal, the relative displacement response can only be obtained by transient methods. For solving the relative displacement response of an SDOF system, the Duhamel integral is generally used [21]:

z(t) = −(1/ω_d) ∫₀^t p(τ) e^(−ζω(t−τ)) sin[ω_d (t − τ)] dτ, ω_d = ω sqrt(1 − ζ²),

where p(τ) is the random excitation load, ω is the natural circular frequency, and ζ is the damping ratio. Although the Duhamel integral gives the most accurate result, its computation is relatively slow; when a large number of non-Gaussian signal points require response calculation, solving the analytical solution by the Duhamel integral cannot meet the requirements. Smallwood [22] put forward an improved recursive formula for the shock response spectrum in 1981, which includes a fast recursive formula for the relative displacement model. The relative displacement response model and the absolute acceleration response model are shown in Figure 2. The stress response is then obtained as σ = K z_p, where K is the linear multiplier between relative displacement and stress (linear system) and z_p is the relative displacement response of the system. The fatigue damage can be obtained by rainflow counting of the stress response. After calculating the cumulative damage at each natural frequency of the linear system, a frequency-damage curve is drawn. Figure 3 shows the detailed flow of solving the FDS under non-Gaussian excitation.
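The Duhamel integral above can be evaluated numerically as a discrete convolution of the excitation with the SDOF impulse response. The sketch below is our own straightforward implementation (not Smallwood's optimized recursion) for the relative displacement response to a base acceleration history:

```python
import numpy as np

def sdof_relative_displacement(acc, dt, fn, zeta=0.05):
    """Relative displacement z(t) of an SDOF system under base acceleration acc,
    via discrete Duhamel convolution:
    z(t) = -(1/wd) * int_0^t acc(tau) e^{-zeta*w*(t-tau)} sin(wd*(t-tau)) dtau."""
    w = 2.0 * np.pi * fn
    wd = w * np.sqrt(1.0 - zeta ** 2)
    n = len(acc)
    t = np.arange(n) * dt
    h = -(dt / wd) * np.exp(-zeta * w * t) * np.sin(wd * t)  # impulse response * dt
    return np.convolve(acc, h)[:n]
```

A useful sanity check: a constant base acceleration a0 held long enough settles to the quasi-static relative displacement z = −a0/ω², which the discrete convolution reproduces to within a few percent at a reasonable time step. The resulting stress history σ = K z can then be fed to a rainflow counter.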
Basic principle of frequency-domain synthesis acceleration PSD
In this study, vibration data of a certain type of airborne equipment installation bracket during flight (take-off, climb, flat flight 1, flat flight 2, descent, and landing) were used as the excitation input. Aluminum alloy notched specimens were used as the research objects, and strain data were collected before and after the accelerated bench vibration test. The effectiveness of the accelerated PSDs based on different frequency-domain synthetic acceleration methods was tested and the whole fatigue life cycle of the aluminum alloy notched specimens was examined.
Measured acceleration excitation input and discrimination.
The climb and flat flight 2 task segments were selected as examples; their distribution test results are shown in Figure 4. The kurtosis and skewness of each task segment were calculated, as shown in Figure 5.
The comparative analysis shows that, if the kurtosis is larger than 3.4, the signal can be considered non-Gaussian, while, if the kurtosis is between 3 and 3.4, the signal can be considered Gaussian [23]. In summary, in the mission profile, flat flight 2 corresponds to a Gaussian signal and, because of its large number of sampling points, a fast FDS calculation method can be used later. The other mission segments (take-off, climb, flat flight 1, descent, and landing) correspond to non-Gaussian signals.
Selection of acceleration parameters.
The FDS of each task segment is calculated first, and the FDS calculation method needs to be chosen according to the load type. Second, appropriate acceleration parameters are selected to calculate the bench test accelerated PSD. The choice of acceleration parameters has a great influence on the fatigue results of the aluminum alloy notched specimens.
Selection of FDS calculation method.
First of all, the effects of different fatigue calculation methods on the FDS after the excitation passes through the SDOF system are compared. Taking the flat flight 2 signal as an example, the FDS results of the different methods were obtained by programming, as shown in Figure 6.
Based on the RMS values of the FDS and accelerated PSD obtained by the time-domain method, the FDS calculated by the rainflow method is the most accurate, but its calculation speed is slow. The fatigue damage from the simplified Rayleigh distribution fluctuates strongly and has more peak points, because the input excitation is assumed to be white noise when solving for the RMS value of the relative displacement. The FDSs calculated by the Wirsching and Dirlik methods are very close to those of the rainflow counting method. The following analysis compares the differences between these methods from different angles and provides suggestions for selection. The advantages and disadvantages of the various FDS methods are compared in three aspects: load applicability, calculation speed, and calculation accuracy. The calculation speed evaluates the time spent synthesizing the FDS with each acceleration method. For the evaluation of calculation accuracy, the relative errors between the RMS value of the accelerated PSD based on the rainflow method and the RMS values based on the other FDS calculation methods are compared. Table 1 shows that the time-domain rainflow counting result is the most accurate but the slowest to calculate, so it is suitable for non-Gaussian data. The simplified Rayleigh method is the fastest and most suitable for Gaussian excitation with a large number of points. If the number of sampling points is small, the Dirlik and Wirsching methods can be used to improve the accuracy of the FDS calculation for Gaussian excitation.
Setting of acceleration parameters.
According to the acceleration process, the acceleration parameters can be roughly divided into system parameters, material parameters, sampling parameters, FDS estimation parameters, and the equivalent acceleration time. The selection of different acceleration parameters has a significant impact on the final accelerated fatigue life. Therefore, in this study, the system parameter (SDOF stiffness K) and the material parameters (b and C in the S-N curve) are varied to provide contrastive accelerated PSDs, as shown in Figure 7. Meanwhile, by changing the value of b, the relationship between the preset equivalent acceleration time and the pre-simulated accelerated life is shown in Figure 8. It can be seen from these two figures that the acceleration effect is best when the equivalent acceleration time is 1 h and b = 5.
Establishment of acceleration spectrum.
The PSD calculation formula for the synthetic accelerated test within the equivalent time T_eq is obtained by inverting the narrow-band FDS expression (14) together with the Miles approximation (13) [24]:

G(f_n) = (32 π³ f_n³ / Q) [ k · FDS(f_n) · C / (K^b f_n T_eq Γ(1 + b/2)) ]^(2/b), (23)

where FDS(f_n) is the total damage of each working condition, k is the safety factor, and T_eq is the preset equivalent test time. The non-Gaussian and Gaussian excitation signals of all task segments are accelerated, taking the proportions of both Gaussian and non-Gaussian task phases into account. According to the original fatigue life vibration test results in Section 5.1, each Gaussian and non-Gaussian task phase is assigned a number of FDS cycles, as shown in Table 2.
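The damage-equivalence principle behind this synthesis also implies the classic time-compression rule for a PSD: holding Miner damage constant while shortening the duration raises the spectrum level by the exaggeration factor (T_orig/T_eq)^(2/b). A hedged sketch (function name is ours):

```python
import numpy as np

def accelerate_psd(G, T_orig, T_eq, b):
    """Scale a PSD so that T_eq seconds produce the same Miner damage as
    T_orig seconds at the original level: G_acc = G * (T_orig/T_eq)^(2/b)."""
    G = np.asarray(G, dtype=float)
    return G * (T_orig / T_eq) ** (2.0 / b)
```

For the numbers used in this paper, compressing 12 h into 1 h with b = 5 raises the PSD level by a factor of 12^(2/5) ≈ 2.70, which is why moderate time compression remains realistic for the specimen.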
The accelerated excitation PSD representing the whole life for the bench vibration test is shown in Figure 9. A finite-element simulation and bench test verification are carried out in the following sections.
Finite-element model.
A one-to-one finite-element model was established according to the structure of the specimen and tooling model, as shown in Figure 10. To determine the patch position, we need to obtain the stress ratio between the patch element and the dangerous element, i.e., the stress concentration factor. The simulation mainly provides the fatigue life of the dangerous element, while the strain response of the patch element was obtained in the bench test. As the acceleration excitation in this study had non-Gaussian characteristics, the quasi-static superposition method and the transient response method were used to calculate the original fatigue life, while the uniaxial harmonic response method was used to calculate the accelerated fatigue life. Simultaneously, the large mass point method was used to simulate the input excitation of the shaking table. The stress concentration factor was calculated by applying a vertical force of 10 N at the end of the specimen and evaluating the ratio of the maximum principal stress of the dangerous element to that of the patch element. The calculated results are shown in Figure 11. The stress concentration factor was 5.3.
Selection of the S-N curve.
In this study, the specimen material was a 7050-T7451 aluminum alloy. According to EN 1999-1-3, the fatigue strength is 140 MPa at 2 × 10^6 cycles, with a survival rate of 97.7% for the S-N curve, as shown in Figure 12. There are two curves in the figure: the S-N curve at 2 × 10^6 cycles gives the fatigue strength of the base metal with defects, and the S-N curve at 1 × 10^8 cycles gives the fatigue strength of the base metal without defects. Because the aluminum alloy specimen in this paper has a notch, 2 × 10^6 is selected as the fatigue cycle.
Calculation of the Fatigue Damage at the Notch of the Specimen.
In this study, all elements at the notch of the aluminum alloy specimen were selected to calculate the fatigue damage. The model transfer function was obtained through a harmonic response analysis based on the modal superposition method, and the accelerated PSD excitation was used as the simulation excitation input. The stress response PSD of the model can be obtained using

Y(f_n) = |H(f_n)|² G(f_n), (24)

where H is the transfer function of the model, G(f_n) is the input acceleration PSD excitation, and Y(f_n) is the PSD response of the system stress. The rainflow amplitude probability density function ρ(σ) of the stress amplitude is obtained using the Dirlik probability density frequency-domain fatigue model, and the damage is accumulated according to the Miner criterion.
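As a hedged sketch of the response calculation Y(f) = |H(f)|² G(f), the code below assumes the common base-excited SDOF relative-displacement FRF (a stand-in for the FE modal transfer function; all names are ours):

```python
import numpy as np

def sdof_disp_frf_mag(f, fn, zeta):
    """|H(f)|: base acceleration -> relative displacement for an SDOF system,
    |H| = 1 / sqrt((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)."""
    w = 2.0 * np.pi * np.asarray(f, dtype=float)
    wn = 2.0 * np.pi * fn
    return 1.0 / np.sqrt((wn ** 2 - w ** 2) ** 2 + (2.0 * zeta * wn * w) ** 2)

def response_psd(f, G, fn, zeta):
    """Y(f) = |H(f)|^2 * G(f), as in equation (24)."""
    return sdof_disp_frf_mag(f, fn, zeta) ** 2 * np.asarray(G, dtype=float)

# White-noise acceleration input; the response PSD peaks near resonance.
f = np.linspace(1.0, 100.0, 991)  # 0.1 Hz grid
Y = response_psd(f, np.ones_like(f), fn=50.0, zeta=0.05)
```

The RMS of the response (for feeding into a Dirlik or narrow-band damage model) then follows by integrating Y(f) over frequency.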
The fatigue damages before and after the acceleration are calculated and compared. The accelerated fatigue damage cloud diagram of the aluminum alloy notched specimen (damage per second) is shown in Figure 13. The selection of the acceleration parameters and fatigue strength during the simulation leads to different accelerated fatigue lives. Therefore, to analyze the influence of the acceleration parameters on the accelerated fatigue life of the aluminum alloy notched specimen, Figure 14 compares the damage change law of the notch element as the acceleration parameter b changes. Generally, a larger b led to a larger accelerated damage of the dangerous element. According to a comparative study, the system parameter K and the material parameter C affect only the FDS, but do not affect the final accelerated fatigue life [8].
Comparison of damage before and after the acceleration.
Owing to the large number of notch elements, the most dangerous notch element in the simulation analysis is selected to list the fatigue damage calculation results. The simulated cumulative damages before and after the acceleration are compared to the test original fatigue life. The relative error of the calculation is shown in Table 3. The comparative analysis shows that, using the uniaxial harmonic response simulation method, the calculated original fatigue life is close to the test original fatigue life; the relative error is 15.7%. The relative error between the simulated accelerated cumulative damage and the test cumulative damage (calculated at D = 1) is 20.5%. With the uniaxial harmonic response method, the accelerated damage of the dangerous element is larger than the original damage. The error is related to the calculation method of the original fatigue life and the linear accumulation of damage, and is within the allowable range, which verifies the effectiveness of the accelerated method used in this study. As listed in Table 4, the average original fatigue life of the aluminum alloy notched specimens was 42095 s. The purpose of the original fatigue life vibration test is to obtain the fatigue life of the aluminum alloy notched specimens under mixed Gaussian and non-Gaussian excitation, so as to calculate the number of FDS cycles that represents the whole life.
Vibration endurance acceleration test.
To further verify the effectiveness of the random vibration excitation acceleration method used in this study, uniaxial original-spectrum time-domain simulation tests and vibration endurance acceleration tests were carried out. A group of six aluminum alloy notched specimens was tested. Strain gauges were bonded near the notches on the upper surfaces of the specimens, strain responses were measured, and accelerometers were installed on the shaking table to compare the consistency of the input and output excitations. In this experiment, six unidirectional strain gauges and two accelerometers were used, for a total of 12 channels. Using the crack time as a reference, the failure time of each specimen was recorded and the strain was measured. The test table is arranged as shown in Figure 17. The fracture fatigue lives of the specimens were recorded and the measured cumulative strain lives were calculated from the strain signals, as shown in Table 5. The relative errors of the test (calculated from the preset equivalent acceleration time and the crack initiation fatigue life) and of the simulation (calculated from the measured cumulative strain life and the simulated cumulative life) were computed, together with the mean absolute relative errors. Finally, the frequency-domain characteristics of the stress signal at the dangerous point of each specimen were calculated and the relative error between the mean test life and the mean simulation life was compared, as shown in Table 6. (Table 4 lists the original fatigue lives of specimens Y1_1 through Y1_6: 40985 s, 43445 s, 41705 s, 40925 s, 41585 s, and 43925 s, with a mean of 42095 s.) Through the bench vibration tests, the finite-element simulation, and the comparison of all kinds of fatigue life, the accelerated fatigue life in the bench test is slightly larger than the preset equivalent accelerated fatigue life.
The average accelerated fatigue life is 67.5 min, and the relative error with respect to the preset equivalent accelerated fatigue life is 16.95%. The average relative error between the measured strain accumulation and the accelerated fatigue life of the finite-element simulation is 27.55%. The causes of the error are differences in the finite-element model and in the fatigue calculation methods. In both the bench test and the finite-element simulation, the average relative error is within the allowable limit. The effectiveness of the non-Gaussian acceleration method and the accuracy of the finite-element accelerated fatigue life simulation were verified.
Acceleration ideal factor calculation.
The acceleration ideal factor is used to define the acceleration effect. It is equal to the failure fatigue life of the specimens divided by the equivalent acceleration time. An acceleration ideal factor closer to 1 implies a better acceleration effect. An acceleration ideal factor much greater than 1 indicates that the acceleration excitation does not reproduce the long-life operation of the equipment within the effective time; a factor close to 0 indicates that the acceleration is excessive. The calculation of the ideal acceleration factor is shown in Figure 18. The ideal factor of the test acceleration is slightly larger than the simulation result owing to the cumulative results of strain damage, which is related to the temperature change in the test and the control accuracy of the shaking table; the randomness of the test is relatively large. The average acceleration ideal factor is 1.125, indicating that the acceleration is not excessive and the excitation can reflect the total service life of the specimen within the specified time. (Figure 17: specimens J_#1-6 of the vibration endurance accelerated test.)
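The bookkeeping in this section reduces to two simple ratios; the sketch below (function names are ours) reproduces the paper's reported numbers:

```python
def acceleration_ratio(original_life_s, equivalent_time_s):
    """How strongly the test is compressed in time."""
    return original_life_s / equivalent_time_s

def ideal_factor(failure_life_s, equivalent_time_s):
    """Closer to 1 means the accelerated test consumes exactly the design life."""
    return failure_life_s / equivalent_time_s

# Paper's numbers: ~12 h of service compressed into a 1 h test,
# with a mean observed failure time of 67.5 min.
ratio = acceleration_ratio(12 * 3600, 3600)
phi = ideal_factor(67.5 * 60, 3600)
```

With these inputs, `ratio` is 12 and `phi` is 1.125, matching the acceleration ratio and average acceleration ideal factor reported above.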
Conclusion
In this study, an acceleration method to address non-Gaussian excitation is proposed based on fatigue damage spectrum theory. An aluminum alloy notched specimen is designed, and the bench test and finite-element simulation are carried out simultaneously to verify the effectiveness of the method.
(1) The calculation formulas of the fatigue damage spectrum under Gaussian and non-Gaussian excitation are derived, and the flow chart for calculating the fatigue damage spectrum of the acceleration excitation by the rainflow counting method is drawn. Simultaneously, a discrimination method for non-Gaussian excitation was proposed and the kurtosis and skewness of each task segment were calculated. The Gaussian distribution characteristics of each task segment were tested and an amplitude distribution map was drawn.
The flat flight 2 mission segment corresponded to a Gaussian excitation, while the remaining task segments corresponded to non-Gaussian excitations.
(2) The effects of different FDS methods on the acceleration spectra were analyzed. The simplified Rayleigh method was most suitable for calculations with a large number of data points; the Dirlik method had the highest calculation accuracy; the rainflow counting method was suitable for non-Gaussian loads. The acceleration effect was best when the acceleration parameter was b = 5 and the equivalent acceleration time was 1 h.
(3) The finite-element model of the aluminum alloy notched specimen and tooling was established. The stress concentration factor was 5.3. The fatigue lives before and after the acceleration were calculated by the uniaxial harmonic response method and compared to the original test life. The simulation results showed that the relative error of the original cumulative damage of the dangerous element was 15.7% and the relative error of the accelerated cumulative damage was 20.5%. The changes in the acceleration parameters revealed that a higher b led to a larger accelerated cumulative damage. (4) The original fatigue life vibration test and the accelerated fatigue life test were carried out. The bench test results of the original fatigue life show that the number of non-Gaussian FDS cycles is 1, and the numbers of Gaussian FDS cycles are 23.8 and 11.46, respectively. The bench test results of the accelerated fatigue life show that the relative error of the fatigue life before and after acceleration is less than 16.95%, and the relative error between test and simulation is 24.27%. The failure time of the specimen was accelerated from approximately 12 h to 1 h; the acceleration ratio reached 12. The acceleration method can process a signal with mixed non-Gaussian and Gaussian excitations into an accelerated PSD as a bench test and simulation input. The purpose of this study was to provide a reference for the compilation of load spectra and vibration endurance acceleration tests of other aviation equipment.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
|
2021-10-31T15:07:07.182Z
|
2021-10-28T00:00:00.000
|
{
"year": 2021,
"sha1": "9b3807ac560ddf860690167ccd2c2336f467434d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/sv/2021/6887951.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6b0fdd3f7e5fba5ba38608362a283172264133e9",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
}
|
239009450
|
pes2o/s2orc
|
v3-fos-license
|
Transforming Autoregression: Interpretable and Expressive Time Series Forecasts
Probabilistic forecasting of time series is an important matter in many applications and research fields. In order to draw conclusions from a probabilistic forecast, we must ensure that the model class used to approximate the true forecasting distribution is expressive enough. Yet, characteristics of the model itself, such as its uncertainty or its general functioning are not of lesser importance. In this paper, we propose Autoregressive Transformation Models (ATMs), a model class inspired from various research directions such as normalizing flows and autoregressive models. ATMs unite expressive distributional forecasts using a semi-parametric distribution assumption with an interpretable model specification and allow for uncertainty quantification based on (asymptotic) Maximum Likelihood theory. We demonstrate the properties of ATMs both theoretically and through empirical evaluation on several simulated and real-world forecasting datasets.
INTRODUCTION
In prediction tasks, a common notion of uncertainty is the distinction between aleatoric uncertainty and epistemic uncertainty (Hüllermeier and Waegeman, 2021). Aleatoric uncertainty stems from the inherent randomness of the data generating process characterized by the cumulative distribution function (CDF) F*_{Y|x}(y|x), relating the outcome y to features x, and is thus irreducible. (Figure 1: Comparison of probabilistic forecasting approaches with the proposed method (ATM) for a given time series (red line). While other methods are not expressive enough and tailored towards a simple unimodal distribution, our approach allows for complex probabilistic forecasts (here a bimodal distribution) as well as the quantification of parametric uncertainty (darker shaded area at the top of the density showing what we call a density confidence interval).) Epistemic uncertainty, in contrast,
is a model characteristic that can be reduced to zero with growing data size and appropriate model complexity. Approaches that can explicitly model or quantify both uncertainties are, however, less common or restricted in their expressiveness. Standard Gaussian processes (GP), for example, model both types of uncertainty, but with the aleatoric uncertainty (the observation noise term) being restricted to homoscedasticity. A heteroscedastic GP (Lázaro-Gredilla and Titsias, 2011) or a GP conditional density estimator (Dutordoir et al., 2018) address this issue, but are still based on a pre-specified parametric distribution assumption. Bayesian approaches (e.g., Blundell et al., 2015) induce uncertainties by imposing distributions for model weights θ and rely on a (predictive) posterior for inference statements that also accounts for arXiv:2110.08248v1 [cs.LG] 15 Oct 2021 the epistemic uncertainty. Yet, the resulting inference is still based on an (approximate) parametric distribution that is in many cases too restrictive (Kingma et al., 2016) and influenced by prior assumptions.
A recent, popular solution to the limited expressiveness of parametric posterior distributions is normalizing flows (Papamakarios et al., 2019). Flows turn a simple distribution into a potentially complex one using (multiple) parameterized transformations and thereby allow minimizing the discrepancy between the model's CDF F_{Y|x}(y|x, θ) and the true CDF F*_{Y|x}(y|x). This discrepancy, the so-called structural uncertainty, is only one part of the epistemic uncertainty. Another source of epistemic uncertainty, the parametric uncertainty (PU), stems from estimating the usually unknown parameters θ (given a fixed model structure) and is not accounted for by flow-based methods (Liu et al., 2019). Other approaches for epistemic uncertainty quantification (UQ) in machine learning, such as deep ensembles (Lakshminarayanan et al., 2017), quantify model uncertainty, but are not well studied for time series or do not allow for specific treatment of the aleatoric uncertainty. On top of that, these methods are also difficult to interpret. While approaches like ensembles perform well in quantifying predictive uncertainty, it is not immediately clear which parts of the model cause uncertainty. Ideally, we would like to conduct probabilistic forecasting of time series that is both more expressive than parametric (deep) autoregressive models and also allows for interpretation as well as parametric UQ (cf. Figure 1).
Our contributions In order to achieve these goals, we propose a new and general class of semi-parametric autoregressive models for time series analysis called autoregressive transformation models (ATMs; Section 3) that learn expressive distributions based on interpretable parametric transformations. We also derive asymptotic results for estimated parameters for a special class of ATMs (Section 4.2) and thereby allow for parametric UQ in a non-Bayesian manner, i.e., without the need of specifying prior distributions. Finally, we provide evidence for the efficacy of our proposal both with numerical experiments based on simulated data and by comparing ATMs against other existing state-of-the-art methods in benchmarks.
BACKGROUND
Approaches that model the conditional density can be distinguished by their underlying distribution assumption. Approaches can be parametric, such as mixture density networks or GPs for conditional density estimation, which learn the parameters of a pre-specified parametric distribution, or non-parametric such as Bayesian non-parametrics (Dunson, 2010). A third line of research, which we describe as semi-parametric or parametric transforming, comprises approaches that start with a simple parametric distribution assumption F_Z and end up with a far more flexible distribution F_{Y|x} by transforming F_Z (multiple times). Such approaches have sparked great interest in recent years, triggered by research ideas such as density estimation using non-linear independent components estimation or real-valued non-volume preserving transformations (Dinh et al., 2015, 2017). A general notion of such transformations is known as normalizing flow (NF; Papamakarios et al., 2019), where realizations z ∼ F_Z of an error distribution F_Z are transformed to observations y via a composition of k transformation functions. Many different approaches exist to define expressive flows. These are often defined as a chain of several transformations or an expressive neural network and allow for universal representation of F_{Y|x} (Papamakarios et al., 2019). Autoregressive models (e.g., Bengio and Bengio, 1999; Uria et al., 2016) for distribution estimation of continuous variables are a special case of NFs, more precisely autoregressive flows (AFs; Kingma et al., 2016; Papamakarios et al., 2017), with a single transformation. Similarly, suitably parameterized transformation models (TMs; Hothorn et al., 2014) are an alternative to NFs, also using only a single transformation function. The transformation in TMs is chosen to be expressive enough on its own and comes with desirable approximation guarantees. Instead of a transformation from z to y, TMs define an inverse flow h(y) = z.
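The single-transformation inverse flow h(y) = z implies the change-of-variables density f_Y(y) = f_Z(h(y)) |∂h(y)/∂y|. A minimal numerical sketch of this identity (our illustration, with a standard normal F_Z and h = log as assumed example):

```python
import numpy as np

def std_normal_pdf(z):
    """Density of the error distribution F_Z, here a standard normal."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def inverse_flow_density(y, h, h_prime):
    """Change of variables for an inverse flow h(y) = z:
    f_Y(y) = f_Z(h(y)) * |h'(y)|."""
    return std_normal_pdf(h(y)) * np.abs(h_prime(y))

# Illustration: h = log turns a log-normal Y into a standard normal Z,
# so the resulting density must match the standard log-normal density.
y = np.linspace(0.1, 5.0, 200)
dens = inverse_flow_density(y, np.log, lambda v: 1.0 / v)
```

For h = log, the computed `dens` coincides with the log-normal density, which is the simplest example of a flow correcting a multiplicative structure.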
The key idea of TMs is that many well-known statistical regression models can be represented by a base distribution F_Z and some transformation h. Prominent examples include linear regression or the Cox proportional hazards model (Cox, 1972), which can both be seen as special cases of TMs (Hothorn et al., 2014). Various authors have noted the connection between autoregressive models and NFs (e.g., Papamakarios et al., 2019) and between TMs and NFs (e.g., Sick et al., 2021). Advantages of TMs and conditional TMs (CTMs) are their parsimony in terms of parameters, the interpretability of the input-output relationship, and existing theoretical results on epistemic uncertainty (Hothorn et al., 2018). This also led to various recent TM advancements both in the field of machine learning (see, e.g., Van Belle et al., 2011) and in deep learning (see, e.g., Baumann et al., 2021; Kook et al., 2021).
Transformation models
Parametrized transformation models as proposed by Hothorn et al. (2014, 2018) are likelihood-based approaches to estimate the CDF F_Y of Y. The main ingredient of TMs is a monotonic transformation function h that converts a simple error distribution F_Z into a more complex and appropriate CDF F_Y. Conditional TMs (CTMs) work analogously for the conditional distribution of Y given features x ∈ χ from feature space χ: CTMs learn h(y|x) from the data. A convenient parameterization of h for continuous Y are Bernstein polynomials (BSPs; Farouki, 2012) of order M (usually M ≤ 50). BSPs are motivated by the Bernstein approximation (Bernstein, 1912) with uniform convergence guarantees for M → ∞, while also being easily invertible and computationally attractive with only M + 1 parameters. BSPs further have easy and analytically accessible derivatives, which makes them a particularly interesting choice for the change of random variables. We denote the BSP basis by a_M : Ξ → R^{M+1} with sample space Ξ. The transformation h is then defined as h(y|x) = a_M(y)^⊤ ϑ(x) with feature-dependent basis coefficients ϑ. This can be seen as an evaluation of y based on a mixture of Beta densities f_{Be(κ,µ)} with different distribution parameters κ, µ and weights ϑ(x):

h(ỹ|x) = (M + 1)^{−1} Σ_{m=0}^{M} ϑ_m(x) f_{Be(m+1, M−m+1)}(ỹ),

where ỹ is a rescaled version of y ensuring ỹ ∈ [0, 1]. Restricting ϑ_m > ϑ_{m−1} for m = 1, . . . , M + 1 guarantees monotonicity of h and thus of the estimated CDF. Roughly speaking, BSPs of order M allow modeling polynomials of degree M in y.
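The BSP parameterization can be made concrete by evaluating the basis directly and combining it with monotonically increasing coefficients; a sketch under the assumption ỹ ∈ [0, 1] (function names and the random coefficients are ours, not the paper's):

```python
import numpy as np
from math import comb

def bernstein_basis(y, M):
    """Evaluate the Bernstein basis a_M(y) on [0, 1]; returns shape (len(y), M + 1)."""
    y = np.asarray(y, dtype=float)[:, None]
    m = np.arange(M + 1)[None, :]
    binom = np.array([comb(M, k) for k in range(M + 1)], dtype=float)
    return binom * y ** m * (1.0 - y) ** (M - m)

rng = np.random.default_rng(0)
M = 6
theta = np.cumsum(np.abs(rng.normal(size=M + 1)))   # theta_0 < theta_1 < ... < theta_M
yg = np.linspace(0.0, 1.0, 101)
h = bernstein_basis(yg, M) @ theta                  # h(y) = a_M(y)^T theta
```

With increasing coefficients, h is monotone in y, which is exactly the constraint that makes the estimated CDF valid.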
Model definition and interpretability
The transformation function h can include different data dependencies. One common choice (Hothorn, 2020; Baumann et al., 2021) is to split the transformation function into two parts,

h(y|x) = a(y)^⊤ ϑ(x) + β(x),

where a(y) is a pre-defined basis function such as the BSP basis (omitting M for readability in the following), ϑ : χ_ϑ → R^{M+1} is a conditional parameter function defined on χ_ϑ ⊆ χ, and β(x) models a feature-induced shift in the target distribution. The flexibility and interpretability of TMs stem from the parameterization

ϑ(x) = Σ_j Γ_j b_j(x),

where the matrix Γ_j ∈ R^{(M+1)×O_j}, O_j ≥ 1, subsumes all trainable parameters and represents the effect of the interaction between the basis functions in a and the chosen predictor terms b_j : χ_{b_j} → R^{O_j}, χ_{b_j} ⊆ χ. The predictor terms b_j have a role similar to base learners in boosting and represent simple learnable functions. For example, a predictor term can be the jth feature, b_j(x) = x_j, and Γ_j ∈ R^{(M+1)×1} describes the linear effect of this feature on the M + 1 basis coefficients, i.e., how the feature x_j relates to the density transformation from Z to Y|x. Other structured non-linear terms such as splines allow for interpretable lower-dimensional non-linear relationships. Various authors have also proposed neural network predictors to allow for potentially multidimensional feature effects or to incorporate unstructured data sources (see, e.g., Baumann et al., 2021; Kook et al., 2021). In a similar fashion, β(x) can be defined using various predictors.
Relating features and their effects to the basis coefficients in an additive fashion allows one to directly assess the impact of each feature on the transformation, and also whether changes in the feature merely shift the distribution in its location (β(x)) or whether the relationship also transforms other distribution characteristics such as variability or skewness (see, e.g., Baumann et al., 2021).
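The additive parameterization of ϑ(x) via the terms Γ_j b_j(x) described above can be sketched in a few lines of linear algebra (all shapes and the choice of plain linear terms b_j(x) = x_j are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 4, 8
x = rng.normal(size=(n, 2))                        # two hypothetical features
# One Gamma_j per predictor term b_j; here b_j(x) = x_j, so O_j = 1.
Gammas = [rng.normal(size=(M + 1, 1)) for _ in range(2)]
# theta(x) = sum_j Gamma_j b_j(x): one (M+1)-vector of basis coefficients
# per observation, assembled additively from the per-feature effects.
theta_x = sum(G @ x[:, j:j + 1].T for j, G in enumerate(Gammas))
```

Because the contributions enter additively, each column of Γ_j can be read as the partial effect of one predictor term on the basis coefficients, holding the others fixed.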
Relationship with autoregressive flows In the notation of AFs, h −1 (·) is known as transformer, a parameterized and bijective function. By the definition of (4), the transformer in the case of TMs is represented by the basis function a(·) and parameters ϑ. In AFs, these transformer parameters are learned by a conditioner, which in the case of TMs are the functions b j . In line with the assumptions made for AFs, these conditioners in TMs do not need to be bijective functions themselves.
AUTOREGRESSIVE TRANSFORMATIONS
Inspired by TMs and AFs, we propose autoregressive transformation models (ATMs). The basic idea is to use a parameter-free error distribution F_Z and transform this distribution in an interpretable fashion to obtain F_{Y|x}. One of the assumptions of TMs is the stochastic independence of observations, i.e., Y_i|x_i ⊥ Y_j|x_j, i ≠ j. When Y is a time series, this assumption clearly does not hold. In contrast, this assumption is not required for AFs.
Let t ∈ T ⊆ N_0 be a time index for the time series, p ∈ {1, . . . , t} a lag order, G a distribution, θ ∈ Θ a parameter with compact parameter space Θ ⊂ R^v, and F_s, s ∈ T, s < t, the filtration on the underlying probability space. Assume that the joint distribution of Y_t, Y_{t−1}, . . . , Y_1 possesses the Markov property of order p, i.e., the joint distribution, expressed through its absolutely continuous density f, can be rewritten as the product of its conditionals with p lags:

f(y_t, y_{t−1}, . . . , y_1) = Π_{s=1}^{t} f(y_s | y_{s−1}, . . . , y_{s−p}).

The time-dependency of x is omitted for better readability here and in the following. Given this autoregressive structure, we propose a time-dependent transformation h_t that extends (C)TMs to account for filtration and time-varying feature information.
Definition 1 Autoregressive Transformation Models Let h t , t ∈ T , be a time-dependent monotonic transformation function and F Z the parameter-free error distribution as in Definition 3 in the Supplementary Material.
We define autoregressive transformation models via

F_{Y_t | F_{t−1}, x}(y_t) = F_Z(h_t(y_t | F_{t−1}, x)).

This can be seen as the natural extension of (2) for time series data with autoregressive property and time-varying transformation function h_t. In other words, (8) says that after transforming y_t with h_t, its conditional distribution follows the error distribution F_Z, or vice versa, a random variable Z ∼ F_Z can be transformed to follow the distribution of Y_t|x using h_t^{−1}.
Relationship with autoregressive models and autoregressive flows Autoregressive models (AMs; Bengio and Bengio, 1999) and AFs both rely on the factorization of the joint distribution into conditionals as in (7). Using the CDF of each conditional in (7) as transformer in an AF, we obtain the class of AMs (Papamakarios et al., 2019). AMs and ATMs are thus both (inverse) flows using a single transformation, but with different transformers and, as we will outline in Section 3.2, also with different conditioners.
Likelihood-based estimation
Based on (7), (8) and the change of variables theorem, the likelihood contribution of the tth observation y_t in ATMs is given by

ℓ_t(θ) = f_Z(h_t(y_t | F_{t−1}, x)) · |∂h_t(y_t | F_{t−1}, x) / ∂y_t|,

and the full likelihood for T observations thus by

L(θ) = Π_{t=1}^{T} ℓ_t(θ),

where y_0 is a known finite starting value and F_0 only contains y_0. Based on (9), we define the loss of all model parameters θ as the negative log-likelihood and use (10) to train the model.

Figure 2: Illustration of a transformation process induced by the structural assumption of Section 3.2. The original data history F_{t−1} (red) is transformed into a base distribution (orange) using the transformation h_{1t} (solid blue arrow) and then further transformed using h_{2t} (dashed green arrow) to match the transformed distribution of the current time point t.
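For a toy AT(1)-style model with F_Z standard normal, this negative log-likelihood can be written out directly; only ∂h_t/∂y_t enters the Jacobian since the lags belong to the filtration. A sketch under the illustrative assumption h_1 = log (our choice, used here only because it makes the multiplicative example below exact):

```python
import numpy as np

def at1_nll(y, phi, h1=np.log, h1_prime=lambda v: 1.0 / v):
    """NLL of a toy model z_t = h1(y_t) - phi * h1(y_{t-1}) with z_t ~ N(0, 1):
    sum_t [0.5 * z_t^2 + 0.5 * log(2 pi) - log |h1'(y_t)|]."""
    z = h1(y[1:]) - phi * h1(y[:-1])
    jac = np.log(np.abs(h1_prime(y[1:])))
    return np.sum(0.5 * z ** 2 + 0.5 * np.log(2.0 * np.pi) - jac)

# Multiplicative AR(1) data: log(y_t) follows an AR(1) with coefficient 0.5.
rng = np.random.default_rng(2)
T = 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
y = np.exp(x)
```

Evaluating `at1_nll(y, phi)` over a grid of `phi` values shows the loss minimized near the true coefficient 0.5, which is the behavior the likelihood in (9) and (10) formalizes.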
As for AFs, many special cases can be defined from the above definition and more concrete structural assumptions for h t make ATMs an interesting alternative to other methods in practice. We will elaborate on meaningful structural assumptions in the following.
Structural assumptions
In CTMs, the transformation function h is usually decomposed as h(y|x) = h_1(y|x) + h_2(x), where h_1 is a function depending on y and h_2 is a distribution-shift function depending only on x. For time-varying transformations h_t, our fundamental idea is that the outcome y_t shares the same transformation with its filtration F_{t−1}, i.e., the lags Y_t = (y_{t−1}, . . . , y_{t−p}). In other words, a transformation applied to the outcome must be applied equally to its predecessors in time to make sense of the autoregressive structural assumption. An appropriate transformation structure can thus be described by

h_t(y_t | F_{t−1}, x) = λ_{1t} + λ_{2t} with λ_{1t} = h_{1t}(y_t | x) and λ_{2t} = h_{2t}((h_{1t} ∘ Y_t) | x)

for t ∈ T, where ∘ indicates the element-wise application of h_{1t} to all lags in Y_t. In other words, ATMs first apply the same transformation h_{1t} to y_t and individually to y_{t−1}, y_{t−2}, . . ., and then further apply a transformation function h_{2t} to shift the distribution based on the transformed filtration. While the additivity assumption on λ_{1t} and λ_{2t} seems restrictive at first glance, the imposed relationship between y_t and Y_t only needs to hold in the transformed probability space. For example, h_{1t} can compensate for a multiplicative autoregressive effect between the filtration and y_t by implicitly learning a log-transformation (cf. Section 6.1). At the same time, the additivity assumption offers a nice interpretation of the model, also depicted in Figure 2: after transforming y_t and Y_t, (11) implies that training an ATM is equal to fitting a regression model of the form λ_{1t} = λ_{2t} + ε with additive error term ε ∼ F_Z (cf. Proposition 1 in the Supplementary Material). This also helps explain why only λ_{2t} depends on F_{t−1}: if λ_{1t} also involved F_{t−1}, ATMs would effectively model the joint distribution of the current time point and the filtration, which in turn contradicts the Markov assumption (7).
Specifying h_{1t} too flexibly clearly results in overfitting. As for CTMs, we use a feature-driven basis function representation h_{1t}(y_t|x) = a(y_t)^⊤ ϑ(x) with BSPs a and specify their weights as in (5). The additional transformation h_{2t} ensures enough flexibility for the relationship between the transformed response and the transformed filtration, e.g., by using a non-linear model or a neural network. An interesting special case arises for linear transformations in h_{2t}, which we elaborate on in more detail in Section 4.
Interpretability The three main properties that make ATMs interpretable are 1) their additive predictor structure as outlined in (5); 2) the clear relationship between features and the outcome through the BSP basis; and 3) the structural assumption of ATMs as given in (11). As for (generalized) linear models, the additivity assumption in the predictor allows interpreting feature influences through their partial effect ceteris paribus. On the other hand, the choices of M and F_Z influence the relationship of features and outcome by inducing different types of models. A normal distribution assumption for F_Z and M = 1 turn ATMs into an additive regression model with Gaussian error distribution (see also Section 4). For M > 1, features in h_1 also influence higher moments of Y|x and allow more flexibility in modeling F_{Y|x}. For example, a (smooth) monotonically increasing feature effect induces rising moments of Y|x with increasing feature values. Other choices for F_Z, such as the logistic distribution, also allow for easy interpretation of feature effects (e.g., on the log-odds ratio scale; see Kook et al., 2021). Finally, the structural assumption of ATMs enforces that the two previous interpretability aspects are consistent over time. We provide additional explanation as well as an illustrative example in the Supplementary Material and refer to Hothorn et al. (2014) for more details on the interpretability of CTMs.
Implementation In order to allow for a flexible choice of transformation functions and predictors b j , we propose to implement ATMs in a neural network. While this allows for complex model definitions, there are also several computational advantages. In a network, weight sharing for h 1t across time points is straightforward to implement and common optimization routines such as Adam (Kingma and Ba, 2014) prove to work well for ATMs despite the monotonicity constraints required for the BSP basis. Furthermore, as basis evaluations for a large number of outcome lags in F t−1 can be computationally expensive and add M additional columns per lag to the feature matrix, an additional advantage is the dynamic nature of minibatch training. It allows to evaluate the bases only during training and separately in each mini-batch.
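In a network implementation, the monotonicity constraint on the BSP coefficients is typically enforced by construction rather than by constrained optimization. One common sketch (the softplus reparameterization is our assumption here, not a detail stated in the paper) maps unconstrained weights to increasing coefficients:

```python
import numpy as np

def monotone_theta(raw):
    """Map unconstrained parameters raw in R^{M+1} to increasing BSP coefficients:
    theta_0 = raw_0, theta_m = theta_{m-1} + softplus(raw_m) for m >= 1."""
    softplus = np.log1p(np.exp(raw[1:]))           # strictly positive increments
    return np.concatenate([raw[:1], raw[0] + np.cumsum(softplus)])
```

Because the increments are strictly positive, any unconstrained output of a network layer yields a valid monotone transformation, so standard optimizers such as Adam can be used without projection steps.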
AT(p) TRANSFORMATIONS
A particularly interesting special case of ATMs is the AT(p) model. This model class is a direct extension of the well-known autoregressive model of order p (short: AR(p) model; Shumway et al., 2000) to transformation models.
Definition 2 AT(p) transformations We define AT(p) transformations, a special class of ATMs, by setting h_{1t}(y_t|x) = a(y_t)^⊤ ϑ(x) and h_{2t}(F_{t−1}, x) = Σ_{j=1}^{p} φ_j h_{1t}(y_{t−j}) + r(x), i.e., an autoregressive shift term with optional exogenous remainder term r(x).
Model Details
The AT(p) model is a very powerful and interesting model class in itself, as it recovers the classical time series AR(p) model when setting M = 1, ϑ(x) ≡ ϑ and r(x) ≡ 0 (see Proposition 2 in the Supplementary Material for a proof of equivalence). But it can also be extended to more flexible autoregressive models in various directions. We can increase M to obtain a more flexible density, allowing deviations from the error distribution assumption F_Z, e.g., to relax the normal distribution assumption of AR models. Alternatively, incorporating exogenous effects into h_{1t} allows estimating the density in a data-driven manner, or introducing exogenous shifts in the time series using features x in r(x). ATMs can also recover well-known transformed autoregressive models such as the multiplicative autoregressive model (Wong and Li, 2000), as demonstrated in Section 6.1. When h_{1t} is specified flexibly enough, an AT(p) model will, e.g., learn the log-transformation required to turn a multiplicative autoregressive time series into an additive autoregressive time series on the log scale. In general, this allows the user to learn autoregressive models without having to find an appropriate transformation before applying the time series model. This means that the uncertainty about pre-processing steps (e.g., a Box-Cox transformation; Sakia, 1992) is incorporated into the model estimation, making parts of the pre-processing obsolete for the modeler and its uncertainty automatically available.
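The multiplicative case can be mimicked in a few lines: simulate an AR(1), exponentiate it, and note that a least-squares fit on the (here known) log scale recovers the coefficient; the point of AT(p) models is that h_{1t} learns this log transformation from the data instead of it being supplied by hand. A minimal sketch (ours, not the paper's experiment code):

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi = 2000, 0.6
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + 0.5 * rng.normal()
y_obs = np.exp(x)          # the model only ever sees the multiplicative series

# Oracle step an AT(p) model performs implicitly: transform back, then fit linearly.
z = np.log(y_obs)
phi_hat = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)
```

With the transformation supplied, the least-squares estimate lands close to the true coefficient; an AT(p) model reaches a comparable fit while estimating the transformation itself.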
Non-linear extensions of AT(p) models can be constructed by modeling Y_t in h_{2t} non-linearly, allowing ATMs to resemble model classes such as non-linear AR models with exogenous terms (e.g., Lin et al., 1996).
Asymptotic theory and parametric inference
As elaborated in the introduction, an important yet often neglected aspect of probabilistic forecasts is the epistemic uncertainty, more specifically the PU. Based on general asymptotic theory for time series models (Ling and McAleer, 2010), we derive theoretical PU properties for AT(p) models in this section. The corresponding proofs follow directly from theorems in Ling and McAleer (2010) together with common assumptions for time series modeling described in the Supplementary Material. In particular, we assume that time series are strictly stationary and ergodic, a common assumption in (deep) time series models. The other assumptions made in Ling and McAleer (2010) can be directly transferred to our use case since AT(p) models and their non-linear extensions are fully parameterized time series models with parameter estimator θ̂_T = arg min_{θ∈Θ} −ℓ(θ) based on Maximum-Likelihood estimation (MLE).
As stated in Hothorn et al. (2018), Assumption 1(i) holds if a is not arbitrarily ill-posed. In practice, both a finite y_0 and Assumption 1(i) are realistic assumptions. Two additional and also rather weak assumptions (1(ii)-(iii)) allow deriving the asymptotic normal distribution for θ̂.
Theorem 2 (Asymptotic Normality) If y_0 is finite and Assumptions 1 hold, then for T → ∞,

√T (θ̂_T − θ*) →_d N(0, I^{−1} J(θ*) I^{−1}).

Based on the same assumptions, a consistent estimator for the covariance can be derived.
Theorem 3 (Consistent Covariance Estimator) For finite y_0 and under Assumptions 1,

Î = −T^{−1} Σ_{t=1}^{T} ∂² log ℓ_t(θ̂_T) / ∂θ ∂θ^⊤ and Ĵ = T^{−1} Σ_{t=1}^{T} (∂ log ℓ_t(θ̂_T) / ∂θ)(∂ log ℓ_t(θ̂_T) / ∂θ)^⊤

are consistent estimators for I and J, respectively.
Using the above results, we can derive statistically valid UQ. An example is depicted in Figure 3. Since h is parameterized through θ, it is also possible to derive the structural uncertainty of ATMs based on the PU. More specifically, h can be represented using a linear transformation of θ, h = Υθ, implying the (co-)variance Υ I^{−1} J(θ*) I^{−1} Υ^⊤ for ĥ.
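The sandwich form I^{−1} J I^{−1} can be computed numerically from per-observation scores and Hessians of the log-likelihood. A generic sketch, using Gaussian mean estimation as a stand-in example rather than an actual AT(p) fit:

```python
import numpy as np

def sandwich_cov(scores, hessians):
    """Covariance of the MLE via I^{-1} J I^{-1} / T, with
    J the averaged outer product of scores and I the negative mean Hessian."""
    T = scores.shape[0]
    J = scores.T @ scores / T
    I = -hessians.mean(axis=0)
    I_inv = np.linalg.inv(I)
    return I_inv @ J @ I_inv / T

# Stand-in example: y_t ~ N(mu, 1) with MLE mu_hat = mean(y),
# where the sandwich covariance reduces to roughly 1 / T.
rng = np.random.default_rng(4)
T = 5000
y = rng.normal(size=T)
mu_hat = y.mean()
scores = (y - mu_hat)[:, None]          # per-observation score d/dmu log-lik
hessians = -np.ones((T, 1, 1))          # per-observation Hessian d^2/dmu^2 log-lik
cov = sandwich_cov(scores, hessians)
```

For a correctly specified model, I and J coincide asymptotically and the sandwich collapses to the usual inverse Fisher information, as in this Gaussian example.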
Practical application ATMs define the distribution F_{Y_t | F_{t−1}, x}(y_t) = F_Z(h_t(y_t | F_{t−1}, x)), where h_t is parameterized by θ. In order to assess the PU in the estimated density as, e.g., visualized in Figures 1 and 3, we propose to use a parametric bootstrap, described in detail in Supplementary Material C.
RELATED LITERATURE
As outlined in Sections 2.2 and 3, NFs and AFs are directly linked to ATMs, which are motivated by CTMs. Next to flow-based models that focus on a generative task, various approaches have been proposed for CDE (Dinh et al., 2017; Papamakarios et al., 2019) and combined with sequential modeling, such as masked autoregressive flows (Papamakarios et al., 2017).
EXPERIMENTS
We will first investigate theoretical properties of ATMs as well as their epistemic uncertainty in simulation studies. We then compare our approach against other state-of-the-art methods described in the previous section on probabilistic forecasting tasks in a benchmark study. Additional results can be found in the Supplementary Material D.

Table 1: Average and standard deviation (brackets) of the MSE (multiplied by 100 for better readability) between estimated and true coefficients in an AR(p) model using our approach on the tampered data (bottom row) and the corresponding oracle based on the true data (Oracle).

Equivalence and consistency We first demonstrate Theorem 1 and Proposition 2 in the Supplementary Material, i.e., that for a growing number of observations AT(p) models can recover AR(p) models when equally specified. We therefore simulate various AR models using lags p = 1, 2, 4 and n = 200, 400, 800 observations and estimate both a classical AR(p) model and an AT(p) model for 20 replications. For the latter, we use the mapping derived in Proposition 2 to obtain the estimated AR coefficients from the AT(p) model. In Table 3 in the Supplementary Material D we compare both models based on their estimated coefficients against the true values.

Flexibility Next, we demonstrate how the AT(p) model with M = 30 can recover a multiplicative autoregressive process. We therefore generate data using an AR model with different lags p and numbers of observations n as before. This time, however, we provide the AT(p) model only with the exponentiated data ỹ_t = exp(y_t). This means the model needs to learn the inverse transformation back to y_t itself. Despite having to estimate the log-transformation in addition, the AT(p) model recovers the true model well and, for larger n, is even competitive with the ground truth model (Oracle) that has access to the original non-exponentiated data (cf. Table 1 for an excerpt of the results).
Simulation Study
Epistemic Uncertainty In this experiment we validate the theoretical results proposed in Section 4.2. As in the previous experiment, we try to learn the log-transformed AR model using an AT(p = 3) model with coefficients (0.3, 0.2, 0.1). After estimation, we check the empirical distributions of θ̂ and ĥ against their respective theoretical ones in 1000 simulation replications. Figure 4 depicts a quantile-quantile plot of the empirical and theoretical distributions for both h and all 4 parameters (intercept and three lag coefficients). The empirical distributions are well aligned with their theoretical distributions as derived in Section 4.2, confirming our theoretical results.
Benchmarks
Finally, we compare our approach to conditional NFs for (multivariate) time series (MCNF; Rasul et al., 2021) as well as the different state-of-the-art forecasting methods previously introduced (DeepAR, DeepFactor, DeepState, Prophet) and an ARIMA baseline. We compare these approaches on the commonly used benchmark datasets electricity and traffic (Lai et al., 2018) as well as m4 and tour. A short summary of these datasets can be found in Table 5 in the Supplementary Material. For each neural network-based algorithm, a grid search is used on a rolling window to find the optimal set of hyperparameters. For electricity and traffic we use both a 24-hour and a 72-hour forecast horizon. For m4 and tour the test sets are already pre-defined with 48-hour and 24-month forecast windows, respectively. For each proposed method and dataset, we report continuous ranked probability scores (CRPS; Gneiting et al., 2007; Jordan et al., 2019) and average results across time series and time points. Our model uses only linear effects of the day, hour, month and/or household for ϑ. These effects change higher moments of the distribution in an additive manner, allowing to relate individual influences of features to the outcome distribution. The (transformed) lags of the outcome only change the distribution's location. Further details can be found in the Supplementary Material D.
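CRPS can be estimated directly from forecast samples via the standard identity CRPS(F, y) = E|X − y| − ½ E|X − X′|; a self-contained sketch (our helper, not the evaluation code used in the paper):

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| for forecast samples X."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# A degenerate point forecast (all samples equal) reduces CRPS to the absolute error.
score = crps_from_samples(np.full(100, 2.0), 0.0)
```

CRPS rewards forecasts that are both sharp and well calibrated: a narrow sample cloud centered on the truth scores lower than a wide one, which is why averaging it across series and time points is a common benchmark metric.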
Results Table 2 shows the results of the comparison. Interpretability naturally comes at the cost of decreased prediction performance. DeepAR outperforms all models in most cases. However, our approach often yields competitive and consistently good results while its inner workings are straightforward to understand.
CONCLUSION AND OUTLOOK
We have proposed ATMs, a flexible and comprehensible model class combining and extending various existing modeling approaches. ATMs allow for expressive probabilistic forecasts using a base distribution and a single transformation modeled by Bernstein polynomials. Additionally, a parametric inference paradigm based on MLE allows for epistemic UQ. ATMs can be based on interpretable additive predictors or deep neural networks, empirically and theoretically recover well-known models, and demonstrate competitive performance on real-world datasets.
ATMs are the first adaptation of (deep) transformation models to time series applications. Although our approach can easily be extended to incorporate deep architectures, the PU derivations then no longer hold (e.g., because uniqueness of θ* cannot be guaranteed). The derived results are still valuable, as they will help derive PU for more complex models in the future and further advance the presented methods.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
A RELATED LITERATURE
Conditional models describe the conditional distribution F Y |x of an outcome Y conditional on observed features x (see, e.g., Jordan et al., 2002). Instead of modeling the complete distribution of Y |x, many approaches focus on modeling a single characteristic of this conditional distribution. Predictive models, for example, often focus on predicting the average outcome value, i.e., the mean of the conditional distribution. Quantile regression (Koenker, 2005) can model a certain quantile of Y |x and is thus more flexible in explaining the conditional distribution and its aleatoric uncertainty -the non-deterministic nature of the input-output relationship (Hüllermeier and Waegeman, 2021). Various approaches in machine and deep learning allow for an even richer explanation by, e.g., directly modeling the distribution's density f Y |x and thus the whole distribution F Y |x . Examples include mixture density networks (Bishop, 1994), survival models (Bender et al., 2021) or, in general, probabilistic machine learning approaches such as Gaussian processes or graphical models (Murphy, 2012). In statistics and econometrics, similar approaches exist, which can be broadly characterized as distribution regression (DR) approaches (Chernozhukov et al., 2013;Foresi and Peracchi, 1995;Wu and Tian, 2013). These approaches can also be regarded as conditional density estimation (CDE) models. DR or CDE is a challenging task that requires balancing the representational capacity of the model and its risk for overfitting. We refer to Rothfuss et al. (2019) for a recent work of best practices in CDE.
B DEFINITIONS, ASSUMPTIONS, THEORETICAL ANALYSIS
The following definition of the error distribution follows Hothorn et al. (2018).
Definition 3 Error Distributions
Let Z : Ω → R be a U−B measurable function from (Ω, U) to the Euclidean space with Borel σ-algebra B, with absolutely continuous distribution P_Z = f_Z µ_L on the probability space (R, B, P_Z), where µ_L denotes the Lebesgue measure. We define F_Z and F_Z^{−1} as the corresponding distribution and quantile functions, and assume F_Z(−∞) = 0, F_Z(∞) = 1, and 0 < f_Z(z) < ∞ for all z ∈ R, with log-concave, twice-differentiable density f_Z with bounded first and second derivatives.
Proposition 1 (Interpretation of (11)) The ATM as defined in (8) and further specified in (11) can be seen as an additive regression model with outcome h_{1t}(y_t), predictor h_{2t}((h_{1t} ∘ Y_t)|x) and error term ε ∼ F_Z.
Proposition 2 (Equivalence of AR(p) and AT(p) models) An autoregressive model of order p (AR(p)) with independent white noise following the distribution F Z in the location-scale family is equivalent to an AT(p) model for M = 1, ϑ(x) ≡ ϑ, r(x) ≡ 0 and error distribution F Z .
The transformation of the AT(p) model thus follows from Definition 2 with M = 1, i.e., h_{1t}(y) = θ̃_0 + θ̃_1 y and h_{2t}(F_{t−1}) = Σ_{j=1}^{p} φ̃_j h_{1t}(y_{t−j}).
The AR(p) model is given by

y_t = ϕ_0 + Σ_{j=1}^{p} ϕ_j y_{t−j} + σ z_t, z_t ∼ F_Z.

The equivalence of (13), in combination with (12) and (14), is then given when setting θ̃_0 = −ϕ_0, φ̃_j = −ϕ_j for all j ∈ {1, . . . , p} and σ = θ̃_1^{−1}. Since both models find their parameters using Maximum Likelihood and θ̃_1 > 0 holds (as required for σ) by the monotonicity restriction on the BSP coefficients, the models are identical up to different parameterization.
C PARAMETRIC UNCERTAINTY AND PRACTICAL APPLICATION
To assess the PU included in the estimated density, we propose to use a parametric Bootstrap (similar to the one suggested in Hothorn et al., 2018) that is based on the following steps:
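A minimal sketch of such a parametric bootstrap for the density PU (all numbers, the linear h(y) = θ_0 + θ_1 y, and the grid are hypothetical placeholders standing in for an actual MLE fit and its estimated covariance):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical MLE output: estimate and covariance of theta = (theta_0, theta_1).
theta_hat = np.array([0.0, 1.0])
cov_hat = np.diag([0.01, 0.01])

B = 200
draws = rng.multivariate_normal(theta_hat, cov_hat, size=B)   # theta^(b) ~ N(theta_hat, Cov)
y_grid = np.linspace(-3.0, 3.0, 50)

# Density implied by each draw: f_Z(h(y)) * |h'(y)| with h(y) = t0 + t1 * y, F_Z = N(0, 1).
dens = np.array([
    np.exp(-0.5 * (t0 + t1 * y_grid) ** 2) / np.sqrt(2.0 * np.pi) * np.abs(t1)
    for t0, t1 in draws
])
# Pointwise density confidence band (cf. the shaded region in Figure 1).
lower, upper = np.quantile(dens, [0.025, 0.975], axis=0)
```

Each bootstrap draw yields a full density estimate, and pointwise quantiles across draws give the density confidence interval visualized in Figures 1 and 3.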
D.1 Simulations
In this subsection, we describe the details of the data generating process used in Figure 1 (Section D.1.1) and provide results on experiments for the equivalence and consistency paragraph of Section 6.1 in Section D.1.2.
D.1.1 Data Generating Process Toy Example
For Figure 1 we simulate T = 1000 time points y_1, . . . , y_T that exhibit two modes as follows: 1. Set y_0 = 0; 2. Define a shift of 2 and sample x_1, . . . , x_T from {−2, 2} with equal probability; 3. Define an autoregressive coefficient. When providing the model with the marginal distribution of y_t and defining x_t as a latent, unobserved variable, y_t will exhibit two modes centered around ±2.
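Since the autoregressive coefficient's value is elided in the text, a minimal simulation of this two-mode process (assuming an AR(1) disturbance with coefficient 0.5 and small innovation noise, purely for illustration) looks as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
T, shift, phi = 1000, 2.0, 0.5  # phi is an assumed value; the paper elides it

# Latent regime indicator x_t in {-shift, +shift} with equal probability
x = rng.choice([-shift, shift], size=T)

# y_t = x_t + z_t with a small AR(1) disturbance z_t (started at z = 0)
y = np.empty(T)
z = 0.0
for t in range(T):
    z = phi * z + 0.1 * rng.standard_normal()
    y[t] = x[t] + z
```

Marginally, y_t then concentrates around ±2, giving the two modes of Figure 1.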
D.1.2 AR(p) comparison
The data generating process for the simulation of Section 6.1 is an AR model with the first p coefficients 0.4, 0.2, 0.1, 0.05, 0.025. A standard implementation for the AR model was used. For the AT model we use the implementation provided in Rügamer et al. (2021) using 2500 epochs, a batch size of 50, and early stopping based on 10% of the training data.

Table 4: Mean and standard deviation (brackets) of the mean squared error (×10^2 for better readability) between estimated and true coefficients in an AR(p) model using our approach on the tampered data (bottom row) and the corresponding oracle based on the true data (Oracle).

Table 5 summarizes the characteristics of the data sets used. We used MxNet and the gluon-ts (Alexandrov et al., 2020) implementation of DeepAR, DeepState and DeepFactor for our comparison as well as the conditional flow implementation given in pytorch-ts (Rasul et al., 2021). For ATMs we extended the software deepregression (Rügamer et al., 2021). For ARIMA, we use the forecast R package (Hyndman et al., 2021) and for Prophet the prophet R package (Taylor and Letham, 2021).
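As a rough stand-in for this comparison (an ordinary least-squares AR(5) fit on simulated Gaussian data, not the paper's implementations), the coefficient MSE of Table 4 can be reproduced in spirit as:

```python
import numpy as np

rng = np.random.default_rng(2)
true_coef = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
T, p = 20000, 5

# Simulate the AR(5) process with standard normal innovations
y = np.zeros(T)
for t in range(p, T):
    y[t] = true_coef @ y[t - p:t][::-1] + rng.standard_normal()

# Conditional least-squares AR(p) fit (equals Gaussian conditional MLE)
X = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])  # lags 1..p
coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
mse = np.mean((coef - true_coef) ** 2)
```

With this sample size the estimated coefficients are close to the truth, so `mse` is small.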
D.2.3 Hyperparameter Setup
For DeepAR, DeepState and DeepFactor we tuned batch size, context length and epochs on a grid in line with the recommendations (see, e.g., the DeepAR Documentation) for each forecasting horizon and dataset. For ATMs we used the same lags as for DeepAR and applied a linear or neural network on top of the transformed lags as h_2t. We performed a grid search over other additive predictor components in h_1t and h_2t with possible options: 1) intercept only; 2) linear effect for the time series dimension indicator (i.e., the household); 3) linear hour effect and 2); 4) linear day effect and 2); 5) linear day and 3); 6) linear indicator-day effect; 7) linear indicator-hour effect; 8) linear indicator-month effect. We further searched over the number of BSPs M ∈ {10, 20, 30}. Network training was done using 50 epochs with early stopping and a batch size of 64 or 128. For MCNF we used an LSTM as recurrent neural network, trained the network with batch size 64 for 1, 20 or 40 epochs as suggested in Rasul et al. (2021), and further compared the performance of 1, 3 or 5 RealNVP flows. For ARIMA we used the auto.arima implementation (Hyndman et al., 2021) and performed a stepwise search via the AICc with different starting values for the order of the AR and the MA term. For the AR term, possible parameter values were 6, 24, 72 for hourly and monthly data when a 24 hour, 72 hour or 24 month forecast had to be performed, and 6 and 24 when a 48 hour forecast was performed. The search space for the MA term started either with 0 or 3. We chose the ARIMA model with the lowest AICc on the validation set. For Prophet (Taylor and Letham, 2021), we tuned the parameter modulating the flexibility of the automatic changepoint selection (0.001, 0.05, 0.5), the parameter modulating the strength of the seasonality model (0.01, 0.5, 10) and the number of Fourier components (3, 5) on a grid in line with the recommendations (see, e.g., the Prophet Documentation).
As a competitor to Prophet's automatic seasonality detection, we manually added seasonal components in the additive predictor, where the corresponding periodicity was determined based on the estimated spectral density for the frequency domain of the observed time series. For the exchange data, we did not tune the ARIMA and the Prophet model due to the small sample size compared to the models' complexity.
For the exchange data, the ARIMA model for the test data was found by choosing the model with the lowest in-sample AICc after starting with AR order 0 or 3 and with MA order 0 or 3. For the Prophet model, we took the default values.
For all models, we evaluate the CRPS for each hyperparameter set on a separate validation set which has the same size as the corresponding test set. The set of hyperparameters with the lowest CRPS on the validation set is finally used for the prediction on the test set. The CRPSs resulting from the prediction on the test set are the ones reported in Table 2. For multiple forecasting windows, we evaluate the (same) final model multiple times (over each window) and average the results.
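The CRPS used for this selection can be estimated from predictive samples via the standard identity CRPS(F, y) = E|X − y| − ½ E|X − X′|; a small sketch (not the gluon-ts implementation):

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(3)
# A sharper predictive distribution scores better at the realized value 0.0
sharp = crps_from_samples(rng.normal(0.0, 0.1, 400), 0.0)
wide = crps_from_samples(rng.normal(0.0, 2.0, 400), 0.0)
```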
Polynomial Norms
In this paper, we study polynomial norms, i.e. norms that are the $d^{\text{th}}$ root of a degree-$d$ homogeneous polynomial $f$. We first show that a necessary and sufficient condition for $f^{1/d}$ to be a norm is for $f$ to be strictly convex, or equivalently, convex and positive definite. Though not all norms come from $d^{\text{th}}$ roots of polynomials, we prove that any norm can be approximated arbitrarily well by a polynomial norm. We then investigate the computational problem of testing whether a form gives a polynomial norm. We show that this problem is strongly NP-hard already when the degree of the form is 4, but can always be answered by testing feasibility of a semidefinite program (of possibly large size). We further study the problem of optimizing over the set of polynomial norms using semidefinite programming. To do this, we introduce the notion of r-sos-convexity and extend a result of Reznick on sum of squares representation of positive definite forms to positive definite biforms. We conclude with some applications of polynomial norms to statistics and dynamical systems.
Introduction
A function f : R^n → R is a norm if it satisfies the following three properties: (i) positive definiteness: f(x) > 0, ∀x ≠ 0, and f(0) = 0; (ii) homogeneity: f(λx) = |λ| f(x), ∀x ∈ R^n, λ ∈ R; (iii) triangle inequality: f(x + y) ≤ f(x) + f(y), ∀x, y ∈ R^n.
Some well-known examples of norms include the 1-norm, f(x) = Σ_i |x_i|, the 2-norm, f(x) = (Σ_i x_i²)^{1/2}, and the ∞-norm, f(x) = max_i |x_i|. Our focus throughout this paper is on norms that can be derived from multivariate polynomials. More specifically, we are interested in establishing conditions under which the d th root of a homogeneous polynomial of degree d is a norm, where d is an even number. We refer to the norm obtained when these conditions are met as a polynomial norm. It is easy to see why we restrict ourselves to d th roots of degree-d homogeneous polynomials. Indeed, nonhomogeneous polynomials cannot hope to satisfy the homogeneity condition, and homogeneous polynomials of degree d > 1 are not 1-homogeneous unless we take their d th root. The question of when the square root of a homogeneous quadratic polynomial is a norm (i.e., when d = 2) has a well-known answer (see, e.g., [14, Appendix A]): a function f(x) = x^T Qx is a norm if and only if the symmetric n×n matrix Q is positive definite. In the particular case where Q is the identity matrix, one recovers the 2-norm. Positive definiteness of Q can be checked in polynomial time using for example Sylvester's criterion (positivity of the n leading principal minors of Q). This means that testing whether the square root of a quadratic form is a norm can be done in polynomial time. A similar characterization in terms of conditions on the coefficients is not known for polynomial norms generated by forms of degree greater than 2. In particular, it is not known whether one can efficiently test membership or optimize over the set of polynomial norms.
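The quadratic case can be checked mechanically: f(x) = (x^T Q x)^{1/2} is a norm iff Q is symmetric positive definite, which a Cholesky factorization detects (an equally valid alternative to the Sylvester's criterion mentioned above):

```python
import numpy as np

def is_quadratic_norm(Q):
    """Return True iff f(x) = sqrt(x^T Q x) defines a norm, i.e. Q is
    symmetric positive definite (Cholesky succeeds exactly for PD matrices)."""
    Q = np.asarray(Q, dtype=float)
    if not np.allclose(Q, Q.T):
        return False
    try:
        np.linalg.cholesky(Q)
        return True
    except np.linalg.LinAlgError:
        return False
```

For Q = I this recovers the 2-norm; an indefinite Q such as [[1, 2], [2, 1]] is rejected.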
Outline and contributions. In this paper, we study polynomial norms from a computational perspective. In Section 2, we give two different necessary and sufficient conditions under which the d th root of a degree-d form f will be a polynomial norm: namely, that f be strictly convex (Theorem 2.2), or (equivalently) that f be convex and positive definite (Theorem 2.1). Section 3 investigates the relationship between general norms and polynomial norms: while many norms are polynomial norms (including all p-norms with p even), some norms are not (consider, e.g., the 1-norm). We show, however, that any norm can be approximated to arbitrary precision by a polynomial norm (Theorem 3.1). In Section 4, we move on to complexity results and show that simply testing whether the 4 th root of a quartic form is a norm is strongly NP-hard (Theorem 4.1). We then provide a semidefinite programming-based test for checking whether the d th root of a degree d form is a norm (Theorem 4.4) and a semidefinite programming-based hierarchy to optimize over a subset of the set of polynomial norms (Theorem 4.20). The latter is done by introducing the concept of r-sum of squares-convexity (see Definition 4.6). We show that any form with a positive definite Hessian is r-sos-convex for some value of r, and present a lower bound on that value (Theorem 4.7). We also show that the level r of the semidefinite programming hierarchy cannot be bounded as a function of the number of variables and the degree only (Theorem 4.18). Finally, we cover a few applications of polynomial norms in statistics and dynamical systems in Section 5. In Section 5.1, we compute approximations of two different types of norms, polytopic gauge norms and p-norms with p noneven, using polynomial norms. The techniques described in this section can be applied to norm regression.
In Section 5.2, we use polynomial norms to prove stability of a switched linear system, a task which is equivalent to computing an upper bound on the joint spectral radius of a family of matrices.
Two equivalent characterizations of polynomial norms
We start this section with two theorems that provide conditions under which the d th root of a degree-d form is a norm. These will be useful in Section 4 to establish semidefinite programming-based approximations of polynomial norms. Note that throughout this paper, d is taken to be an even positive integer.

Theorem 2.1. The d th root f^{1/d} of a degree-d form f is a norm if and only if f is convex and positive definite.

Proof. If f^{1/d} is a norm, then f^{1/d} is positive definite, and so is f. Furthermore, any norm is convex, and the d th power of a nonnegative convex function remains convex.
Assume now that f is convex and positive definite. We show that f^{1/d} is a norm. Positivity and homogeneity are immediate. It remains to prove the triangle inequality. Let g := f^{1/d}. Denote by S_f and S_g the 1-sublevel sets of f and g respectively. It is clear that S_f = S_g, and as f is convex, S_f is convex and so is S_g. Let x, y ∈ R^n. We have that x/g(x) ∈ S_g and y/g(y) ∈ S_g. From convexity of S_g,

g( g(x)/(g(x) + g(y)) · x/g(x) + g(y)/(g(x) + g(y)) · y/g(y) ) ≤ 1.

Homogeneity of g then gives us g(x + y)/(g(x) + g(y)) ≤ 1, which shows that the triangle inequality holds.
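The sublevel-set argument can be sanity-checked numerically on a concrete convex positive definite form, e.g. f(x) = x_1^4 + x_2^4 (so f^{1/4} is the 4-norm); no violation of the triangle inequality should occur:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

def g(v):
    # Polynomial norm: 4th root of the convex positive definite quartic x1^4 + x2^4
    return (v[0] ** d + v[1] ** d) ** (1.0 / d)

violations = 0
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    if g(x + y) > g(x) + g(y) + 1e-9:  # small tolerance for floating point
        violations += 1
```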
Theorem 2.2. The d th root of a degree-d form f is a norm if and only if f is strictly convex, i.e., f(λx + (1 − λ)y) < λf(x) + (1 − λ)f(y) for all x ≠ y and all λ ∈ (0, 1).

Proof. We will show that a degree-d form f is strictly convex if and only if f is convex and positive definite. The result will then follow from Theorem 2.1. Suppose f is strictly convex; then the first-order characterization of strict convexity gives us that f(y) > f(x) + ∇f(x)^T (y − x), ∀x ≠ y. For x = 0, the inequality becomes f(y) > 0, ∀y ≠ 0, as f(0) = 0 and ∇f(0) = 0. Hence, f is positive definite. Of course, a strictly convex function is also convex.
Approximating norms by polynomial norms
It is easy to see that not all norms are polynomial norms. For example, the 1-norm ||x||_1 = Σ_{i=1}^n |x_i| is not a polynomial norm. Indeed, all polynomial norms are differentiable everywhere except possibly at the origin, whereas the 1-norm is nondifferentiable whenever one of the components of x is equal to zero. In this section, we show that, though not every norm is a polynomial norm, any norm can be approximated to arbitrary precision by a polynomial norm (Theorem 3.1). The proof of this theorem is inspired by a proof of Ahmadi and Jungers in [1,4]. A related result is given by Barvinok in [11]. In that paper, he shows that any norm can be approximated by the d-th root of a nonnegative degree-d form, and quantifies the quality of the approximation as a function of n and d. The form he obtains, however, is not shown to be convex. In fact, in a later work [12, Section 2.4], Barvinok points out that it would be an interesting question to know whether any norm can be approximated by the d th root of a convex form with the same quality of approximation as for d-th roots of nonnegative forms. The result below is a step in that direction, though no quantitative result on the quality of approximation is given. Throughout, S^{n−1} denotes the unit sphere in R^n.
Theorem 3.1. Let || · || be any norm on R^n. For any ε > 0, there exist an even integer d and a convex positive definite form f of degree d such that max_{x∈S^{n−1}} | ||x|| − f^{1/d}(x) | ≤ ε. Note that, from Theorem 2.1, f^{1/d} is a polynomial norm as f is a convex positive definite form. To show this result, we start with the following lemma.
Lemma 3.2. Let || · || be any norm on R^n. For any ε > 0, there exist an even integer d and an n-variate convex positive definite form f of degree d such that

(2) ||x|| ≤ f^{1/d}(x) ≤ (1 + ε)||x||, ∀x ∈ R^n.

Proof. Throughout, we let B_α := {x | ||x|| ≤ α}. When α = 1, we drop the subscript and simply denote by B the unit ball of || · ||. We will also use the notation ∂S to denote the boundary of a set S and int(S) to denote its interior. Let ε̄ := ε/(1 + ε). The crux of the proof lies in proving that there exist an integer d and a positive definite convex form f of degree d such that

(3) B_{1−ε̄} ⊆ {x | f(x) ≤ 1} ⊆ int(B).

If we prove this, then Lemma 3.2 can be obtained as follows. Let x ∈ R^n. To show the first inequality in (2), we proceed by contradiction: suppose ||x|| > f^{1/d}(x); then x/f^{1/d}(x) ∉ B, although f(x/f^{1/d}(x)) = 1, which contradicts the second inclusion of (3). (If f^{1/d}(x) = 0 then x = 0 and the inequality holds.) To prove the second inequality in (2), note that the first inclusion of (3) gives us f^{1/d}((1 − ε̄)x/||x||) ≤ 1, which is equivalent to f^{1/d}(x/||x||) ≤ 1/(1 − ε̄) = 1 + ε. Multiplying by ||x|| on both sides gives us the result.
We now focus on showing the existence of a positive definite convex form f that satisfies (3). The proof is a simplification of the proof of Theorem 3.2 in [1,4] with some modifications. Let x ∈ ∂B_{1−ε̄/2}. To any such x, we associate a dual vector v(x) orthogonal to a supporting hyperplane of B_{1−ε̄/2} at x. By definition of a supporting hyperplane, we have that v(x)^T y ≤ v(x)^T x, ∀y ∈ B_{1−ε̄/2}. Let S(x) := {y ∈ ∂B | v(x)^T y > v(x)^T x}. It is easy to see that S(x) is an open subset of the boundary ∂B of B. Furthermore, since x ∈ int(B), x/||x|| ∈ S(x), which implies that S(x) is nonempty and that the family of sets S(x) (as x ranges over ∂B_{1−ε̄/2}) is a covering of ∂B.
By compactness of ∂B, we may extract a finite subcover S(x_1), . . . , S(x_m) of ∂B. We now define f(x) := Σ_{i=1}^m ( v(x_i)^T x / (v(x_i)^T x_i) )^d, where d is any even integer satisfying (5). The form f is convex as a sum of even powers of linear forms. Let L := {x | f(x) ≤ 1}. By (5), it is straightforward to see that B_{1−ε̄} ⊆ L. We now show that L ⊆ int(B). Let y ∈ L; then f(y) ≤ 1. If a sum of nonnegative terms is less than or equal to 1, then each term has to be less than or equal to 1, so y ∉ S(x_i) for every i. If moreover y ∈ ∂B, then y belongs to some S(x_i), which contradicts the previous statement. We have that ∂B ∩ L = ∅ as a consequence. However, as L and B both contain the zero vector, this implies that L ⊆ int(B). Note that the previous inclusion guarantees positive definiteness of f. Indeed, if f were not positive definite, L would be unbounded and could not be a subset of B (which is bounded).
Proof of Theorem 3.1. Let ε > 0 and denote by α := max_{x∈S^{n−1}} ||x||. By Lemma 3.2, applied with ε/α in place of ε, there exist an even integer d and a convex positive definite form f such that ||x|| ≤ f^{1/d}(x) ≤ (1 + ε/α)||x||, ∀x ∈ R^n. For x ∈ S^{n−1}, as ||x||/α ≤ 1, this inequality gives | f^{1/d}(x) − ||x|| | ≤ (ε/α)||x|| ≤ ε, which proves the claim. We remark that the polynomial norm constructed in Theorem 3.1 is the d th-root of an sos-convex polynomial. Hence, one can approximate any norm on R^n by searching for a polynomial norm using semidefinite programming. To see why the polynomial f in (6) is sos-convex, observe that linear forms are sos-convex and that an even power of an sos-convex form is sos-convex.
4 Semidefinite programming-based approximations of polynomial norms
Complexity
It is natural to ask whether testing if the d th root of a given degree-d form is a norm can be done in polynomial time.
In the next theorem, we show that, unless P = N P , this is not the case even when d = 4.
Theorem 4.1. Deciding whether the 4 th root of a quartic form is a norm is strongly NP-hard.
Proof. The proof of this result is adapted from a proof in [5]. Recall that the CLIQUE problem can be described thus: given a graph G = (V, E) and a positive integer k, decide whether G contains a clique of size at least k. The CLIQUE problem is known to be NP-hard [16]. We will give a reduction from CLIQUE to the problem of testing convexity and positive definiteness of a quartic form. The result then follows from Theorem 2.1. Let ω(G) be the clique number of the graph at hand, i.e., the number of vertices in a maximum clique of G. Consider the following quartic form. In [5], using in part a result in [20], it is shown that the quartic form above is convex and that b(x; y) is positive semidefinite. Here, γ is a positive constant defined as the largest coefficient in absolute value of any monomial present in some entry of the matrix [∂²b(x; y)/∂x_i∂y_j]_{i,j}. As Σ_i x_i^4 + Σ_i y_i^4 is positive definite and as we are adding this term to a positive semidefinite expression, the resulting polynomial is positive definite. Hence, ω(G) ≤ k if and only if the quartic on the right-hand side of the equivalence in (7) is convex and positive definite.
Note that this also shows that strict convexity is hard to test for quartic forms (this is a consequence of Theorem 2.2). A related result is Proposition 3.5 in [5], which shows that testing strict convexity of a polynomial of even degree d ≥ 4 is hard. However, this result is not shown there for forms, hence the relevance of the previous theorem.
Theorem 4.1 motivates the study of tractable sufficient conditions to be a polynomial norm. The sufficient conditions we consider next are based on semidefinite programming.
Sum of squares polynomials and semidefinite programming review
We start this section by reviewing the notion of sum of squares polynomials and related concepts such as sum of squares-convexity. We say that a polynomial f is a sum of squares (sos) if f(x) = Σ_i q_i²(x) for some polynomials q_i. Being a sum of squares is a sufficient condition for being nonnegative. The converse, however, is not true, as is exemplified by the Motzkin polynomial

(8) M(x_1, x_2, x_3) = x_1^4 x_2^2 + x_1^2 x_2^4 − 3 x_1^2 x_2^2 x_3^2 + x_3^6,

which is nonnegative but not a sum of squares [22]. The sum of squares condition is a popular surrogate for nonnegativity due to its tractability. Indeed, while testing nonnegativity of a polynomial of degree greater than or equal to 4 is a hard problem, testing whether a polynomial is a sum of squares can be done using semidefinite programming. This comes from the fact that a polynomial p of degree d is a sum of squares if and only if there exists a positive semidefinite matrix Q such that p(x) = z(x)^T Q z(x), where z(x) is the standard vector of monomials of degree up to d/2 (see, e.g., [23]). As a consequence, any optimization problem over the coefficients of a set of polynomials which includes a combination of affine constraints and sos constraints on these polynomials, together with a linear objective, can be recast as a semidefinite program. These types of optimization problems are known as sos programs.
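For a toy instance of the Gram-matrix characterization, take p(x) = x_1^4 + 2x_1^2x_2^2 + x_2^4 with z(x) = (x_1^2, x_1x_2, x_2^2); a PSD matrix Q with p = z^T Q z certifies that p is sos (this example and its Gram matrix are ours, not from the paper):

```python
import numpy as np

# p(x) = x1^4 + 2 x1^2 x2^2 + x2^4 = z(x)^T Q z(x) for z = (x1^2, x1*x2, x2^2).
# Q is diagonal and PSD, certifying p = (x1^2)^2 + 2 (x1*x2)^2 + (x2^2)^2.
Q = np.diag([1.0, 2.0, 1.0])

rng = np.random.default_rng(5)
for _ in range(100):
    x1, x2 = rng.standard_normal(2)
    z = np.array([x1 ** 2, x1 * x2, x2 ** 2])
    # Identity p(x) = z^T Q z holds at random test points
    assert np.isclose(z @ Q @ z, x1 ** 4 + 2 * x1 ** 2 * x2 ** 2 + x2 ** 4)

eigs = np.linalg.eigvalsh(Q)  # all nonnegative => Q is PSD
```

An SDP solver would search over such Q subject to the affine constraints matching the coefficients of p.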
Though not all nonnegative polynomials can be written as sums of squares, the following theorem by Artin [9] circumvents this problem using sos multipliers.
Theorem 4.2 (Artin [9]). For any nonnegative polynomial f , there exists an sos polynomial q such that q · f is sos.
This theorem in particular implies that if we are given a polynomial f, then we can always check its nonnegativity using an sos program that searches for q (of a fixed degree). However, the condition does not allow us to optimize over the set of nonnegative polynomials using an sos program (as far as we know). This is because, in that setting, products of decision variables arise from multiplying the polynomials f and q, whose coefficients are decision variables.
By adding further assumptions on f, Reznick showed in [24] that one can further pick q to be a power of Σ_i x_i². Theorem 4.3 (Reznick [24]). Let f be a positive definite form of degree d in n variables. Then there exists r ∈ N such that (Σ_i x_i²)^r · f is sos. Motivated by this theorem, the notion of r-sos polynomials can be defined: a polynomial f is said to be r-sos if (Σ_i x_i²)^r · f is sos. Note that it is clear that any r-sos polynomial is nonnegative and that the set of r-sos polynomials is included in the set of (r + 1)-sos polynomials. The Motzkin polynomial in (8), for example, is 1-sos although not sos.
To end our review, we briefly touch upon the concept of sum of squares-convexity (sos-convexity), which we will build upon in the rest of the section. Let H f denote the Hessian matrix of a polynomial f . We say that f is sos-convex if y T H f (x)y is a sum of squares (as a polynomial in x and y). As before, optimizing over the set of sos-convex polynomials can be cast as a semidefinite program. Sum of squares-convexity is obviously a sufficient condition for convexity via the second-order characterization of convexity. However, there are convex polynomials which are not sos-convex (see, e.g., [6]). For a more detailed overview of sos-convexity including equivalent characterizations and settings in which sos-convexity and convexity are equivalent, refer to [7].
Notation
Throughout, we will use the notation H_{n,d} (resp. P_{n,d}) to denote the set of forms (resp. positive semidefinite, aka nonnegative, forms) in n variables and of degree d. We will furthermore use the falling factorial notation (t)_0 = 1 and (t)_k = t(t − 1) · · · (t − (k − 1)) for a positive integer k.
A test for validity of polynomial norms
In this subsection, we assume that we are given a form f of degree d and we would like to test whether f 1/d is a norm using semidefinite programming.
Theorem 4.4. Let f be a degree-d form. Then f^{1/d} is a polynomial norm if and only if there exist c > 0, r ∈ N, and an sos form q(x, y) such that q(x, y) · y^T H_f(x)y is sos and (f(x) − c(Σ_i x_i²)^{d/2}) · (Σ_i x_i²)^r is sos. Furthermore, this condition can be checked using semidefinite programming.
Proof. It is immediate to see that if there exist such a c, r, and q, then f is convex and positive definite. From Theorem 2.1, this means that f 1/d is a polynomial norm.
Conversely, if f^{1/d} is a polynomial norm, then, by Theorem 2.1, f is convex and positive definite. As f is convex, the polynomial y^T H_f(x)y is nonnegative. Using Theorem 4.2, we conclude that there exists an sos polynomial q(x, y) such that q(x, y) · y^T H_f(x)y is sos. We now show that, as f is positive definite, there exist c > 0 and r ∈ N such that (f(x) − c(Σ_i x_i²)^{d/2}) · (Σ_i x_i²)^r is sos. Let f_min denote the minimum of f on the sphere. As f is positive definite, f_min > 0. We take c := f_min/2 and consider g(x) := f(x) − c(Σ_i x_i²)^{d/2}. We have that g is a positive definite form: indeed, if x is a nonzero vector in R^n, then g(x) = ||x||_2^d ( f(x/||x||_2) − c ) ≥ ||x||_2^d ( f_min − f_min/2 ) > 0 by homogeneity of f and definition of c. Using Theorem 4.3, ∃r ∈ N such that g(x)(Σ_i x_i²)^r is sos. For fixed r, a given form f, and a fixed degree d, one can search for c > 0 and an sos form q of degree d such that q(x, y) · y^T H_f(x)y is sos and (f(x) − c(Σ_i x_i²)^{d/2})(Σ_i x_i²)^r is sos using semidefinite programming. This is done by solving the semidefinite feasibility problem (9), where the unknowns are the coefficients of q and the real number c.
Remark 4.5. We remark that we are not imposing c > 0 in the semidefinite program above. This is because, in practice, especially if the semidefinite program is solved with interior point methods, the solution returned by the solver will be in the interior of the feasible set, and hence c will automatically be positive. One can slightly modify (9), however, to take the constraint c > 0 into consideration explicitly. Indeed, consider the semidefinite feasibility problem (10), where both the degree of q and the integer r are fixed. It is easy to check that (10) is feasible with γ ≥ 0 if and only if the last constraint of (9) is feasible with c > 0. To see this, take c = 1/γ and note that γ can never be zero.
To the best of our knowledge, we cannot use the approach described in Theorem 4.4 to optimize over the set of polynomial norms with a semidefinite program. This is because of the product of decision variables in the coefficients of f and q. The next subsection will address this issue.
Optimizing over the set of polynomial norms
In this subsection, we consider the problem of optimizing over the set of polynomial norms. To do this, we introduce the concept of r-sos-convexity. Recall that the notation H f references the Hessian matrix of a form f .
Positive definite biforms and r-sos-convexity
Definition 4.6. For an integer r, we say that a polynomial f is r-sos-convex if y^T H_f(x)y · (Σ_i x_i²)^r is sos.
Observe that, for fixed r, the property of r-sos-convexity can be checked using semidefinite programming (though the size of this SDP gets larger as r increases). Any polynomial that is r-sos-convex is convex. Note that the set of r-sos-convex polynomials is a subset of the set of (r + 1)-sos-convex polynomials and that the case r = 0 corresponds to the set of sos-convex polynomials.
It is natural to ask whether any convex polynomial is r-sos-convex for some r. Our next theorem shows that this is the case under a mild assumption.
Theorem 4.7. Any form f with H_f(x) ≻ 0 for all x ≠ 0 is r-sos-convex for some r ∈ N, where a lower bound on such r can be given in terms of η(f), the minimum of y^T H_f(x)y over the bisphere {||x|| = ||y|| = 1}.

Remark 4.8. Note that η(f) can also be interpreted as Remark 4.9. Theorem 4.7 is a generalization of Theorem 4.3 by Reznick. Note, though, that this is not an immediate generalization. First, y^T H_f(x)y is not a positive definite form (consider, e.g., y = 0 and any nonzero x). Secondly, note that the multiplier is (Σ_i x_i²)^r and does not involve the y variables. (As we will see in the proof, this is essentially because y^T H_f(x)y is quadratic in y.) Remark 4.10. Theorem 4.7 can easily be adapted to biforms of the type Σ_j f_j(x)g_j(y), where the f_j's are forms of degree d in x and the g_j's are forms of degree d̄ in y. In this case, there exist integers s, r such that (Σ_j f_j(x)g_j(y)) · (Σ_i x_i²)^r (Σ_i y_i²)^s is sos. For the purposes of this paper, however, and the connection to polynomial norms, we will show the result in the particular case where the biform of interest is y^T H_f(x)y.
We associate to any form f ∈ H_{n,d} the d-th order differential operator f(D), defined by replacing each occurrence of x_j with ∂/∂x_j: if f(x) = Σ_i c_i x_1^{a_{i1}} · · · x_n^{a_{in}}, where c_i ∈ R and a_{ij} ∈ N, then its differential operator is f(D) = Σ_i c_i ∂^{a_{i1}+···+a_{in}} / (∂x_1^{a_{i1}} · · · ∂x_n^{a_{in}}). Our proof will follow the structure of the proof of Theorem 4.3 given in [24] and reutilize some of the results given in the paper, which we quote here for clarity of exposition. Proposition 4.11 ([24], see Proposition 2.6). For any nonnegative integer r, there exist nonnegative rationals λ_k and integers α_{kl} such that (x_1² + . . . + x_n²)^r = Σ_k λ_k (α_{k1} x_1 + . . . + α_{kn} x_n)^{2r}.
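The operator f(D) can be implemented directly on a sparse monomial representation (a self-contained sketch, not code from the paper; polynomials are dicts mapping exponent tuples to coefficients):

```python
from collections import defaultdict

def diff_once(poly, j):
    """Differentiate a polynomial (dict: exponent tuple -> coeff) w.r.t. x_j."""
    out = defaultdict(float)
    for exps, c in poly.items():
        if exps[j] > 0:
            e = list(exps)
            e[j] -= 1
            out[tuple(e)] += c * exps[j]
    return dict(out)

def apply_D(f, g):
    """Apply f(D) to g: each monomial c * x^a of f acts as c * d^a (partial
    derivatives of order a_j in each variable x_j)."""
    out = defaultdict(float)
    for exps, c in f.items():
        term = g
        for j, a in enumerate(exps):
            for _ in range(a):
                term = diff_once(term, j)
        for e2, c2 in term.items():
            out[e2] += c * c2
    return {e: c for e, c in out.items() if c != 0}

# Example: f = x1^2 + x2^2, so f(D) is the Laplacian; g = x1^4 + x2^4.
f = {(2, 0): 1.0, (0, 2): 1.0}
g = {(4, 0): 1.0, (0, 4): 1.0}
lap = apply_D(f, g)  # 12*x1^2 + 12*x2^2
```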
We will focus throughout the proof on biforms of the following structure:

(12) F(x; y) = Σ_{i,j} y_i y_j p_{ij}(x),

where p_{ij}(x) ∈ H_{n,d} for all i, j, and some even integer d. Note that the polynomial y^T H_f(x)y (where f is some form) has this structure. We next present three lemmas which we will then build on to give the proof of Theorem 4.7. Proof. Using Proposition 4.11, we have (x_1² + . . . + x_n²)^s = Σ_l λ_l (α_l^T x)^{2s}, where λ_l ≥ 0 and α_l ∈ Z^n. Hence, applying Proposition 4.12, we get (13). Notice that Σ_{i,j} y_i y_j p_{ij}(α_l) is a quadratic form in y which is positive semidefinite by assumption, which implies that it is a sum of squares (as a polynomial in y). Furthermore, as λ_l ≥ 0 ∀l and (α_l^T x)^{2s−d} is an even power of a linear form, we have that λ_l (α_l^T x)^{2s−d} is a sum of squares (as a polynomial in x). Combining both results, we get that (13) is a sum of squares.
We now extend the concept introduced by Reznick in Proposition 4.13 to biforms. Lemma 4.16. For a biform F(x; y) of the structure as in (12), we define the biform Ψ_{s,x}(F(x; y)) as in (14), where Φ_s is as in (11). Proof. We start by showing that (14) holds; we then show that (15) holds. Lemma 4.17. Let F(x; y) be a biform of the structure in (12) which is positive on the bisphere. Proof. Fix y ∈ S^{n−1} and consider F_y(x) = F(x; y), which is a positive definite form in x of degree d. From Proposition 4.14, if s satisfies the bound given there, then Φ_s^{−1}(F_y) is positive semidefinite. As η(F) ≤ η(F_y) for any y ∈ S^{n−1}, we have that for such s, Φ_s^{−1}(F_y) is positive semidefinite regardless of the choice of y. Hence, Ψ_{s,x}^{−1}(F) is positive semidefinite (as a function of x and y).
We know by Lemma 4.17 that G(x; y) is positive semidefinite. Hence, using Lemma 4.15, we get that Ψ_{s,x}(G) is sos. Lemma 4.16 then gives us the conclusion. As a consequence, F(x; y)(x_1² + . . . + x_n²)^r is sos.
The last theorem of this section shows that one cannot bound the integer r in Theorem 4.7 as a function of n and d only.

Theorem 4.18. For any r ∈ N, there exists a trivariate form g with positive definite Hessian which is not r-sos-convex.

Proof. Consider the trivariate octic f given in [6]. It is shown in [6] that f has a positive definite Hessian, and that the (1, 1) entry of H_f(x), which we will denote by H_f^{(1,1)}(x), is 1-sos but not sos. We will show that for any r ∈ N, one can find s ∈ N\{0} such that g_s(x_1, x_2, x_3) := f(x_1, s x_2, s x_3) satisfies the conditions of the theorem. We start by showing that for any s, g_s has a positive definite Hessian. To see this, note that for any (x_1, x_2, x_3) ≠ 0, (y_1, y_2, y_3) ≠ 0, we have: (y_1, y_2, y_3) H_{g_s}(x_1, x_2, x_3) (y_1, y_2, y_3)^T = (y_1, s y_2, s y_3) H_f(x_1, s x_2, s x_3) (y_1, s y_2, s y_3)^T.
As y^T H_f(x)y > 0 for any x ≠ 0, y ≠ 0, this is in particular true when x = (x_1, s x_2, s x_3) and when y = (y_1, s y_2, s y_3), which gives us that the Hessian of g_s is positive definite for any s ∈ N\{0}.
We now show that for a given r ∈ N, there exists s ∈ N such that (x_1² + x_2² + x_3²)^r · y^T H_{g_s}(x)y is not sos. We use the following result from [25, Theorem 1]: for any positive semidefinite form p which is not sos, and any r ∈ N, there exists s ∈ N\{0} such that (Σ_{i=1}^n x_i²)^r · p(x_1, s x_2, . . . , s x_n) is not sos. As H_f^{(1,1)}(x) is 1-sos but not sos, we can apply the previous result. Hence, there exists a positive integer s such that (x_1² + x_2² + x_3²)^r · H_f^{(1,1)}(x_1, s x_2, s x_3) is not sos. This implies that (x_1² + x_2² + x_3²)^r · y^T H_{g_s}(x)y is not sos. Indeed, if (x_1² + x_2² + x_3²)^r · y^T H_{g_s}(x)y were sos, then it would remain sos after setting y = (1, 0, 0)^T. But we have (x_1² + x_2² + x_3²)^r · y^T H_{g_s}(x)y evaluated at y = (1, 0, 0)^T equal to (x_1² + x_2² + x_3²)^r · H_f^{(1,1)}(x_1, s x_2, s x_3), which is not sos. Hence, (x_1² + x_2² + x_3²)^r · y^T H_{g_s}(x)y is not sos, and g_s is not r-sos-convex.
Remark 4.19. Any form f with H_f(x) ≻ 0, ∀x ≠ 0, is strictly convex, but the converse is not true. To see this, note that any form f of degree d with a positive definite Hessian is convex (as H_f(x) ⪰ 0, ∀x) and positive definite (as, from a recursive application of Euler's theorem on homogeneous functions, f(x) = x^T H_f(x) x / (d(d − 1)) > 0 for x ≠ 0). From the proof of Theorem 2.2, this implies that f is strictly convex.
To see that the converse statement is not true, consider the strictly convex form f(x_1, x_2) = x_1^4 + x_2^4. We have H_f(x) = diag(12x_1², 12x_2²), which is not positive definite, e.g., when x = (1, 0)^T.
Optimizing over a subset of polynomial norms with r-sos-convexity
In the following theorem, we show how one can efficiently optimize over the set of forms f with H_f(x) ≻ 0, ∀x ≠ 0. Comparatively to Theorem 4.4, this theorem allows us to impose as a constraint that the d th root of a form be a norm, rather than simply testing whether it is. This comes at a cost, however: in view of Remark 4.19 and Theorem 2.2, we are no longer considering all polynomial norms, but a subset of them whose d th power has a positive definite Hessian.

Theorem 4.20. Let f be a degree-d form. Then H_f(x) ≻ 0, ∀x ≠ 0, if and only if there exist c > 0 and r ∈ N such that f(x) − c(Σ_i x_i²)^{d/2} is r-sos-convex.

Proof. If there exist c > 0, r ∈ N such that g(x) = f(x) − c(Σ_i x_i²)^{d/2} is r-sos-convex, then y^T H_g(x)y ≥ 0, ∀x, y. As the Hessian of (Σ_i x_i²)^{d/2} is positive definite for any nonzero x and as c > 0, we get H_f(x) ≻ 0, ∀x ≠ 0. Conversely, if H_f(x) ≻ 0, ∀x ≠ 0, then y^T H_f(x)y > 0 on the bisphere. Let f_min := min_{||x||=||y||=1} y^T H_f(x)y. We know that f_min is attained and is positive. Take c := f_min/(2d(d − 1)) and consider g(x) := f(x) − c(Σ_i x_i²)^{d/2}. Note that, by Cauchy-Schwarz, we have (Σ_i x_i y_i)² ≤ ||x||²||y||². If ||x|| = ||y|| = 1, we get y^T H_{(Σ_i x_i²)^{d/2}}(x) y = d(d − 2)(x^T y)² + d ≤ d(d − 1), and hence y^T H_g(x)y ≥ f_min − c · d(d − 1) = f_min/2 > 0 on the bisphere. Hence, H_g(x) ≻ 0, ∀x ≠ 0, and there exists r such that g is r-sos-convex from Theorem 4.7. For fixed r, the condition that there be c > 0 such that f(x) − c(Σ_i x_i²)^{d/2} is r-sos-convex can be imposed using semidefinite programming. This is done by searching for the coefficients of a polynomial f and a real number c such that (16) holds. Note that both of these conditions can be imposed using semidefinite programming.
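For d = 4, the Hessian of (Σ_i x_i²)^{d/2} = (x^T x)² is 8xx^T + 4(x^T x)I, so on the bisphere y^T H y = 8(x^T y)² + 4 ≤ 12 = d(d − 1); a quick numeric confirmation of this bound (our own check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4
worst = 0.0
for _ in range(2000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)  # restrict to the bisphere ||x|| = ||y|| = 1
    y = rng.standard_normal(3)
    y /= np.linalg.norm(y)
    H = 8.0 * np.outer(x, x) + 4.0 * (x @ x) * np.eye(3)  # Hessian of (x^T x)^2
    worst = max(worst, y @ H @ y)
```

The largest observed value stays below d(d − 1) = 12, with equality approached when y is aligned with x.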
Remark 4.21. Note that we are not imposing c > 0 in the above semidefinite program. As mentioned in Section 4.3, this is because in practice the solution returned by interior point solvers will be in the interior of the feasible set.
In the special case where f is completely free (i.e., when there are no additional affine conditions on the coefficients of f), one can take c ≥ 1 in (16) instead of c ≥ 0. Indeed, if there exist c > 0, an integer r, and a polynomial f such that f − c(∑_i x_i²)^{d/2} is r-sos-convex, then (1/c)f will be a solution to (16) with c ≥ 1 replacing c ≥ 0.
Norm approximation and regression
In this section, we study the problem of approximating a (non-polynomial) norm by a polynomial norm. We consider two different types of norms: p-norms with p noneven (and greater than 1) and gauge norms with a polytopic unit ball. For p-norms, we use as an example ||(x_1, x_2)^T|| = (|x_1|^7.5 + |x_2|^7.5)^{1/7.5}. For our polytopic gauge norm, we randomly generate an origin-symmetric polytope and produce a norm whose 1-sublevel set corresponds to that polytope. This allows us to determine the value of the norm at any other point by homogeneity (see [14, Exercise 3.34] for more information on gauge norms, i.e., norms defined by convex, full-dimensional, origin-symmetric sets). To obtain our approximations, we proceed in the same way in both cases. We first sample N = 200 points on the sphere S, which we denote by x_1, . . . , x_N. We then solve the following optimization problem with d fixed:

min_f ∑_{i=1}^N (f(x_i) − ||x_i||^d)²  s.t.  f a form of degree d, f sos-convex.   (17)

Problem (17) can be written as a semidefinite program, as the objective is a convex quadratic in the coefficients of f and the constraint has a semidefinite representation as discussed in Section 4.2. The solution f returned is guaranteed to be convex. Moreover, any sos-convex form is sos (see [17, Lemma 8]), which implies that f is nonnegative. One can numerically check to see if the optimal polynomial is in fact positive definite (for example, by checking the eigenvalues of the Gram matrix of a sum of squares decomposition of f). If that is the case, then, by Theorem 2.1, f^{1/d} is a norm. Furthermore, note that we have

(1/N) ∑_i (f^{1/d}(x_i) − ||x_i||)² ≤ ((1/N) ∑_i (f^{1/d}(x_i) − ||x_i||)^{2d})^{1/d} ≤ ((1/N) ∑_i (f(x_i) − ||x_i||^d)²)^{1/d},

where the first inequality is a consequence of concavity of z → z^{1/d} and the second is a consequence of the inequality |x − y|^{1/d} ≥ ||x|^{1/d} − |y|^{1/d}|. This implies that if the optimal value of (17) is equal to ε, then the sum of the squared differences between ||x_i|| and f^{1/d}(x_i) over the sample is less than or equal to N·(ε/N)^{1/d}.
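The sampling-and-fitting step of problem (17) can be sketched without an SDP solver by dropping the sos-convexity constraint and solving the resulting unconstrained least-squares fit over even bivariate forms; this is only an illustration of the regression step, not of the full semidefinite program, and the fitted f is not guaranteed to be convex.

```python
import math

p, d, N = 7.5, 8, 200

def pnorm(x1, x2):
    return (abs(x1) ** p + abs(x2) ** p) ** (1.0 / p)

# Even-monomial basis x1^d, x1^(d-2)*x2^2, ..., x2^d for a bivariate form.
def basis(x1, x2):
    return [x1 ** (d - 2 * j) * x2 ** (2 * j) for j in range(d // 2 + 1)]

# Sample N points on the unit circle; least squares for
#   minimize sum_i (f(x_i) - ||x_i||_p^d)^2  (sos-convexity dropped).
pts = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
       for i in range(N)]
A = [basis(x1, x2) for x1, x2 in pts]
b = [pnorm(x1, x2) ** d for x1, x2 in pts]

m = d // 2 + 1
# Normal equations A^T A w = A^T b, solved by Gauss-Jordan elimination.
M = [[sum(A[i][r] * A[i][c] for i in range(N)) for c in range(m)]
     + [sum(A[i][r] * b[i] for i in range(N))] for r in range(m)]
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(m):
        if r != col:
            fac = M[r][col] / M[col][col]
            M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
w = [M[r][m] / M[r][r] for r in range(m)]

def f(x1, x2):
    return sum(wj * bj for wj, bj in zip(w, basis(x1, x2)))

# Compare f^(1/d) with the p-norm on the sample (f should be positive there).
err = max(abs(max(f(x1, x2), 0.0) ** (1.0 / d) - pnorm(x1, x2))
          for x1, x2 in pts)
print("max deviation of f^(1/d) from the 7.5-norm on the sample:", err)
```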
It is worth noting that in our example, we are actually searching over the entire space of polynomial norms of a given degree. Indeed, as f is bivariate, it is convex if and only if it is sos-convex [7]. In Figure 2, we have drawn the 1-level sets of the initial norm (either the p-norm or the polytopic gauge norm) and the optimal polynomial norm obtained via (17) with varying degrees d. Note that when d increases, the approximation improves.
Figure 2: Approximation of non-polynomial norms by polynomial norms. (a) p-norm approximation; (b) polytopic norm approximation.

A similar method could be used for norm regression. In this case, we would have access to data points x_1, . . . , x_N corresponding to noisy measurements of an underlying unknown norm function. We would then solve the same optimization problem as the one given in (17) to obtain a polynomial norm that most closely approximates the noisy data.
Joint spectral radius and stability of linear switched systems
As a second application, we revisit a result of Ahmadi and Jungers from [1,4] on upper-bounding the joint spectral radius of a finite set of matrices. We first review a few notions relating to dynamical systems and linear algebra. The spectral radius ρ of a matrix A is defined as

ρ(A) := lim_{k→∞} ||A^k||^{1/k}.

The spectral radius happens to coincide with the largest magnitude of an eigenvalue of A. Consider now the discrete-time linear system x_{k+1} = A x_k, where x_k is the n × 1 state vector of the system at time k. This system is said to be asymptotically stable if for any initial starting state x_0 ∈ R^n, x_k → 0 as k → ∞. A well-known result connecting the spectral radius of a matrix to the stability of a linear system states that the system x_{k+1} = A x_k is asymptotically stable if and only if ρ(A) < 1.
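A minimal numeric illustration of this stability criterion, using a hypothetical 2×2 matrix with ρ(A) < 1 (eigenvalues obtained from the characteristic polynomial):

```python
import math

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def spectral_radius_2x2(A):
    # Eigenvalues of a 2x2 matrix from lambda^2 - tr*lambda + det = 0.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    # Complex conjugate pair: |lambda|^2 = det.
    return math.sqrt(det)

# Hypothetical matrix with rho(A) < 1: the iterates x_{k+1} = A x_k decay.
A = [[0.5, 0.4], [-0.3, 0.6]]
print(spectral_radius_2x2(A))
x = [1.0, -2.0]
for _ in range(200):
    x = mat_vec(A, x)
print(x)  # close to the origin
```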
In 1960, Rota and Strang introduced a generalization of the spectral radius to a set of matrices. The joint spectral radius (JSR) of a set of matrices A := {A_1, . . . , A_m} is defined as

ρ(A) := lim_{k→∞} max_{σ∈{1,...,m}^k} ||A_{σ_1} · · · A_{σ_k}||^{1/k}.

Analogously to the case where we have just one matrix, the value of the joint spectral radius can be used to determine stability of a certain type of system, called a switched linear system. A switched linear system models an uncertain and time-varying linear system, i.e., a system described by the dynamics

x_{k+1} = A_k x_k,

where the matrix A_k varies at each iteration within the set A. As done previously, we say that a switched linear system is asymptotically stable if x_k → 0 when k → ∞, for any starting state x_0 ∈ R^n and any sequence of products of matrices in A. One can establish that the switched linear system x_{k+1} = A_k x_k is asymptotically stable if and only if ρ(A) < 1 [19]. Though they may seem similar on many points, a key difference between the spectral radius and the joint spectral radius lies in the difficulty of computation: testing whether the spectral radius of a matrix A is less than or equal to (or strictly less than) 1 can be done in polynomial time. However, already when m = 2, the problem of testing whether ρ(A_1, A_2) ≤ 1 is undecidable [13]. An active area of research has consequently been to obtain sufficient conditions for the JSR to be strictly less than one which, for example, can be checked using semidefinite programming. The theorem that we revisit below is a result of this type. We start first by recalling a theorem linked to stability of a linear system.
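Although computing the JSR exactly is hard, the standard lower bound ρ(A) ≥ ρ(A_{σ_1}···A_{σ_k})^{1/k}, valid for every finite product, is easy to evaluate by enumeration. The two matrices below are hypothetical stand-ins, not those of the example discussed later:

```python
import itertools
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho_2x2(A):
    # Spectral radius of a 2x2 matrix via the characteristic polynomial.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)

# rho(A) >= rho(A_w)^(1/k) for every product A_w of length k:
# enumerate all words up to length 6 and keep the best bound.
mats = [[[0.6, 0.5], [0.0, 0.4]], [[0.4, 0.0], [0.7, 0.5]]]
lb = 0.0
for k in range(1, 7):
    for word in itertools.product(mats, repeat=k):
        P = word[0]
        for Mk in word[1:]:
            P = mat_mul(P, Mk)
        lb = max(lb, rho_2x2(P) ** (1.0 / k))
print("JSR lower bound:", lb)
```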
Theorem 5.1 (see, e.g., Theorem 8.4 in [18]). Let A ∈ R^{n×n}. Then, ρ(A) < 1 if and only if there exists a contracting quadratic norm; i.e., a function V : R^n → R of the form V(x) = √(x^T Q x), with Q ≻ 0, such that V(Ax) < V(x), ∀x ≠ 0.

The next theorem (from [1,4]) can be viewed as an extension of Theorem 5.1 to the joint spectral radius of a finite set of matrices. It is known that the existence of a contracting quadratic norm is no longer necessary for stability in this case. This theorem shows however that the existence of a contracting polynomial norm is.

Theorem 5.2 (from [1,4]). Let A := {A_1, . . . , A_m} be a finite set of matrices. Then ρ(A) < 1 if and only if there exists a contracting polynomial norm; i.e., a polynomial norm V such that V(A_i x) < V(x), ∀x ≠ 0 and ∀i ∈ {1, . . . , m}.

We remark that in [3], Ahmadi and Jungers show that the degree of f cannot be bounded as a function of m and n. This is expected from the undecidability result mentioned before.

Example 5.3. We consider a modification of Example 5.4 in [2] as an illustration of the previous theorem. We would like to show that the joint spectral radius of the two matrices A_1 and A_2 is strictly less than one.
To do this, we search for a nonzero form f of degree d that is sos-convex and satisfies

f(x) − f(A_i x) − (∑_j x_j²)^{d/2} sos, for i = 1, 2.   (19)
If problem (19) is feasible for some d, then ρ(A_1, A_2) < 1. A quick computation using the software package YALMIP [21] and the SDP solver MOSEK [8] reveals that, when d = 2 or d = 4, problem (19) is infeasible. When d = 6 however, the problem is feasible and we obtain a polynomial norm V = f^{1/d} whose 1-sublevel set is the outer set plotted in Figure 3. We also plot on Figure 3 the images of this 1-sublevel set under A_1 and A_2. Note that both sets are included in the 1-sublevel set of V as expected. From Theorem 5.2, the existence of a polynomial norm implies that ρ(A_1, A_2) < 1 and hence, the pair {A_1, A_2} is asymptotically stable.
Remark 5.4. As mentioned previously, problem (19) is infeasible for d = 4. Instead of pushing the degree of f up to 6, one could wonder whether the problem would have been feasible if we had asked that f of degree d = 4 be r-sos-convex for some fixed r ≥ 1. As mentioned before, in the particular case where n = 2 (which is the case at hand here), the notions of convexity and sos-convexity coincide; see [7]. As a consequence, one can only hope to make problem (19) feasible by increasing the degree of f.

Open Problem 1. We have given a semidefinite programming hierarchy for optimizing over a subset of polynomial norms. Is there a semidefinite programming hierarchy that optimizes over all polynomial norms?
Open Problem 2. Helton and Nie have shown in [17] that sublevel sets of forms that have positive definite Hessians are SDP-representable. This means that we can optimize linear functions over these sets using semidefinite programming. Is the same true for sublevel sets of all polynomial norms?
Another open problem that we think would be of interest relates to the quality of approximation of norms by polynomial norms.
Open Problem 3. As mentioned previously, Barvinok has shown in [11] that for any norm ||·||, there exists a nonnegative form f of degree 2d whose 2dth root approximates ||·|| within a multiplicative factor depending only on n and d. We have shown in Section 3 that for any ε > 0, there exists an sos-convex and positive definite polynomial f of degree d such that f^{1/2d}(x) ≤ ||x|| ≤ (1 + ε)f^{1/2d}(x).
Is it possible to quantify the degree d needed to obtain an approximation of precision ε as a function of ε and n only? How would the degree be impacted if we searched for an r-sos-convex polynomial instead? In this case, could we express d as a function of ε, n, and r only?
On the application side, it might be interesting to investigate how one can use polynomial norms to design regularizers in machine learning applications. Indeed, a very popular use of norms in optimization is as regularizers, with the goal of imposing additional structure (e.g., sparsity or low-rankness) on optimal solutions. One could imagine using polynomial norms to design regularizers that are based on the data at hand in place of more generic regularizers such as the 1-norm. Regularizer design is a problem that has already been considered (see, e.g., [10,15]) but not using polynomial norms. This can be worth exploring as we have shown that polynomial norms can approximate any norm with arbitrary accuracy, while remaining differentiable everywhere (except at the origin), which can be beneficial for optimization purposes.
On the Mechanism of MgATP-dependent Gating of CFTR Cl− Channels
CFTR, the product of the gene mutated in cystic fibrosis, is an ATPase that functions as a Cl− channel in which bursts of openings separate relatively long interburst closed times (τib). Channel gating is controlled by phosphorylation and MgATP, but the underlying molecular mechanisms remain controversial. To investigate them, we expressed CFTR channels in Xenopus oocytes and examined, in excised patches, how gating kinetics of phosphorylated channels were affected by changes in [MgATP], by alterations in the chemical structure of the activating nucleotide, and by mutations expected to impair nucleotide hydrolysis and/or diminish nucleotide binding affinity. The rate of opening to a burst (1/τib) was a saturable function of [MgATP], but apparent affinity was reduced by mutations in either of CFTR's nucleotide binding domains (NBDs): K464A in NBD1, and K1250A or D1370N in NBD2. Burst duration of neither wild-type nor mutant channels was much influenced by [MgATP]. Poorly hydrolyzable nucleotide analogs, MgAMPPNP, MgAMPPCP, and MgATPγS, could open CFTR channels, but only to a maximal rate of opening ∼20-fold lower than attained by MgATP acting on the same channels. NBD2 catalytic site mutations K1250A, D1370N, and E1371S were found to prolong open bursts. Corresponding NBD1 mutations did not affect timing of burst termination in normal, hydrolytic conditions. However, when hydrolysis at NBD2 was impaired, the NBD1 mutation K464A shortened the prolonged open bursts.
In light of recent biochemical and structural data, the results suggest that: nucleotide binding to both NBDs precedes channel opening; at saturating nucleotide concentrations the rate of opening to a burst is influenced by the structure of the phosphate chain of the activating nucleotide; normal, rapid exit from bursts occurs after hydrolysis of the nucleotide at NBD2, without requiring a further nucleotide binding step; if hydrolysis at NBD2 is prevented, exit from bursts occurs through a slower pathway, the rate of which is modulated by the structure of the NBD1 catalytic site and its bound nucleotide. Based on these and other results, we propose a mechanism linking hydrolytic and gating cycles via ATP-driven dimerization of CFTR's NBDs.
I N T R O D U C T I O N
CFTR, the protein product of the gene mutated in cystic fibrosis patients (Riordan et al., 1989), comprises two homologous halves linked by a regulatory domain, each half including a transmembrane domain and a cytosolic nucleotide binding domain (NBD), alternatively called ATP binding cassette (ABC) (Fig. 1). CFTR belongs to the superfamily of ABC transport proteins and, like other members, shows intrinsic ATP binding and hydrolytic activity (Li et al., 1996). But, unlike other ABC proteins, CFTR functions as a gated ion channel which, when open, allows Cl− ions to flow passively down their electrochemical gradient. CFTR is also an unusual ion channel, as it will open only after its regulatory domain has been phosphorylated by PKA (Cheng et al., 1991; Tabcharani et al., 1991; Picciotto et al., 1992), and the phosphorylated channel requires the continuous presence of hydrolyzable nucleoside triphosphates to sustain normal gating. Despite much investigation, the relationship between binding and hydrolysis of nucleotide at CFTR's two NBDs and the conformational changes that open and close the channel pore remains controversial and poorly understood (e.g., Aleksandrov and Riordan, 1998; Gadsby and Nairn, 1999; Sheppard and Welsh, 1999).
One limitation is insufficient information on CFTR structure. All ABC protein NBDs feature two highly conserved motifs called Walker A (GXXXXGKS/T) and Walker B (four hydrophobic residues followed by an aspartate), common to a variety of ATP-binding proteins (Walker et al., 1982), and an intervening "signature" sequence (consensus: LSGGQ) unique to ABC proteins. Accordingly, X-ray crystal structures of several isolated NBDs reveal that they all share the same basic fold (e.g., Fig. 1): an F1-ATPase-core α/β subdomain positions the Walker A and B motifs near the nucleotide phosphates, an antiparallel β subdomain interacts with the nucleotide base, and an α-helical subdomain harbors the signature sequence (Armstrong et al., 1998; Hung et al., 1998; Diederichs et al., 2000; Hopfner et al., 2000; Gaudet and Wiley 2001; Karpowich et al., 2001; Yuan et al., 2001). Oxygen atoms of the β- and γ-phosphate groups of the bound nucleotide contact the ε-amino group and/or the main chain nitrogen of the invariant Walker A lysine, and the Walker B aspartate helps coordinate the catalytic Mg2+ ion by hydrogen bonding to a water molecule in the Mg2+ ion's inner coordination shell (Hopfner et al., 2000; Gaudet and Wiley, 2001; Junop et al., 2001; Karpowich et al., 2001).
In the structures of NBD monomers, the ATP binding site is surprisingly open (Fig. 1), suggesting that, in vivo, the catalytic site is likely to be completed by interaction with another domain. Evidence of physical proximity (Hunke et al., 2000;Loo and Clarke, 2001;Qu and Sharom, 2001) and of functional interactions between NBDs (e.g., Senior and Bhagat, 1998;Ueda et al., 1999;Gao et al., 2000;Hou et al., 2000) suggests that a second NBD monomer completes the catalytic site. This is supported by the finding of NBD homodimers in certain crystals (Hung et al., 1998;Diederichs et al., 2000;Hopfner et al., 2000;Chang and Roth, 2001;Junop et al., 2001;Locher et al., 2002). Moreover, in two of the most complete structures of ABC-like proteins (Hopfner et al., 2000;Locher et al., 2002), the two NBDs were found to interact in a head-to-tail configuration that sandwiches two active sites in the dimer interface, each with Walker A and B motifs of one NBD and the signature sequence of the other NBD contacting the same ATP molecule (compare Jones and George, 1999).
In CFTR, the different consequences of introducing equivalent mutations in either NBD1 or NBD2 hint at this functional asymmetry. In several other ABC proteins, mutation of the Walker A lysine or Walker B aspartate in either NBD abolishes both ATP hydrolysis and substrate transport (for review see Schneider and Hunke 1998). However, in CFTR the Walker A NBD2 mutation K1250A abolished ATP hydrolysis, whereas the NBD1 mutation K464A simply reduced overall hydrolytic activity (Ramjeesingh et al., 1999); and biochemical studies of Walker B aspartate mutations in CFTR (D572N in NBD1, D1370N in NBD2) have not yet been performed. The consequences of NBD mutations for CFTR channel gating were even more asymmetric. Thus, the K1250A mutation dramatically prolonged burst duration, suggesting that hydrolysis at NBD2 might be coupled to burst termination Gunderson and Kopito, 1995), whereas the NBD1 mutations K464A, Q552A, and Q552H somewhat slowed channel opening to a burst, suggesting that NBD1 might be a site of ATP interactions governing opening . But, since mutations and chemical modifications at NBD2 also strongly affected opening rates, a possible interaction between the two NBDs during channel opening was postulated Gunderson and Kopito, 1995;Cotten and Welsh, 1998), and more recent studies of Walker A lysine mutants have suggested that NBD1 might also be involved in controlling burst duration (Powe et al., 2002).
Figure 1. (Top) Cartoon of CFTR's domain topology, comprising two transmembrane domains (TMDs), a regulatory (R) domain, and NBD1 (green) and NBD2 (blue). (Below) Ribbon representations of homology models of NBD1 (green) and NBD2 (blue), based on the crystal structure of HisP-ATP (Hung et al., 1998), showing exposed positions of ATP molecules (yellow), Walker A lysines (red), and NBD2 Walker B aspartate (green). Homology models were built using the automated comparative modeling server "Swiss-model" (http://www.expasy.ch/swissmod/SWISS-MODEL.html; Guex and Peitsch, 1997) with alignments optimized on the basis of a ClustalW multiple sequence alignment of 20 bacterial and human NBDs.

In an attempt to unravel some of this complexity, we have examined the gating kinetics of wild-type (WT) and
mutant CFTR channels expressed in Xenopus oocytes. We studied in detail the dependence of channel gating on [MgATP], gating in the presence of poorly hydrolyzable nucleotide analogs, as well as the effects of mutating residues in the Walker A (K464A and K1250A) and Walker B motifs (in particular, D1370N in NBD2). We addressed two main questions. First, do NBD1 and NBD2 in CFTR regulate separate steps of the channel gating cycle, and are the NBDs independent or do they interact? The second question concerns the relationship between nucleotide binding and hydrolysis and CFTR channel gating. In conventional ligand-gated channels, binding of ligand favors channel opening, and closing is simply the reverse of that process. In CFTR, interaction with ligand (MgATP) is also necessary for gating, but there is disagreement about whether the ligand dissociates unchanged (in which case gating can be described by linear, equilibrium models: Venglarik et al., 1994; Schultz et al., 1995; Aleksandrov and Riordan, 1998) or is hydrolyzed at some point in the gating cycle (cyclical models, violating microscopic reversibility: Baukrowitz et al., 1994; Hwang et al., 1994; Gunderson and Kopito, 1995; Zeltwanger et al., 1999). We interpret our results to suggest that ATP must bind at both catalytic sites before a CFTR channel can open, that opening is rate limited by a slow step after binding that is sensitive to the structure of the polyphosphate chain, that there is no further nucleotide binding to the open channel, that hydrolysis at NBD2 precedes normal rapid channel closing, and that the integrity of the NBD1 Walker A motif, and nucleotide bound there, influences the rate of exit from locked-open burst states.
Molecular Biology
Human epithelial CFTR in a Xenopus expression vector was used as a template (pGEMHE-WT: Chan et al., 2000) for all point mutations, introduced using a QuickChange site-directed mutagenesis kit (Stratagene). Sequences were checked by automated sequencing. Plasmid DNA was linearized by NheI digestion and in vitro transcribed from the T7 promoter with mMessage mMachine kit (Ambion) as described (Chan et al., 2000).
Xenopus Oocyte Expression
Xenopus laevis oocytes were isolated, collagenase treated, and injected as described (Chan et al., 2000). Amounts of cRNA injected were adjusted to vary the level of expression: up to 40 ng/oocyte was required for high expression of K1250A or K464A/K1250A mutant channels, whereas 0.1-0.25 ng/oocyte sufficed for single channel recordings of WT, K464A, or D1370N channels.
Data Analysis
Data analysis was essentially as described (Chan et al., 2000; Csanády et al., 2000). CFTR channels typically open from relatively long closures into open "bursts" interrupted by short "flickery" closures. This is reflected in analyses of records from a single channel that clearly identify, within the bandwidth of our measurements, the presence of a single population of open states (average duration ≈ mean interval between flickery closures) but two distinct populations of closed states. [MgATP] or phosphorylation status do not affect the duration of the short-lived, intraburst flickery closures, or the duration of individual openings within bursts, but they do influence the duration (τib) of the long-lived interburst closures, and phosphorylation increases the number of openings occurring within one burst (Table I; Gunderson and Kopito, 1994; Winter et al., 1994). In this paper we analyze and discuss the dependence on [MgATP] and phosphorylation of the duration, τb, of bursts and of their frequency of occurrence, 1/τib, which we usually refer to as the "rate of channel opening to a burst," but we sometimes simply call it channel "opening rate." Changes in burst duration signal changes in the "rate of channel closing from a burst," sometimes called channel "closing rate." Because we find that the burst duration distributions are described by single exponentials under almost all conditions (see Fig. 4, below; though the lifetimes vary between conditions), this confines extraction of relevant steady state kinetic parameters to determination of mean burst and interburst durations.
For kinetic analysis of patches in which individual channel opening and closing events could be discerned (protocols (a) and (b), below), digitized segments of records were baseline subtracted (to remove slow drifts and small, <0.5 pA, changes in the magnitude of the seal current accompanying solution exchange), and idealized using half-amplitude threshold crossing. The resulting events lists were used to generate dwell-time distributions at all conductance levels that were then simultaneously fitted with a maximum likelihood algorithm (Csanády, 2000) to determine rate constants (rAB: average number of transitions from state A to state B occurring per unit time of dwell time in state A, measured in s−1). Likelihood of the following three-state scheme was optimized to extract the rate constants indicated:

C ⇄ O ⇄ CF,

in which "C" is the long interburst closed state, "CF" is the short flickery closed state, and so the simple rCO and rOC rate constants describe directly the rates of opening to a burst and closing from a burst. However, the parameters derived to describe the observed bursting behavior (burst duration, τb, interburst duration, τib, as well as rates of opening to and closing from a burst) are essentially model independent: only the particular combination of elementary rate constants used to derive them (see Table I, legend), not the numerical values of the parameters, would be different had we instead fitted the alternative three-state "C1-C2-O" model to determine burst parameters. An artificial dead time of 6.5 ms (> filter dead time of ≈3.6 ms) was imposed to implement a correction for events missed due to the limited bandwidth (Csanády, 2000). Our measurements can be grouped in three main classes: (a) those from patches containing ≤8 observed channels (Table I, A and B, Fig. 5); (b) those from patches containing >8 observed channels, which we analyzed only in conditions of low open probability (Po) when opening and closing of individual channels could be discerned and the highest conductance level attained was less than or equal to level 8 (Figs. 2, 7, 8, and 11, Table I, C and D); and (c) records from patches containing hundreds of channels, in which individual gating events could not easily be discerned (Figs. 3, 9, and 10).
(a) Multichannel kinetic analysis from patches containing ≤8 simultaneously open channels. In 5 mM MgATP during PKA application, Po was relatively high and only patches in which ≤8 simultaneously open channels were observed during the entire recording were analyzed to extract kinetic parameters in this condition (Table I A). For some of these patches, recordings could also be made after washing out the PKA and again after subsequently reapplying it, to examine reversible effects of phosphorylation status on the kinetic parameters (Table I B). In some cases, two estimates are given for τib and its reciprocal rCO that depend strongly on assumptions made about the number of active channels in the patch. One, the "total" estimate, is the mean from all recordings, using the observed maximum number of simultaneously open channels, N′, as an estimate of N, the true number of active channels present. The other, "best" estimate was obtained including only patches in which the presence of an unrecognized channel (hypothesis N > N′) could be excluded with >90% confidence, using statistical tests. As excluding records in which the number of channels is not known with confidence tends to select for records in which Po is highest, the second estimate is a minimum estimate for τib and hence a maximum estimate for rCO.
(b) Multichannel kinetic analysis from patches containing >8 simultaneously open channels. For kinetic analysis in conditions of lower Po, we used patches with higher numbers of channels to obtain a sufficient number of events. Due to the practical limits imposed by computer processing times (Csanády, 2000), the maximum likelihood fitting programs allow a maximum N of eight. We therefore analyzed records only from patches in which the maximum conductance level did not exceed eight open channels under all test and reference conditions used, and the maximum likelihood fit was done assuming N = 8. We do not report absolute values for τib and rCO in these cases, but only values relative to some other experimental condition applied to the same patch, and these should be relatively insensitive to N. Thus, for MgATP dose-response curves (Fig. 2), rates were normalized to those in bracketing segments at 5 mM MgATP; for the poorly hydrolyzable nucleotides, rates were normalized to those obtained in the same patches at 10 μM (Figs. 7 and 11) or 50 μM MgATP (Fig. 8); for K1250A mutant openings in 10 μM MgATP, rates were normalized to those in nominally MgATP-free bath solution.
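The normalization in procedure (b) can be sketched as follows; the rates below are illustrative numbers, not measured values:

```python
# Rates measured at a test [MgATP] are expressed relative to the mean of
# the two bracketing reference segments recorded at 5 mM MgATP in the
# same patch.  (Illustrative data only.)
segments = [
    ("5 mM", 1.10),   # bracketing reference, opening rate in s^-1
    ("50 uM", 0.45),  # test segment
    ("5 mM", 1.00),   # bracketing reference
]

def relative_rate(segments, test_index):
    # Nearest 5 mM reference before and after the test segment.
    before = next(r for lbl, r in reversed(segments[:test_index]) if lbl == "5 mM")
    after = next(r for lbl, r in segments[test_index + 1:] if lbl == "5 mM")
    return segments[test_index][1] / ((before + after) / 2.0)

print(relative_rate(segments, 1))  # 0.45 normalized to mean of 1.10 and 1.00
```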
(c) Macroscopic current relaxation analysis. To obtain information on τb for mutants or conditions in which slow gating made it difficult to collect a sufficient number of events for the multichannel kinetic analyses described above, we analyzed macroscopic current decay upon nucleotide removal. Records (Figs. 3 and 10), or sums of records (Fig. 9), with several tens of open channels at t = 0 were fitted with single or double exponential decay functions by nonlinear least squares (Sigmaplot; Jandel Scientific). Kinetic parameters were obtained using a maximum likelihood simultaneous fit to dwell-time histograms at all conductance levels (Csanády, 2000). Rate constants (rCO, rOC, rOF, rFO) were extracted applying the three-state "C-O-CF" model (see materials and methods) and derived parameters were calculated as follows: burst duration, τb = (1/rOC)·[1 + (rOF/rFO)]; interburst duration, τib = 1/rCO; flicker duration, τF = 1/rFO; number of flickers per burst, nF = rOF/rOC. Durations are given in ms, rates in s−1. For τib and rCO values, the top rows give "total" estimates and the bottom ones "best" estimates (as defined in materials and methods). The values given in C (obtained in experiments as in Fig. 2, A-C) show that frequency and duration of flickery closures do not depend on [MgATP]. The significance of the slight prolongation of τF for the D1370N mutant and for WT in 5 mM MgAMPPNP is unknown, but the rate rOF remained 1-2 s−1 for all conditions and mutants tested.
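The derived burst parameters quoted above translate directly into code; the rate constants below are illustrative values, not data from Table I:

```python
# Derived gating parameters from the C-O-C_F rate constants, using the
# formulas quoted above (rates in s^-1, durations converted to ms).
def burst_parameters(r_CO, r_OC, r_OF, r_FO):
    tau_b = (1.0 / r_OC) * (1.0 + r_OF / r_FO) * 1000.0  # burst duration, ms
    tau_ib = 1000.0 / r_CO                               # interburst duration, ms
    tau_F = 1000.0 / r_FO                                # flicker duration, ms
    n_F = r_OF / r_OC                                    # flickers per burst
    return tau_b, tau_ib, tau_F, n_F

# Illustrative (not measured) rate constants:
tau_b, tau_ib, tau_F, n_F = burst_parameters(r_CO=1.0, r_OC=4.0, r_OF=1.5, r_FO=50.0)
print(tau_b, tau_ib, tau_F, n_F)
```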
Analysis of isolated burst distributions.
We also examined the distribution of isolated (i.e., containing no superimposed openings) bursts from experiments of type (a) and (b) (Csanády et al., 2000). For each recording, a threshold duration was chosen to distinguish flickery closures from interburst closures, by minimizing the total number of misclassified events (Jackson et al., 1983). Bursts collected in the same condition from several patches were pooled and fitted, using an unbinned maximum likelihood optimization, with either a single- or a double-exponential distribution. Isolated bursts from records that also included a few superimposed openings were added to the pool only if distribution analysis of that burst population rid of superimposed openings yielded a mean τb within 10% of that obtained by our standard multichannel kinetic analysis. To display the distributions and fits (Fig. 4, A-F), dwell times were ranked from the longest to the shortest and rank number was divided by the total number of bursts included, yielding the ordinate used for the plots, i.e., the fraction of observed bursts with duration greater than or equal to the dwell time given on the abscissa; the plot approximates the survivor function 1 − F(t), where F(t) is the distribution function, for the random variable open-burst dwell time (t). Unless otherwise noted, data are given as mean ± SEM (n), where n represents the number of observations.
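The threshold classification of closures and the rank-based survivor-function construction described above can be sketched as follows, with illustrative dwell times:

```python
# Closures shorter than a chosen threshold are treated as intraburst
# flickers; longer ones as interburst closures.  The empirical survivor
# function 1 - F(t) is built by ranking burst durations (longest first)
# and dividing rank by the total count.  Data are illustrative only.
def survivor(durations):
    ordered = sorted(durations, reverse=True)
    n = len(ordered)
    # (duration t, fraction of bursts with duration >= t)
    return [(t, (rank + 1) / n) for rank, t in enumerate(ordered)]

closures = [0.004, 0.350, 0.006, 0.900, 0.005]  # s
threshold = 0.020                               # s, separates flickers
flickers = [c for c in closures if c < threshold]
interbursts = [c for c in closures if c >= threshold]

bursts = [0.12, 0.45, 0.30, 0.80]               # s
for t, frac in survivor(bursts):
    print(t, frac)
```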
RESULTS

[MgATP] Dependence of Rate of Opening to a Burst for Catalytic Site Mutants Suggests Involvement of Both NBDs in Channel Opening
WT and mutant CFTR channels in inside-out patches excised from oocytes were activated by phosphorylation to steady state with 300 nM PKA catalytic subunit plus 5 mM MgATP applied to the cytosolic surface. Withdrawal of the kinase, leaving the MgATP, resulted in a rapid partial reduction in channel activity, believed due to incomplete dephosphorylation by persistently active membrane-bound phosphatases, after which channel activity remained relatively constant for several minutes. After this prephosphorylation, patches were exposed to a range of [MgATP], each test interposed between periods at the reference [MgATP] of 5 mM (Fig. 2 A). CFTR openings occur in bursts, which typically include one or more short shut periods. Reducing [MgATP] did not change the average burst duration, τb (or the duration of the short, intraburst flickery closures, τF, or the average duration of individual intraburst openings, see Table I). However, the long closed times separating bursts (interburst duration, τib) were visibly longer at the lower [MgATP] (Fig. 2 A, Table I). Maximum likelihood fitting of dwell-time distributions at all conductance levels (Csanády, 2000) confirmed that the rate of closing from a burst (rOC, in terms of a C-O-CF scheme, see materials and methods) was approximately the same at all [MgATP] tested (Fig. 2 E, blue circles), whereas the rate of opening to a burst (rCO = 1/τib) was a saturable function of [MgATP] yielding, for WT, an effective dissociation constant (K0.5) of 56 ± 5 μM for MgATP (Fig. 2 D, blue circles).
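The saturable dependence of the opening rate on [MgATP] follows a Michaelis-Menten form; a minimal sketch using the WT K0.5 of 56 μM reported above (the maximal opening rate here is an assumed placeholder, not a measured value):

```python
# Michaelis-Menten dependence of the rate of opening to a burst on [MgATP].
K_half = 56.0  # uM, effective dissociation constant from the WT fit
r_max = 1.0    # s^-1, illustrative (assumed) maximal opening rate

def opening_rate(atp_uM):
    return r_max * atp_uM / (K_half + atp_uM)

print(opening_rate(56.0))    # half-maximal at [MgATP] = K_0.5
print(opening_rate(5000.0))  # near saturation at the 5 mM reference
```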
As the rate of opening, r CO , does not increase linearly with [MgATP], channel opening cannot be a simple bimolecular MgATP-binding reaction, C→O, but must require at least a C1-C2-O scheme (compare Fig. 12 A, below) with distinct MgATP binding (C1→C2) and channel-opening (C2→O) steps. This three-state scheme reduces kinetically to C1→O if MgATP binding is fast Relative opening (D) and closing (E) rates (mean Ϯ SEM, 2 Յ n Յ 7) from analysis of records as in A-C for WT (blue circles), K464A (red triangles), and D1370N (green squares) channels at 10 M Յ [MgATP] Յ 5 mM, plotted on semilogarithmic axes. Opening rate and closing rate at each test [MgATP] were normalized to the mean values measured for bracketing segments at 5 mM MgATP (procedure (b), materials and methods). Curves in D show Michaelis-Menten fits, yielding K 0.5 of 56 Ϯ 5, 807 Ϯ 185, 391 Ϯ 118 M, and r COmax of 1.02, 1.16, and 1.08, for WT, K464A, and D1370N, respectively. For display, the data were further normalized to these r COmax values. Mean absolute opening rates at 5 mM MgATP are given in Table I B. In E, the measured relative closing rates at each [MgATP] have been multiplied by the mean absolute closing rate for each construct in 5 mM MgATP (Table I B).
22
CFTR Channel Gating Mechanism compared with channel opening, and it predicts a Michaelis-Menten dependence of r CO on [MgATP], as we observe (Fig. 2 D). Evidently, at saturating [MgATP], opening of a WT CFTR channel to a burst is rate limited by a slow MgATP-independent step, but at subsaturating [MgATP] a MgATP binding step appears to limit the rate of channel opening to a burst ( Fig. 2 D).
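Under the rapid-equilibrium binding assumption, the scheme above gives the hyperbolic dependence used for the fits in Fig. 2 D. A minimal sketch, using the WT fit parameter (the function name and normalization to rCOmax = 1 are ours):

```python
def opening_rate(atp_uM, K05_uM=56.0, r_max=1.0):
    """Michaelis-Menten opening rate for a C1<->C2->O scheme with rapid
    MgATP binding: r_CO = r_max * [MgATP] / (K0.5 + [MgATP])."""
    return r_max * atp_uM / (K05_uM + atp_uM)

# At [MgATP] = K0.5 the opening rate is half-maximal; at 5 mM it is near r_max,
# so the slow MgATP-independent step (C2 -> O) dominates at high [MgATP].
```

Because the K464A and D1370N fits differ mainly in K0.5 (807 and 391 µM) while rCOmax is nearly unchanged, evaluating this function at, say, 50 µM shows directly why the mutants open much less frequently only at low [MgATP].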
In an attempt to discern at which NBD this binding event occurs, we mutated presumed key catalytic site residues in either NBD. Compared with WT, both K464A (Walker A lysine in NBD1) and D1370N (Walker B aspartate in NBD2) mutant CFTR channels opened less frequently at low [MgATP] (e.g., 50 µM; Fig. 2, A–D), and this defect could be largely overcome by raising the [MgATP], so that, at saturating [MgATP], opening rates of WT, K464A, and D1370N channels differed by less than a factor of two (Table I). Therefore, each of these mutations substantially reduced the apparent affinity for MgATP activation of channel opening (Fig. 2 D), consistent with both mutations altering the binding, rather than the opening, step. As expected (see below) for channels in which opening rate, but not closing rate, is sensitive to [MgATP], the dependence of Po on [MgATP] was not very different from that of rCO.

Similar kinetic analysis of patches containing few channels proved technically difficult for K1250A CFTR (NBD2 Walker A lysine mutant) due to the extremely prolonged bursts (see Fig. 6 C, below), which precluded collection of enough events to reliably estimate absolute values of rCO or Po. So we recorded macroscopic current in patches with hundreds or thousands of WT or K1250A channels (Fig. 3, A and B), and determined relative Po as a function of [MgATP] (Fig. 3 C) by normalizing current amplitude at each test [MgATP] to that during bracketing exposures at 5 mM MgATP (Fig. 3, A and B). The curve for K1250A was strongly shifted to higher [MgATP] and was still not saturated at 10 mM MgATP. This shift could reflect effects of the mutation on [MgATP] dependence of rates of opening to and/or closing from bursts.
However, single exponential fits to the macroscopic current decay upon nucleotide removal showed that the time constant, which reflects only channel closure from bursts (since, in the absence of MgATP, rCO = 0) and provides an estimate of mean burst duration, was unaffected by changes in [MgATP] (Fig. 3 B; Dousmanis et al., 2002). In fact, this relationship implies that the effective dissociation constant for MgATP activation of opening of K1250A channels is likely even larger than is apparent in Fig. 3 C because the other effect of the K1250A mutation, marked slowing of channel closure from bursts, would by itself shift the Po versus [MgATP] curve to lower [MgATP], opposite to our experimental observation.
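The single-exponential relaxation fit used here can be illustrated as follows. This is a sketch, not the actual fitting routine; it linearizes I(t) = I0·exp(−t/τ) by fitting a straight line to log I, which is adequate for noiseless or lightly noisy decays.

```python
import numpy as np

def fit_relaxation_tau(t, current):
    """Estimate the decay time constant tau of I(t) = I0 * exp(-t / tau)
    by linear least squares on log(I). After nucleotide washout r_CO = 0,
    so tau estimates the mean open-burst duration."""
    slope, _intercept = np.polyfit(np.asarray(t, dtype=float),
                                   np.log(np.asarray(current, dtype=float)), 1)
    return -1.0 / slope
```

For decays with substantial baseline noise, a nonlinear fit (e.g., `scipy.optimize.curve_fit` on the exponential itself) would weight the tail less heavily than the log-linear shortcut does.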
As increased [MgATP] could largely restore the deficits in opening caused by mutations in either catalytic site, a simple interpretation is that each of the mutations impairs MgATP binding and that both NBD1 and NBD2 catalytic sites must be occupied by MgATP before a CFTR channel can open to a burst.
Influence of [MgATP] and Phosphorylation Status on Burst Duration of WT CFTR Channels
Although the average rate of WT channel closing from bursts does not depend on [MgATP] (Fig. 2 E; see also Anderson and Welsh, 1992; Gunderson and Kopito, 1994; Venglarik et al., 1994; Winter et al., 1994; Csanády et al., 2000; but cf. Zeltwanger et al., 1999), the mean closing rate is reduced roughly twofold by strong phosphorylation (Table I; see also Hwang et al., 1994; Csanády et al., 2000). Because these values are averages extracted from fits to kinetic data from multichannel records, we examined the distribution of isolated burst dwell times to determine whether more than one component was present. Durations of isolated bursts were obtained from records with few, if any, superimposed openings (see materials and methods and Csanády et al., 2000). Values were pooled for each of three conditions: at saturating, 5 mM, [MgATP] during exposure to PKA (Fig. 4 A) and after PKA washout (Fig. 4 B), and at 10 or 15 µM [MgATP] after PKA washout (Fig. 4 C). In all three conditions, the distribution of WT CFTR channel burst durations revealed a single population, and the distribution means confirmed the results of multichannel kinetic analysis: viz., highly phosphorylated CFTR channels closed on average about twofold more slowly than partially phosphorylated channels (Table I). Also like WT, the burst duration distributions of K464A mutant channels were well described by single exponential functions (Fig. 4, D–F).

(Figure 3 legend, C) […] normalized to the mean bracketing level at 5 mM MgATP, yielded least-squares Michaelis fit parameters for WT: Pomax = 1.04 ± 0.01, K0.5 = 57 ± 2 µM; for K1250A: Pomax = 2.45 ± 0.88, K0.5 = 6.5 ± 4.8 mM; for display, WT (circles) and K1250A (inverted triangles) data (mean ± SD, 3 ≤ n ≤ 9) were renormalized to these Pomax values. Because 10 mM, the highest [MgATP] used, was still far from saturating for K1250A channels, the fit for this mutant is less accurate, evident from large errors on fit parameters.
We performed a limited analysis of the gating kinetics of CFTR channels mutated at two other presumed catalytic site residues in NBD1, the invariant Walker B aspartate, D572 (Fig. 5, C and E), and the adjacent residue, which is a serine (S573; Fig. 5, D and E) in CFTR's NBD1 but is the conserved Walker B glutamate in most NBDs (though it is an aspartate in NBD1 of some ABC-C subfamily members). The mean closing rate from bursts was not substantially altered by these NBD1 mutations (compare Fig. 5 E and Table I): for D572N, rOC (5 mM MgATP + PKA) = 1.4 ± 0.2 s⁻¹ (n = 9), and rOC (5 mM MgATP) = 3.1 ± 0.6 s⁻¹ (n = 3); for S573E, rOC (5 mM MgATP + PKA) = 2.2 ± 0.3 s⁻¹ (n = 7).
Nor was the opening rate of either mutant orders of magnitude lower than that of WT channels in the presence of PKA and saturating [MgATP] (Fig. 5, A–D). Thus, for D572N CFTR, rCO (5 mM MgATP + PKA) = 0.34 ± 0.1 s⁻¹ (n = 9), and rCO (5 mM MgATP) = 0.35 ± 0.1 s⁻¹ (n = 3), although these values ("total" estimates, see materials and methods) likely overestimate true opening rate, as the somewhat lower maximal Po (0.18 vs. 0.29 for WT) of this mutant precluded […].

(Figure 4 legend, continued) Fitted curves (through data points) show single exponentials from maximum-likelihood fits. Improvement of the fit by inclusion of a second exponential component was judged using the algorithm described in Csanády et al. (2000). Only for K464A at µM MgATP (F) could the likelihood be significantly increased by including a second component, though with a shorter (but not longer; Ikuma and Welsh, 2000) mean: τ1 = 30 ms, a1 = 0.17; τ2 = 263 ms, a2 = 0.83; increase in log likelihood, ΔLL = 8.3; number of bursts fitted, M = 263; giving ΔLL − ln(2M) = 2.0. The small differences between means at mM and µM MgATP (B vs. C, E vs. F) may be only apparent, as the mean τb, estimated by multichannel kinetic fits, from these same stretches of record at µM MgATP is not significantly different from that during intervening stretches in 5 mM MgATP (for WT: τb,µM/τb,5mM = 1.03 ± 0.07, n = 9; for K464A: τb,µM/τb,5mM = 0.95 ± 0.13, n = 7). (G and H) Representative traces showing gating of K464A and D1370N channels at 15 µM MgATP (after PKA removal). Prolonged bursts of K464A channels (Ikuma and Welsh, 2000) are not evident. Though variability among the four patches containing sufficiently few D1370N channels precluded pooling the data for burst distribution analysis, in none of those patches (analyzed separately) did introduction of a second component significantly improve the maximum likelihood fit.
Catalytic Site Mutations at NBD2 Slow Channel Closing from Bursts
In contrast to NBD1, corresponding mutations expected to impair hydrolysis in NBD2 substantially slowed exit of CFTR channels from open bursts. D1370N channels closed 4–5-fold more slowly than WT CFTR (Table I). The K1250A mutation more dramatically slowed channel closing from bursts, resulting in prolonged bursts lasting tens of seconds (Fig. 6 C; cf. Gunderson and Kopito, 1995; Ramjeesingh et al., 1999; Zeltwanger et al., 1999). Analysis of the macroscopic current relaxation upon nucleotide withdrawal in patches containing many K1250A channels indicates that their average burst duration was ~80 s in the presence of PKA (see below, Fig. 10, E and G) but ~40 s after PKA had been removed (Fig. 3 B), at least two orders of magnitude longer than bursts of WT channels under the same conditions (Fig. 3, A vs. B; Table I). We also found that mutation of the conserved Walker B glutamate in NBD2, E1371, to serine (to mimic the native equivalent residue in NBD1) caused a marked slowing of channel closing from bursts (Fig. 6 D). Overall, these very different consequences of mutating corresponding key catalytic site residues in NBD1 and NBD2 suggest that, in the gating cycle of a normal WT CFTR channel, termination of the open burst is timed by an event occurring at the NBD2 catalytic site, likely hydrolysis of the nucleotide bound there.
Poorly Hydrolyzable ATP Analogs Can Open WT CFTR Channels, but Only at a Low Rate
Like closing from a burst, opening of a WT CFTR channel to a burst at saturating [MgATP] is also rate limited by a slow step, distinct from nucleotide binding (Fig. 2 D). Similarly, we have shown recently that in cardiac CFTR channels opening to a burst is rate limited by a Mg 2ϩ -dependent slow step that is distinct from, and follows, ATP binding (Dousmanis et al., 2002). To further investigate the nature of this slow step, we compared the opening kinetics of WT CFTR channels exposed to MgATP with those of the very same channels exposed to poorly hydrolyzable ATP analogs. In patches containing
hundreds of WT CFTR channels (Fig. 7 A), the superposition of numerous opening and closing transitions at saturating (5 mM) MgATP precluded kinetic analysis, but at 10 µM MgATP individual transitions could be identified and single-channel gating parameters extracted. All channels closed promptly upon ATP removal, and exposure to millimolar concentrations of MgATPγS, MgAMPPNP, or MgAMPPCP elicited measurable channel activity, but in all cases with less frequent bursts than seen at 10 µM MgATP (Fig. 7 A). For each analogue, we measured the rate of opening to a burst relative to that in 10 µM MgATP in the same patch (Fig. 7 B). Using the fact that at 10 µM MgATP rCO is 11.1 ± 3.1% (n = 3) of the maximal opening rate attainable at high [MgATP] (Fig. 2 D), we can express the relative opening rate for each analogue as percentage of maximal: 5.3 ± 0.5% (n = 32) at 5 mM and 3.5 ± 0.9% (n = 7) at 0.5 mM MgAMPPNP, 3.8 ± 1.0% (n = 11) at 5 mM MgAMPPCP, and 6.0 ± 1.0% (n = 5) at 2 mM MgATPγS. Two findings argue that this observed low efficacy of millimolar MgAMPPNP in promoting channel opening cannot be overcome by applying even higher concentrations, as might be anticipated if the phosphate chain modification greatly reduced nucleotide binding affinity. First, assuming a hyperbolic relationship between relative opening rate and [MgAMPPNP] (as established for MgATP; Fig. 2 D), the two data points at 0.5 and 5 mM suggest that MgAMPPNP supports a Vmax of 5.4% of the maximal opening rate in MgATP, with a K0.5 = 280 µM. Second, addition of 200 µM MgAMPPNP to a 50 µM MgATP-containing solution (Fig. 8 A) reduced the rate of opening to bursts of WT CFTR channels by ~40% on average (Fig. 8 B). Ignoring the relatively infrequent openings due to the 200 µM MgAMPPNP itself, and treating MgAMPPNP as a simple competitive inhibitor of MgATP action, we estimate an apparent Ki for MgAMPPNP of 230 ± 70 µM (n = 3).
The apparent affinity of CFTR channels for MgAMPPCP is comparable. Thus, a mixture of 10 µM MgATP with 5 mM MgAMPPCP was found to be less than half as effective as 10 µM MgATP alone in opening the channels (rCO[5 mM AMPPCP + 10 µM ATP]/rCO[10 µM ATP] = 0.41 ± 0.05, n = 8). In this case, with such a high concentration of the analogue, many of the openings would have been due to MgAMPPCP binding alone, i.e., the analogue would have acted both as a ligand causing channels to open and as a competitive inhibitor of ATP-induced opening. From the ratio rCO[5 mM AMPPCP + 10 µM ATP]/rCO[10 µM ATP] and the measure of relative opening rate in 5 mM MgAMPPCP alone, we can estimate an effective dissociation constant for MgAMPPCP of ~360 µM.
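The competitive-inhibition estimate in the preceding paragraphs can be sketched explicitly. Assuming simple Michaelis-Menten kinetics with a purely competitive inhibitor, the relative opening rate is rI/r0 = (KS + S)/(KS(1 + I/Ki) + S), which can be solved for Ki; the function name is ours, and the example plugs in round numbers from the text (56 µM for ATP's K0.5, a ~40% average rate reduction), so the resulting point estimate (~160 µM) should be read only as a consistency check against the reported 230 ± 70 µM, not as a reanalysis.

```python
def ki_competitive(S_uM, I_uM, K_S_uM, relative_rate):
    """Apparent K_i of a competitive inhibitor, from the opening rate in its
    presence relative to the rate without it, at substrate concentration S:
        r_I / r_0 = (K_S + S) / (K_S * (1 + I / K_i) + S)
    solved for K_i."""
    r = relative_rate
    return I_uM * K_S_uM * r / ((K_S_uM + S_uM) * (1.0 - r))
```

Because MgAMPPCP and MgAMPPNP themselves open channels at a low rate, treating them as pure inhibitors is a simplification; the text's estimate accounts for the analogue-driven openings patch by patch.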
In principle, the gating we observe in MgAMPPNP alone could reflect contaminating MgATP, as ATP can form during AMPPNP synthesis if conditions are not kept strictly anhydrous (Thomas Billert, personal communication). Indeed, Carson and Welsh (1993) reported contamination of AMPPNP by up to 0.5% ATP, and HPLC analysis (JenaBioScience GmbH) of our Sigma-Aldrich AMPPNP preparations revealed a small contamination (≤1%) that might have been ATP. To examine this possibility, AMPPNP stock solutions were pretreated with hexokinase, which transfers the terminal phosphate from ATP, but not from AMPPNP (Yount, 1971), to glucose. Compared with parallel "mock" pretreatments omitting glucose, inclusion of glucose during AMPPNP treatment with hexokinase did not result in a consistent reduction in opening rate (rCO HK+/rCO HK− = 0.9 ± 0.2, n = 15; compared in the same patch). Also, MgAMPPNP from Sigma-Aldrich and MgAMPPNP from Roche Diagnostics were equally effective in stimulating channel opening (rCO Sigma/rCO Roche = 1.1 ± 0.2, n = 7; compared in the same patch), even though HPLC analysis of Roche AMPPNP samples showed no peak at retention times close to that of ATP. These results indicate that the channel openings in MgAMPPNP alone were not due to traces of contaminating ATP but resulted from interaction of CFTR with the imidotriphosphate.

(Figure 7 legend) Opening of WT CFTR channels by poorly hydrolyzable ATP analogs. (A) Representative recordings from four patches containing many channels, previously phosphorylated by PKA; each patch was exposed to 10 µM MgATP, to millimolar analogue, and to 5 mM MgATP (shown for only two patches). Maximum likelihood fits using the C–O–CF model gave, for MgATPγS, τb = 2.0 ± 0.6 s (n = 5); for MgAMPPNP, τb = 1.6 ± 0.2 s (n = 32); and for MgAMPPCP, τb = 0.36 ± 0.05 s (n = 11). (B) Summary of rCO values for nonhydrolyzable analogs normalized to rCO at 10 µM MgATP (which is ~11% of maximum, Fig. 2 D) in the same patch.
WT Channel Closing from Locked-open Bursts Is Faster After Activation in MgAMPPNP Alone than After Activation in Mixtures of MgAMPPNP with MgATP
During exposure of WT CFTR channels to MgAMPPNP or MgATPγS alone, channel closing from bursts, as well as opening, differs from that seen in MgATP (Fig. 7 A). Thus, during exposures to MgAMPPNP or to MgATPγS (but, interestingly, not to MgAMPPCP) apparently prolonged bursts could be observed. Assuming a single population of open bursts (i.e., a C–O–CF gating model, possibly an oversimplification) and applying our standard kinetic analysis, we obtained an approximation for the mean burst duration in the presence of each analogue (e.g., Table I D) and normalized it to that at 10 µM MgATP in the same patch, yielding average ratios of 7.5 ± 3.5 (n = 4), 4.7 ± 0.6 (n = 32), and 1.2 ± 0.2 (n = 11) for MgATPγS, MgAMPPNP, and MgAMPPCP, respectively. In contrast to this 5–10-fold average lengthening of bursts in the presence of MgAMPPNP or MgATPγS alone, mixtures of either analogue with MgATP have been shown to "lock open" WT CFTR channels in prolonged bursts 50–100-fold longer than those in MgATP alone (Gunderson and Kopito, 1994; Hwang et al., 1994; Zeltwanger et al., 1999; Csanády et al., 2000; see also Fig. 10, A and D). To address whether this quantitative difference reflects a direct influence of the simultaneous presence of MgATP and analogue on burst duration, we prephosphorylated WT channels, removed the PKA, and then directly compared closing after exposure to MgATP alone, to MgAMPPNP alone, or to a mixture of MgAMPPNP and MgATP, applied in random order (Fig. 9, inset). For each condition, current traces after nucleotide withdrawal were summed to give ensemble pseudomacroscopic current relaxations (Fig. 9, main panel), which revealed that closing after washout of MgAMPPNP alone (Fig. 9, red trace) was clearly slower than after exposure to MgATP alone (Fig. 9, black trace), in qualitative agreement with the standard kinetic analysis above. However, the relaxation after removal of MgATP + MgAMPPNP (Fig.
9, blue trace), although biexponential, was far slower than expected if the two components reflected closing of two populations of channels, each one independently activated by either just MgATP or just MgAMPPNP. Indeed, the slow component was visibly slower (τs = 37 s) than the decay after removal of MgAMPPNP alone (~7 s), suggesting that the combined presence of MgATP and MgAMPPNP further stabilized the locked-open channel burst states. This synergy cannot be explained by models in which timing of closing depends on interactions of each channel with a single nucleotide. It implies that interactions between a single channel and at least two nucleotide molecules determine the rate of exit from a locked-open burst.
Closing from Locked-open Bursts Is Faster for K464A Mutants than for WT Channels
Although the K464A mutation did not alter open burst duration of channels exposed to MgATP (Figs. 2 E, 4, and 5), regardless of phosphorylation status (Fig. 4; Table I), it did significantly reduce the duration of certain unusually prolonged bursts. For example, nucleotide withdrawal from patches containing hundreds of WT CFTR channels opened with a MgATP plus MgAMPPNP mixture in the presence of PKA resulted in a slow biexponential current decay (Fig. 10 A). On average, 87 ± 4% (n = 18) of the total amplitude of the relaxation was attributable to the slow component (Fig. 10 C), whose time constant, τs = 48.3 ± 4.5 s (n = 18; Fig. 10 D), provides an estimate of the mean dwell time in the locked-open burst. The much smaller fast component (fractional amplitude, af = 0.13) had a time constant, τf = 2.0 ± 0.5 s (n = 18), not greatly different from the normal burst duration of WT channels opened by MgATP in the presence of PKA. For K464A channels, on the other hand (Fig. 10 B), the slow component comprised a somewhat smaller fraction (as = 0.63 ± 0.04, n = 16, Fig. 10 C) of the current
decay, but its time constant, τs = 9.5 ± 0.8 s (n = 16; Fig. 10 D), was markedly reduced. The smaller fractional amplitude of the slow component for K464A channels can be explained by this observed shortening of their locked-open bursts without the mutation markedly altering the frequency of entry into such bursts. At steady state, the average fraction of time a single channel spends in a particular state is identical to the average fraction of the population of such channels that occupies that state at any instant. Accordingly, because at the moment of nucleotide withdrawal (steady state in the presence of MgATP + MgAMPPNP + PKA) a fraction af of open channels was in short bursts (lasting τf seconds), and a fraction as was in long bursts (lasting τs seconds), each single channel may be expected to have spent (at steady state) af of its open time in short bursts, and as of its open time in long bursts. So the ratio of the fast to slow fractional amplitudes, af/as, equals the ratio nτf/mτs, where n and m are the average numbers of short and long bursts, respectively, entered by each channel in a given (sufficiently long) time interval. We may therefore calculate m/(n + m) ≈ 0.2 for WT channels, i.e., 1 in every ~5 openings results in a long burst. The analogous estimate for K464A channels gives an average of 1 locking in every ~6 openings. Thus, although mutation of the Walker A lysine at NBD1 substantially shortened locked-open bursts, the mutation apparently altered the frequency of entering these locked bursts little, if at all.
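The amplitude-and-time-constant arithmetic above can be checked directly. The helper below is ours; the WT values (af = 0.13, as = 0.87, τf = 2.0 s, τs = 48.3 s) come from the Fig. 10 fits quoted in the text.

```python
def locked_fraction(a_f, a_s, tau_f, tau_s):
    """Per-opening probability of entering a long (locked) burst.
    From a_f / a_s = (n * tau_f) / (m * tau_s) it follows that
        m / (n + m) = 1 / (1 + (a_f / a_s) * (tau_s / tau_f))."""
    return 1.0 / (1.0 + (a_f / a_s) * (tau_s / tau_f))

# WT: a_f = 0.13, a_s = 0.87, tau_f = 2.0 s, tau_s = 48.3 s
# gives ~0.22, i.e., roughly 1 locking in every ~5 openings.
```

The same expression applied to the K464A fit parameters underlies the "1 in every ~6 openings" estimate; note it is sensitive to τf, which is less well determined for the mutant.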
(Figure 9 legend) Exit from MgAMPPNP-locked burst states is slower when bursts are initiated in the presence of MgATP. Patches with hundreds of prephosphorylated WT CFTR channels were repeatedly subjected to ~30-s long exposures to nucleotides (as in inset), in varied sequence. Each trace in the main figure is the sum of 21 recordings, synchronized upon nucleotide washout (arrow; also in inset), from 12 patches, each exposed to 0.5 mM MgATP, 5 mM MgAMPPNP, or 0.5 mM MgATP + 5 mM MgAMPPNP alternately, an equal number of times. Exponential decay fit parameters are: after MgATP, a = 33 pA, τ = 0.8 s; after AMPPNP, single a = 8 pA, τ = 6.8 s; double af = 6 pA, as = 6 pA, τf = 0.7 s, τs = 8.8 s; after MgATP + MgAMPPNP, af = 20 pA, as = 18 pA, τf = 2 s, τs = 36.6 s. As solution exchange time was 0.5–1 s, fast components do not accurately reflect channel closing.

The K464A mutation also shortened (Fig. 10, E–G) the similarly prolonged bursts of NBD2 mutant K1250A channels exposed to MgATP alone (Fig. 6 C). The control record (Fig. 10 E) illustrates the slow decay of macroscopic current after washout of MgATP and PKA from a patch containing hundreds of K1250A CFTR channels. […], and hence the mean open-burst dwell time, was less than half that of channels bearing the K1250A mutation alone (Fig. 10 G).
Closing from Bursts During Activation by Poorly Hydrolyzable Analogs Alone Is Faster for K464A Mutants than for WT Channels
Like WT CFTR (Fig. 7), mutant K464A channels could be opened by millimolar concentrations of the analogs MgAMPPNP or MgATPγS alone (Fig. 11), with rates of opening to bursts of 1.5 ± 0.2% (n = 4) at 0.5 mM and 2.9 ± 0.3% (n = 4) at 5 mM MgAMPPNP, and 5.0 ± 0.6% (n = 8) at 2 mM MgATPγS, of the maximal rate at saturating [MgATP], values not very different from those for WT channels under the same conditions. However, the 5–10-fold prolongation of WT bursts by these analogs (Fig. 7 A) was not evident in K464A channels (Fig. 11): for K464A channels opened by MgAMPPNP or MgATPγS alone, the mean τb values were only 1.1 ± 0.2 (n = 8) or 2.2 ± 0.5 (n = 8) times larger, respectively, than at 10 µM MgATP. Because this consequence of the K464A mutation is manifest during exposure to essentially nonhydrolyzable ATP analogs, it cannot be ascribed to any failure of the mutant channel to hydrolyze nucleotide at the NBD1 catalytic site, but instead must be attributed to the alteration of NBD1 structure per se.
D I S C U S S I O N
We may draw several conclusions from these analyses of gating kinetics of WT and of NBD mutant CFTR channels, in the presence of MgATP and/or of poorly hydrolyzable analogs: (a) nucleotide binds at both NBD1 and NBD2 catalytic sites before channel opening; (b) the slow opening transition, after nucleotide binding, is highly sensitive to the structures of the β-γ phosphate bridging group and of the γ phosphate; (c) no further nucleotide binding is required to terminate an open burst; (d) hydrolysis of the nucleotide at NBD2 precedes normal, rapid closing from bursts; (e) if that hydrolysis is prevented, the structure of the NBD1 catalytic site and of the nucleotide bound there can modulate the rate of exit from the resulting prolonged (locked) open burst. Together, these findings define asymmetric yet interacting roles for the two NBDs in controlling MgATP-dependent channel gating, and they suggest schemes that describe the underlying molecular mechanisms. In the following, we critically evaluate the information and arguments upon which these conclusions are based.
CFTR Cl Ϫ Channel Opening to a Burst
(Figure 11 legend) Gating of prephosphorylated K464A channels by poorly hydrolyzable ATP analogs, as indicated. Unlike WT (Fig. 7 A), K464A burst duration was not increased during exposure to MgAMPPNP (A and B, τb = 270 ± 50 ms, n = 8), and was only slightly increased during exposure to ATPγS (C, τb = 655 ± 170 ms, n = 8), compared with bursts in MgATP (τb = 276 ± 21 ms, n = 16) in the same patches. Note that, due to the lower apparent affinity of K464A for MgATP (Fig. 2), the relative opening rate of mutant channels at 10 µM MgATP averaged only 2.3 ± 0.8% (n = 3) of that in saturating MgATP (compared with ~11% for WT), so the opening rate of K464A channels was similar in the presence of millimolar concentrations of the poorly hydrolyzable analogs or of 10 µM MgATP.

Opening is preceded by nucleotide binding to both catalytic sites. That CFTR channel opening rate varies with [MgATP] (Gunderson and Kopito, 1994; Venglarik et al., 1994; Winter et al., 1994; Zeltwanger et al., 1999; Csanády et al., 2000) implies that channel opening to a burst requires at least one MgATP binding step. The saturable dependence of rCO on [MgATP] (Fig. 2 D) means that some (relatively slow) step unrelated to nucleotide binding sets the maximal rate of channel opening at saturating [MgATP]. In principle, that slow step could precede or follow MgATP binding. But, because we find that the maximal rate of opening to bursts is sensitive to the structure of the activating nucleotide (e.g., to details of the polyphosphate chain, see above, or to the presence of an 8-azido moiety: rCOmax Mg8-azidoATP/rCOmax MgATP = 0.42 ± 0.04, n = 17; unpublished observation), this suggests that the rate-limiting opening step follows nucleotide binding. These considerations support our interpretation of CFTR channel opening to a burst as reflecting a relatively slow conformational change after relatively rapid nucleotide binding. In principle, NBD mutations could alter binding or opening steps, or both. Our results show that mutations within the Walker motifs of either NBD1 (K464A) or NBD2 (D1370N,
K1250A) reduce the apparent affinity of the MgATP binding site(s) involved in channel opening (Figs. 2 and 3), but (at least for K464A and D1370N) affect the maximal opening rate little (Table I). This combination of effects could be explained if each mutation were to cause a similar energetic destabilization of both the closed channel with MgATP already bound (i.e., reduce MgATP–CFTR binding energy) and the transition state of the subsequent slow opening step (Fersht, 1999). But is it reasonable to expect these mutations to reduce nucleotide–CFTR binding energy? Structural information and nucleotide photolabeling data suggest that it is. Because the Walker A lysine interacts extensively with the β and (when present) γ phosphate groups of the bound nucleotide in all NBD X-ray structures, replacing the large positively charged side chain with a methyl group may be expected to reduce substrate binding energy by both steric and electrostatic mechanisms (Junop et al., 2001). Accordingly, although no major difference in [α-32P]8-azidoATP photolabeling at 0°C was detected between WT and K464A/K1250A or K464A CFTR, the K464A mutation alone greatly reduced photolabeling of NBD1 by µM [α-32P]8-azidoATP at 37°C (Aleksandrov et al., 2002) and virtually abolished stable (i.e., surviving extensive post-incubation washing) photolabeling at 30°C (unpublished data). So, possibly, the lysine does contribute significant nucleotide binding energy but only after a temperature-sensitive conformational change. The Walker B aspartate may also be expected to contribute binding energy for the MgATP complex, since it is thought to help coordinate the catalytic-site Mg2+ ion in NBDs, as also observed in F1-ATPase (Weber et al., 1998). Thus, the presence of Mg2+ enhanced labeling of NBD2 when CFTR was incubated (at 0° or 37°C) with [α-32P]8-azidoATP (Aleksandrov et al., 2002).
Plausibly, then, both Walker A lysine and Walker B aspartate do normally contribute to the binding energy of the nucleotide with which they interact. Therefore, the simplest interpretation of the reduced apparent affinity with which MgATP elicits opening of K464A and D1370N (and K1250A) mutants compared with WT is that the mutations impair nucleotide binding at two different sites, such that at subsaturating [MgATP] channel opening is limited by MgATP binding at NBD1 in K464A, but at NBD2 in D1370N (and K1250A). That nucleotide binding at either NBD can be made rate limiting suggests that in WT CFTR both NBD1 and NBD2 catalytic sites need to be occupied before a channel can open. This provides a straightforward explanation for the similar consequences for channel opening of introducing mutations into the otherwise structurally and functionally divergent NBD1 and NBD2 sequences. Because we also show here that two distinct nucleotides (ATP and AMPPNP) interact with a single WT CFTR channel to determine (locked-open) burst length (Fig. 9), and yet the lack of influence of [MgATP] on burst duration (Figs. 2 E, 3 B, and 4) implies that all nucleotide binding steps occur before the channel opens, these findings lend independent support to our conclusion that a CFTR channel normally opens to a burst only after MgATP has bound to both NBD1 and NBD2 active sites.
Before accepting that conclusion, however, we must consider the possibility that nucleotide occupancy of a single "opening" site suffices to open a WT CFTR channel, and that mutations at the other catalytic site impair opening indirectly by allosterically influencing binding at the opening site. Structural information on NBDs (Armstrong et al., 1998; Hung et al., 1998; Diederichs et al., 2000; Hopfner et al., 2000; Chang and Roth, 2001; Gaudet and Wiley, 2001; Karpowich et al., 2001; Yuan et al., 2001; Locher et al., 2002) argues strongly that K464 and D1370 (and K1250) are, indeed, part of two distinct catalytic sites. Allosteric interactions between CFTR's two NBDs (compare Powe et al., 2002) could, therefore, permit the K464A, D1370N, and K1250A mutations to all affect the same binding site. In CFTR's close ABC-C family relatives, NBD2 reportedly does allosterically influence NBD1 in SUR1 (Ueda et al., 1997) and MRP1 (Hou et al., 2000); and, in MRP1, there also appears to be a reciprocal allosteric action of NBD1 on NBD2 (Gao et al., 2000; Hou et al., 2002). However, direct measurements of nucleotide occupancy in WT CFTR (assayed by photolabeling with [α-32P]8-azidoATP or [α-32P]8-azidoADP at 37°C) provide no support for allosteric interactions, as occupancy at NBD1 appeared unaffected by mutation of K1250 in NBD2, and occupancy at NBD2 appeared unaffected by mutation of K464 in NBD1 (Aleksandrov et al., 2002).
Furthermore, several observations argue that nucleotide binding at NBD1 alone is not sufficient for WT CFTR opening, and hence that NBD1 could not be the sole "opening" site. Thus, WT channel opening rate drops to virtually zero within seconds after nucleotide removal (e.g., Fig. 3 A), whereas labeled 8-azidoATP remains bound at the NBD1 catalytic site throughout several minutes of nucleotide-free wash before irradiation (Aleksandrov et al., 2001, 2002; Basso et al., 2002). Moreover, covalent modification of the NBD2 Walker A sequence (Cotten and Welsh, 1998), and the K1250A (Fig. 3 C) and the D1370N (Fig. 2 D) mutations (~8–9 Å apart; e.g., Hung et al., 1998), all reduce apparent affinity for MgATP activation of opening. The simplest explanation is that these three disparate modifications all directly alter the NBD2 ATP binding site rather than that they all allosterically affect the NBD1 site. Most likely, therefore, the rightward shift in opening rate reflects the lower affinity of a binding step, required for channel opening, at NBD2 itself. Nor is it likely that, in WT channels, NBD2 could be the sole "opening" site. A major reason is that WT channels with a single ATP molecule bound at the NBD2 catalytic site must be rare, given the higher nucleotide affinity at NBD1 than at NBD2 (Aleksandrov et al., 2001, 2002; Basso et al., 2002). Indeed, because in WT CFTR NBD1 appears to remain nucleotide-bound throughout many gating cycles, the nucleotide-binding step that controls the timing of WT channel opening at subsaturating [MgATP] most likely occurs at NBD2 (compare Gunderson and Kopito, 1995). However, we cannot rule out that, at low [MgATP], mutant K464A CFTR channels might open to bursts with only NBD2 occupied by nucleotide. In fact, although opening rates for WT and D1370N mutant CFTR channels (Fig.
2 D, blue and green symbols) are satisfactorily described by the Michaelis equation (i.e., opening limited by binding to a single site), the opening rates of K464A channels (Fig. 2 D, red symbols) at low (≤50 μM) [MgATP] are slightly higher than expected. If confirmed, these results would be consistent with the right-shifted K464A [MgATP]-rCO curve reflecting principally a reduced nucleotide affinity at NBD1 (now lower than the affinity at NBD2) and a low, but nonzero, opening rate of K464A mutant CFTR channels with nucleotide bound only at the unmodified NBD2 site.
Therefore, present evidence suggests that nucleotide normally binds to both of WT CFTR's NBDs before the channel opens, and that opening is limited by nucleotide binding at NBD2 in WT, D1370N, and K1250A CFTR channels, but probably by nucleotide binding at NBD1 in K464A CFTR channels.
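The Michaelis-type dependence of opening rate on [MgATP], invoked above for WT and D1370N channels, can be sketched numerically. This is a minimal illustration; the rate and Km values are placeholders chosen for clarity, not parameters fitted in this study.

```python
def opening_rate(mgatp_uM, r_max, km_uM):
    """Michaelis dependence of channel opening rate on [MgATP]:
    opening rate limited by nucleotide binding to a single site."""
    return r_max * mgatp_uM / (km_uM + mgatp_uM)

# A reduced apparent nucleotide affinity (larger Km) right-shifts the
# [MgATP]-opening-rate curve without changing the saturating rate
# reached at high [MgATP].
r_ref = opening_rate(50.0, r_max=0.3, km_uM=50.0)       # half-maximal: 0.15 s^-1
r_shifted = opening_rate(50.0, r_max=0.3, km_uM=200.0)  # right-shifted: 0.06 s^-1
```

At saturating [MgATP] both curves converge on r_max, which is why catalytic-site mutations that mainly lower nucleotide affinity can leave the maximal opening rate relatively intact.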
Does CFTR channel opening require ATP hydrolysis? What is the nature of the slow step that rate-limits opening of CFTR channels to bursts at saturating [MgATP]? We and others initially proposed that opening is coupled to nucleotide hydrolysis because, at submillimolar concentrations, MgAMPPNP and MgATPγS seemed unable to open CFTR channels that could be opened readily by MgATP and other hydrolyzable nucleoside triphosphates (Anderson et al., 1991; Carson and Welsh, 1993; Gunderson and Kopito, 1994). However, millimolar concentrations of the poorly hydrolyzable analogs AMPPNP and ATPγS were recently shown to support CFTR channel opening (Aleksandrov et al., 2000), and we confirm that finding here. But, by directly comparing gating of the same channels, in the same patch, during exposure to MgAMPPNP, MgAMPPCP, or MgATPγS, and to MgATP, we find that at concentrations of these analogs expected to be saturating (Figs. 7 and 8; see also Weinreich et al., 1999; Aleksandrov et al., 2001, 2002) the opening rates of WT and K464A (Fig. 11) channels are only ~5% of that reached at saturating [MgATP]. The low frequency of bursts elicited by MgAMPPNP or MgATPγS probably accounts for earlier failures to observe opening in patches with few (≤3) channels during brief exposures to these analogs (Gunderson and Kopito, 1994; Hwang et al., 1994). The failure in patches with many channels, studied at ~35°C (Anderson et al., 1991; Carson and Welsh, 1993), is perhaps attributable to the weakened interactions between CFTR and MgAMPPNP or MgATPγS at that higher temperature (Schultz et al., 1995; Mathews et al., 1998; Aleksandrov et al., 2000; but cf. Quinton and Reddy, 1992; Hwang et al., 1994). Interestingly, ATP-bound CFTR channels in the absence of Mg2+ ions, like MgAMPPNP-bound channels, were found to enter open bursts at a similarly low rate, ~2% of that seen with MgATP-bound channels (Dousmanis et al., 2002).
Although it is unlikely that CFTR can hydrolyze MgAMPPNP, MgAMPPCP, and MgATPγS, at up to 5%, or free ATP at up to 2%, of the rate at which it can hydrolyze MgATP, we cannot rule out on the basis of these results alone that hydrolysis is necessary for channel opening.
Results of mutagenesis experiments provide a stronger challenge to the idea that channel opening requires hydrolysis. Whereas the K464A mutation in NBD1 has been reported to reduce ~20-fold the ATPase activity of purified CFTR (Ramjeesingh et al., 1999), we find that the same mutation diminishes maximal opening rate by only <50%, similar to effects of other NBD1 catalytic site mutations (Figs. 2 and 5). Moreover, recent comparisons of photolabeling with [α-32P]8-azidoATP and [γ-32P]8-azidoATP suggest that the catalytic activity of NBD1 in CFTR may be very low, even in WT (Aleksandrov et al., 2002; Basso et al., 2002). This might be related to the nonconserved catalytic-site residues, S573 instead of the Walker B glutamate and S605 instead of the "switch" histidine (Hung et al., 1998; Schneider and Hunke, 1998), as well as, given recent head-to-tail NBD dimers with composite catalytic sites (Hopfner et al., 2000; Locher et al., 2002), to the unusual LSHGH signature sequence in NBD2 (Jones and George, 1999). So it seems unlikely that ATP hydrolysis at NBD1 controls opening of WT CFTR channels.
For NBD2, comparisons of hydrolytic and gating rates are more difficult. For example, our measurements of D1370N CFTR gating show a twofold reduction in maximal opening rate (Table I), but no ATPase measurements are available for D1370N CFTR. However, the corresponding mutation abolished hydrolytic activity in several other ABC ATPases (e.g., Koronakis et al., 1995; Urbatsch et al., 1998; Hrycyna et al., 1999), though it only halved ATPase Vmax in the DNA mismatch repair protein, MutS (Junop et al., 2001). On
the other hand, ATPase measurements on purified CFTR have shown that the K1250A mutation abolished ATP hydrolysis (Ramjeesingh et al., 1999), whereas opening of K1250A channels was impaired, but not abolished, at normal [MgATP] (Gunderson and Kopito, 1995; Ramjeesingh et al., 1999; Powe et al., 2002); indeed, the greatly reduced apparent affinity for MgATP we observed (Fig. 3 C) implies that the maximal opening rate of K1250A may be several-fold greater than that measured at 1–2 mM MgATP. Although these mutagenesis results cannot rule out that opening requires hydrolysis at the NBD2 catalytic site, this is rendered less likely by our conclusions that hydrolysis at NBD2 is linked to termination, rather than initiation, of a burst (see below), and that all nucleotide binding occurs before opening (above). So, how nucleotides bound at the NBDs allow a CFTR channel to exit its long shut state is still only partially answered. We can conclude that the divalent ion and detailed structure of the β and γ phosphate groups in the activating Mg2+-nucleotide complex influence the energy barrier encountered by a nucleotide-bound channel entering an open burst. One possible interpretation, consistent with results presented here and with the high sensitivity of opening rate to temperature (Aleksandrov and Riordan, 1998; Mathews et al., 1998), is that the opening step corresponds to formation of a prehydrolysis complex (Dousmanis et al., 2002), most likely at the NBD2 catalytic site.
CFTR Cl− Channel Closing from a Burst
Closing from bursts occurs without further MgATP binding. We found no clear dependence of burst duration on [MgATP] (10 μM to 5 mM) in WT CFTR (Figs. 2 E, 3 A, and 4, B and C) or in K464A, D1370N, or K1250A mutant channels (Figs. 2 E, 3 B, and 4, E–H), indicating that all ATP binding events precede channel opening and no further binding to the open channel is needed to complete the gating cycle. Rate of burst termination was similarly essentially independent of [MgATP] in most other studies (Gunderson and Kopito, 1994; Venglarik et al., 1994; Winter et al., 1994; Csanády et al., 2000), but a ≤50% prolongation of average burst duration at [MgATP] above 1 mM (Zeltwanger et al., 1999) was interpreted as reflecting entry into a second, more stable bursting mode favored by MgATP binding at a lower affinity site, proposed to be NBD2. This [MgATP] dependence of burst duration was reported to be exaggerated in K1250A mutant channels, in which brief bursts were observed at 10 μM MgATP and only at higher concentrations did the characteristic (e.g., Fig. 6 C, above) prolonged bursts appear (Zeltwanger et al., 1999; Ikuma and Welsh, 2000; Powe et al., 2002). Though we occasionally observed brief bursts in K1250A channels at 10 μM MgATP (not illustrated), these were very rare, with a frequency of occurrence not demonstrably different from that in nominally MgATP-free bath solution (rCO,10 μM/rCO,bath soln = 0.72 ± 0.12, n = 6). Thus, brief bursts of K1250A channels might reflect infrequent nucleotide-independent events, unrelated to the physiological gating cycle of WT channels, an interpretation consistent with those brief bursts surviving mutation of the Walker A lysine in either, or both, NBDs (Zeltwanger et al., 1999; Ikuma and Welsh, 2000; Powe et al., 2002).
Moreover, our finding that D1370N channels at low (15 μM) [MgATP] both enter and exit bursts more slowly on average than WT channels (Figs. 2, D–E, and 4 H) demonstrates that this single NBD2 mutation impacts every gating cycle, regardless of the fact that D1370N channels have an intact WT NBD1 sequence. This provides further evidence against the existence, at low [MgATP], of gating cycles in which nucleotide interacts exclusively with NBD1 (Gadsby and Nairn, 1994, 1999; Zeltwanger et al., 1999).
Closing from bursts is normally preceded by hydrolysis at the NBD2 catalytic site. A pronounced increase of WT CFTR channel burst duration occurs when the poorly hydrolyzable analogue MgAMPPNP (Figs. 8 and 9 A; Gunderson and Kopito, 1994; Hwang et al., 1994), or the ATPase inhibitor orthovanadate (VO4) (Baukrowitz et al., 1994; Gunderson and Kopito, 1994), is added to the MgATP used to activate the channels. Both results suggest that interfering with a hydrolytic cycle delays channel closure from bursts. In other ABC ATPases, VO4 inhibits steady-state hydrolytic activity by forming a tightly bound MgADP-VO4 complex at the active site, mimicking the bipyramidal pentacovalent transition-state intermediate (e.g., Urbatsch et al., 1995; Chen et al., 2001). In CFTR too, a trapped MgADP-VO4 complex or tightly bound, nonhydrolyzable, MgAMPPNP molecule might be responsible for locking channels in open bursts by preventing normal completion of a hydrolytic cycle. Comparison of the kinetics of photoaffinity labeling and of channel gating after exposure of WT CFTR to VO4 in the presence of Mg-8-azidoATP suggests that the Mg-8-azidoADP-VO4 complex responsible for burst prolongation more likely corresponds to the 8-azido nucleotide labeling of NBD2 than of NBD1: current decays much faster (τ ~30 s) than [α-32P]8-azido-nucleotide dissociates from NBD1 (~15 min), whereas photolabeling at NBD2 is more labile and is lost if a wash precedes UV irradiation (Aleksandrov et al., 2001, 2002). Similarly, presumed disruption of ATP hydrolysis by targeted mutation of key active site residues also slowed exit from bursts, but only if the mutations were at the NBD2 catalytic site (Fig. 5 vs. Fig. 6). Together, these observations strongly suggest that in a locked-open WT channel the nonhydrolyzable analogue or ADP-VO4 complex that delays burst termination is tightly bound at the NBD2 site, implying that normal, rapid closure from a burst is preceded by hydrolysis of the nucleotide bound at NBD2 (Gunderson and Kopito, 1995). This interpretation has been challenged on three grounds: (a) that AMPPNP binds more tightly to NBD1 than to NBD2 (Aleksandrov et al., 2001, 2002), (b) that CFTR channel closing is only weakly temperature dependent (Aleksandrov and Riordan, 1998), and (c) that CFTR channel gating is an equilibrium process (Aleksandrov and Riordan, 1998).
(a) High affinity interaction of 8-azidoAMPPNP (or AMPPNP) with NBD1 in WT CFTR (Aleksandrov et al., 2001, 2002) does not rule out an action of the analogue at NBD2 in locked-open channels. In fact, such photolabeling data, obtained at 37°C (at which temperature effects of AMPPNP on channel gating are diminished: Schultz et al., 1995; Mathews et al., 1998) on unphosphorylated, and hence mostly closed (Linsdell and Hanrahan, 1998), CFTR channels, are probably unable to detect the small fraction of CFTR molecules occupying locked-open burst states under those conditions. (b) The rate of CFTR channel closing from bursts is strongly temperature dependent (Mathews et al., 1998; Csanády et al., 2000), as expected if this transition is rate limited by hydrolysis. The analysis that suggested weak temperature dependence (Aleksandrov and Riordan, 1998) did not distinguish between the duration of bursts, bounded by relatively long interburst closed times, and the duration of intraburst openings, bounded by brief flickery closures which are probably unrelated to channel interactions with ATP (Table I), as also indicated by their persistence in locked-open channels long after washout of all nucleotides (e.g., Figs. 6, C and D, and 9; Zeltwanger et al., 1999; Dousmanis et al., 2002). (c) The conclusion that CFTR channels gate near equilibrium was derived from analysis of the temperature dependence of Po, starting from the assumption that Po reports equilibrium occupancy of closed and open channel states, as in a conventional ligand-gated channel in which closing is simply the opening reaction in reverse (Del Castillo and Katz, 1957). Gating of WT CFTR channels, however, violates microscopic reversibility, as evident from the temporal asymmetry (Gunderson and Kopito, 1995) of transitions between the closed state and two open burst states (distinguishable by their slightly different conductance levels in filtered records).
This indicates that the open-burst and closed-interburst states of CFTR channels are not at thermodynamic equilibrium, and that an external source of free energy drives the transitions preferentially in one direction around a cycle. The likely energy source is a MgATP complex, bound by the closed channel before opening to a burst and released only in the form of hydrolysis products as the channel returns to the closed-interburst conformation, so ensuring that the closing reaction is not the reverse of opening.
Unlike other ABC proteins, which hydrolyze ATP to actively transport substrates against their electrochemical potential gradient, CFTR catalyzes dissipative electrodiffusive Cl− ion movement. Instead, CFTR might harness energy released by ATP hydrolysis to drive conformational changes that would otherwise occur only rarely, making the rate of hydrolysis a timing device, analogous to GTP hydrolysis by G-proteins (Manavalan et al., 1995). Our results are consistent with a scheme (Fig. 12 A) in which the transition to an open burst, after two MgATP complexes are bound, has a large negative ΔG, so that the reverse reaction (exit from the burst with ATP still bound) is very slow. Hydrolysis of the bound triphosphate makes the C→O transition of the channel "reversible," by speeding exit from the otherwise stable open burst states, though via a pathway distinct from that of entry to the burst. In contrast, when hydrolysis is prevented (by the presence at NBD2 of AMPPNP or ADP-VO4, or of catalytic site mutations), the channel remains locked in the "O" states since exit from the burst can then occur only through very slow reversal (in the proper thermodynamic sense) of the transition that initiated the burst.

Figure 12. (A) Simplified scheme illustrating proposed linking of steps in WT CFTR channel gating, and nucleotide binding and hydrolytic cycles. Yellow ovals depict MgATP complexes; the smaller red oval is inorganic phosphate, Pi (colored orange in the prehydrolysis complex on the open channel); the CFTR protein is represented as a green (NBD1) and blue (NBD2) semicircle, with shape altered (signifying induced-fit conformational changes in the NBDs; Karpowich et al., 2001) upon nucleotide binding. "C" represents closed interburst states of the channel and "O" symbolizes the collection of states during open bursts. Thickness and length of arrows indicate relative rates of individual steps. There is no evidence for strict sequential binding of the two MgATP complexes, but the alternative pathway to the doubly occupied closed state, in which nucleotide binds first at NBD2, probably occurs infrequently in WT CFTR (though not necessarily in mutants) and so was omitted for clarity. (B) Cartoon illustrating a possible physical interpretation of the scheme in A, in which NBD dimerization couples ATP binding and hydrolysis at the catalytic sites to opening and closing of the channel pore. Two semicircles represent NBD1 (green) and NBD2 (blue), and the transmembrane domains are represented by straight-line segments connected to the NBDs. Closed and Open channels are indicated by converging or near-parallel transmembrane domains, respectively. Catalytic sites and Cl− permeation pathway are structurally connected such that NBD dimer formation results in opening of the channel pore.
However, the scheme as drawn suggests tight coupling between channel gating and ATP hydrolysis, which is inconsistent with the largely unaltered gating of the catalytically impaired K464A mutant (with ATPase Vmax apparently reduced ~20-fold; Ramjeesingh et al., 1999). In fact, evidence suggests that part of the hydrolysis catalyzed by WT CFTR may be uncoupled from channel function. Thus, while phosphorylation of cell-free CFTR by exogenous PKA is an absolute requirement for channel opening (e.g., Linsdell and Hanrahan, 1998), some ATP hydrolysis by partially purified CFTR can be detected also before PKA treatment (Li et al., 1996; Aleksandrov et al., 2002). Levels of phosphorylation above basal might therefore be required to couple events at the NBDs with gating of the transmembrane pore. Even higher levels of steady state phosphorylation could prolong normal hydrolytic bursts (Table I). Channel closing from locked-open bursts in nonhydrolytic conditions is modulated by NBD1. Under normal hydrolytic conditions, burst duration was unaffected by mutation of the NBD1 Walker A lysine (Figs. 2, 4, and 5), or of other NBD1 catalytic site residues (Fig. 5), suggesting that the NBD1 catalytic site structure does not, in that case, influence the step that rate-limits burst termination. But when hydrolysis (at NBD2) was prevented, by supplying nucleotide resistant to hydrolysis (Figs. 9 and 10, A–D; Fig. 7 vs. Fig. 11), by adding VO4, or by mutating the NBD2 Walker A lysine (K1250A; Fig. 10, E–G), the K464A mutation resulted in less prolonged bursts. Very similar reduction of locked-open burst duration by the K464A mutation has been described recently in NIH3T3 and CHO cells (Powe et al., 2002). In addition, the inferred absence of a Mg2+ ion from the NBD1-nucleotide complex similarly appeared to shorten the locked-open bursts entered by CFTR channels after Mg2+ withdrawal (Dousmanis et al., 2002).
A further indication that the detailed structure of the NBD1-nucleotide complex influences the rate of nonhydrolytic exit from bursts ("unlocking") is that burst duration was prolonged less when MgATP was replaced by just MgAMPPNP than when it was replaced by a mixture of MgATP and MgAMPPNP (Fig. 9); this suggests that slowing of burst termination is greatest when a single CFTR channel interacts simultaneously with an MgATP and an MgAMPPNP complex. The MgAMPPNP complex that prevents hydrolytic burst termination (in both conditions) is unlikely to reside at NBD1, because neither introduction of mutations expected to interfere with hydrolysis at NBD1 (Figs. 2 E, 4, D–F, and 5) nor the presence of a tightly bound nucleotide at NBD1 (~15-min dwell time of labeled 8-azido nucleotide at NBD1; Basso et al., 2002) results in prolonged bursts. We therefore infer that MgAMPPNP bound at NBD2 prevents hydrolysis there (and hence rapid burst termination), while the presence at NBD1 of MgATP, rather than another MgAMPPNP complex, increases dwell time in the locked-open burst.
Although the primary cause of burst prolongation in all these instances appears to be an inhibition of hydrolysis at NBD2, we conclude that the energy barrier for unlocking is influenced by the molecular structure of the NBD1 catalytic site with its bound Mg2+-nucleotide complex. Whether this role of NBD1 is direct, or indirect via allosteric interaction with NBD2, remains to be determined.
Possible Functional Significance of NBD Dimerization
Though the scheme in Fig. 12 A is oversimplified (e.g., it considers neither the short-lived "flickery" closed state, nor the role of phosphorylation by PKA), it nevertheless can account for the data on WT and mutant CFTR channel gating described here. Moreover, we have presented evidence to support each of the components of this simplified scheme, i.e., two nucleotide-binding steps preceding a slow opening step, relatively rapid closing via hydrolysis at NBD2, and much slower nonhydrolytic closing. Unfortunately, the difficulty of collecting adequate numbers of CFTR's relatively infrequent gating events, combined with the lack of biochemical information on CFTR mutants (whether D1370N is capable of ATP hydrolysis, for instance), precludes extraction of the many (≥7) rate constants from fits to data, even for a scheme as simple as the one in Fig. 12 A. However, simulations of that scheme readily reproduce the observed dependence of rCO (Fig. 2 D) or Po (Fig. 3 C) on [MgATP] for WT CFTR when rate constants are chosen to yield intrinsic dissociation constants of ~10 and 40 μM at NBD1 and NBD2, respectively, maximal opening (~0.3 s−1) and (hydrolytic) closing (~3 s−1) rates as given in Table I, and a much slower nonhydrolytic closing (reverse of opening) rate
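A rapid-equilibrium sketch of the scheme in Fig. 12 A illustrates how such simulations behave. The dissociation constants (~10 and 40 μM) and the opening (~0.3 s−1) and hydrolytic closing (~3 s−1) rates follow the values quoted above; treating the two binding steps as at rapid equilibrium is a simplification made here for brevity, not the authors' fitting procedure.

```python
def p_open(mgatp_uM, kd1=10.0, kd2=40.0, k_open=0.3, k_close=3.0):
    """Steady-state open probability for the simplified gating cycle
    C <-> C.ATP <-> C.(ATP)2 -> O -> C, with nucleotide binding at both
    NBDs treated as at rapid equilibrium, a slow opening step from the
    doubly occupied closed state, and closing via hydrolysis at NBD2."""
    occupancy = (mgatp_uM / (kd1 + mgatp_uM)) * (mgatp_uM / (kd2 + mgatp_uM))
    r_co = k_open * occupancy          # effective opening rate (s^-1)
    return r_co / (r_co + k_close)     # fraction of time spent in open bursts

po = [p_open(c) for c in (5.0, 50.0, 500.0, 5000.0)]  # rises and saturates with [MgATP]
```

Because closing here is set by the hydrolytic rate rather than by reversal of opening, Po saturates at k_open/(k_open + k_close) rather than approaching 1.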
Surfing the Hyperbola Equations of the Steady-State Farquhar–von Caemmerer–Berry C3 Leaf Photosynthesis Model: What Can a Theoretical Analysis of Their Oblique Asymptotes and Transition Points Tell Us?
The asymptotes and transition points of the net CO2 assimilation (A/Ci) rate curves of the steady-state Farquhar–von Caemmerer–Berry (FvCB) model for leaf photosynthesis of C3 plants are examined in a theoretical study, which begins from the exploration of the standard equations of hyperbolae after rotating the coordinate system. The analysis of the A/Ci quadratic equations of the three limitation states of the FvCB model—abbreviated as Ac, Aj and Ap—allows us to conclude that their oblique asymptotes have a common slope that depends only on the mesophyll conductance to CO2 diffusion (gm). The limiting values for the transition points between any two states of the three limitation states c, j and p do not depend on gm, and the results are therefore valid for rectangular and non-rectangular hyperbola equations of the FvCB model. The analysis of the variation of the slopes of the asymptotes with gm casts doubts about the fulfilment of the steady-state conditions, particularly, when the net CO2 assimilation rate is inhibited at high CO2 concentrations. The application of the theoretical analysis to extended steady-state FvCB models, where the hyperbola equations of Ac, Aj and Ap are modified to accommodate nitrogen assimilation and amino acids export via the photorespiratory pathway, is also discussed. Electronic supplementary material: The online version of this article (10.1007/s11538-019-00676-z) contains supplementary material, which is available to authorized users.
Introduction
The steady-state Farquhar-von Caemmerer-Berry (FvCB) leaf photosynthesis model is broadly recognised by plant biologists and physiologists as one of the most useful models to assess in vivo the net CO2 assimilation rate (A) of plant leaves as a function of CO2 concentration (C) under different environmental cues. The initial FvCB model was first described in the 1980s for C3 plants (Farquhar et al. 1980), then modified to include the triose phosphate utilisation (Sharkey 1985a, b) and later extended in other works to C4 plants, antisense transgenic plants, the effect of bicarbonate pumps at the chloroplast envelope and global climate change, among others (Bellasio et al. 2016; Price et al. 2011; von Caemmerer 2000; Wullschleger 1993). Together with the basic rectangular hyperbolic FvCB model (Farquhar et al. 1980; Sharkey 1985a, b), other non-rectangular hyperbolic, exponential and empirical steady-state models have also been described (Duursma 2015; Ethier and Livingston 2004; Goudriaan 1979; von Caemmerer 2000). In the basic FvCB model, the steady-state CO2 assimilation rate proceeds at the minimum of three limitation rates denoted as Ac, Aj and Ap, which depend on the activity of the ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), the ribulose-1,5-bisphosphate regeneration and the triose phosphate utilisation, hereafter abbreviated as states c, j and p.
In the basic FvCB model, the analysis of the net CO2 assimilation rate did not consider the (apparent) mesophyll conductance to CO2 diffusion (gm), hereafter defined as the conductance for CO2 diffusion from the intercellular space to the site of Rubisco carboxylation, assuming that photorespiratory and respiratory CO2 release occurs in the same compartment as Rubisco carboxylation (von Caemmerer 2013), and thus its value was assumed to be infinite. Consequently, the CO2 concentration in the intercellular (or substomatal) space (Ci) was set equal to the CO2 concentration at the site of Rubisco carboxylation (Cc), and the two rate curves, A/Cc and A/Ci, were not distinguished from each other. The inclusion of a finite value for gm into the initial FvCB model transforms Ac and Aj into quadratic equations. This transformation was indeed demonstrated to provide a more accurate estimation of the values for the maximum carboxylation rate (Vcmax) of Rubisco and the maximum electron transport Jmax under steady-state conditions (Ethier and Livingston 2004; Niinemets et al. 2009; von Caemmerer 2000). From a mathematical point of view, the main difference between the equations of the A/Cc and A/Ci rate curves is that the latter are non-rectangular hyperbolae, whose curvature shape in the first quadrant of the Cartesian coordinate system depends on the magnitude of gm.
Nowadays, A/Ci, instead of A/Cc, rate curves are extensively used in the estimation of biochemical parameters from leaf photosynthesis, where gm is assumed to be finite and purely diffusional and not to depend on the CO2 concentration inside the leaf. However, the assumption that gm remains constant has been challenged in some studies and new extensions have been incorporated into the FvCB model (Flexas et al. 2007; Tholen et al. 2012). For instance, gm was proposed to depend on the ratio of mitochondrial CO2 release to chloroplast CO2 uptake and to decrease particularly at low Ci (Tholen et al. 2012), although other factors, such as the intracellular arrangements of chloroplasts and mitochondria in C3 leaves, were later included in a more generalised model to better explain the dependence of gm on the above ratio (Yin and Struik 2017). A decrease in the values for gm was also observed in response to an increase in Ci (Flexas et al. 2007). In this latter study, either modifications in chloroplast shape, which could prevent the chloroplast association with the cell surface, or the involvement of aquaporins, which could facilitate CO2 diffusion across cell membranes by a pH-dependent process, was proposed to regulate the variation of gm. Besides, gm is tightly co-regulated with the stomatal conductance (gs) (Flexas et al. 2008). gs varies with both the atmospheric CO2 concentration and the limitation state (Buckley 2017). The value of gs declines with increased atmospheric CO2 concentration under RuBP regeneration-limited photosynthesis, but, in contrast, it increases with increased atmospheric CO2 concentration under Rubisco-limited photosynthesis (Medlyn et al. 2011).
When the A/Ci rate curves of the FvCB model are analysed under steady-state conditions and the photorespiratory and respiratory CO2 release is also assumed to take place at the site of the Rubisco carboxylation, the quadratic equations for Ac, Aj and Ap (see "Appendix 1", Eqs. A8–A10) can be fitted following different approaches, where gm is taken as a constant parameter (Duursma 2015; Gu et al. 2010; Sharkey 2016; Su et al. 2009). Some of the nonlinear fitting methods require starting from initial guessed parameters and letting the fit improve with successive iterations, while others constrain the Ci values at which the transition point between c and j occurs. A wealth of data on the transition point between the states c and j indicates that its value is species- and season-dependent, and so it should not be constrained in the fitting method (Duursma 2015; Miao et al. 2009; Zeng et al. 2010). The above fitting methods also assume that the A/Ci rate curves reach asymptotic values for A at supraoptimal CO2 concentration, even though there is experimental evidence for the inhibition of the net CO2 assimilation rate by CO2 itself at high concentrations (Woo and Wong 1983). Also, some of these fitting methods make use of approximate estimations for Jmax and Tp (where Tp stands for the rate of phosphate release in triose phosphate utilisation) when Cc approaches infinity in an A/Cc rate curve (Su et al. 2009), or they reasonably assume that the order of the three limitation states along the Ci axis is the same as along the Cc axis (Gu et al. 2010). Dynamic models of photosynthesis are also suitable to analyse the leaf CO2 assimilation response under fluctuating environmental stimuli such as sunlight irradiance, atmospheric CO2 concentration or stomatal response to light (Bellasio 2019; Morales et al. 2018; Noe and Giersch 2004); however, they add complexity to the analysis or they have not been developed completely to date.
The simplicity of the quadratic equations for Ac, Aj and Ap still makes the steady-state FvCB model very useful in fitting approaches to estimate biochemical parameters from leaf photosynthesis (Duursma 2015; Gu et al. 2010; Sharkey 2016; Su et al. 2009). After nearly 40 years of research on the FvCB model, its quadratic equations still hide mathematical features of interest to establish when this model falls short or when an extended FvCB model would be more suitable for the estimation of the biochemical parameters. In the mathematical analysis of the FvCB model we present here, the rotation of the coordinate system has been a key strategy to reach the conclusion that the quadratic equations of the FvCB model cannot explain the inhibition of the net CO2 assimilation rate at very high Ci. Also, the mathematical analysis of the limiting conditions for the transition points between Ac, Aj and Ap shows that they do not depend on the finite value of gm.
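As a minimal sketch of such fitting approaches: the snippet below fits Vcmax and J to a synthetic A/Ci curve using the basic rectangular-hyperbola forms, i.e. assuming infinite gm (Ci = Cc) and ignoring the Ap limitation. The constants (Γ* = 40, an effective Michaelis constant of 700, Rd = 1) are illustrative values only, not taken from this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_STAR, KM_EFF, RD = 40.0, 700.0, 1.0  # illustrative constants, not fitted values

def a_net(ci, vcmax, j):
    """A = min(Ac, Aj) of the basic (rectangular-hyperbola) FvCB model."""
    ac = vcmax * (ci - GAMMA_STAR) / (ci + KM_EFF) - RD              # Rubisco-limited
    aj = j * (ci - GAMMA_STAR) / (4.0 * ci + 8.0 * GAMMA_STAR) - RD  # RuBP-limited
    return np.minimum(ac, aj)

ci = np.linspace(50.0, 1500.0, 30)
rng = np.random.default_rng(0)
data = a_net(ci, 60.0, 120.0) + rng.normal(0.0, 0.2, ci.size)  # synthetic noisy curve
(vcmax_fit, j_fit), _ = curve_fit(a_net, ci, data, p0=(40.0, 100.0))
```

Dedicated tools (e.g. the fitting packages cited above) additionally estimate gm, handle the Ap state and the placement of the c–j transition; this sketch only shows the core least-squares idea.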
Computer Analysis
The computer algebra system Wolfram Mathematica v. 10.3 (Wolfram Research 2015) was used to program scripts to solve analytically the asymptotes and transition points of A/Cc and A/Ci rate curves of the three limitation states c, j and p. Comparative analyses were performed with hyperbolae in standard form after the rotation of the coordinates. The scripts were also run to plot representative A/Cc and A/Ci rate curves. The chosen and finite values for the kinetic constants for Rubisco and other biochemical parameters in the simulations are in the range of those experimentally determined for different C3 plant species (Jahan et al. 2014; von Caemmerer 2000). A list of definitions is given in Table 1 for the sake of clarity.
Brief Description of the Asymptotes and Transition Points of the Rectangular Hyperbola Equations of the Basic FvCB Model
According to the basic FvCB model for leaf photosynthesis in C3 plants (Farquhar et al. 1980; Sharkey 1985a, b), the hyperbola equations for the dependence of the net CO2 assimilation rate on the CO2 concentration at the site of the Rubisco carboxylation (i.e. A/Cc) define A as proceeding at the minimum of the three limitation rates Ac, Aj and Ap (Eqs. 1–4). The equations of the three rate curves in the basic FvCB model are branches of rectangular hyperbolae opening upwards and downwards or left and right, where the coordinate system has been rotated 45° (Appendix 1). The two asymptotes of each of the hyperbolic equations (Eqs. 2–4) are perpendicular to each other, with slopes 0 and infinity (Table 2). An elementary analysis of the transition points between the rate equations of the three limitation states gives, together with the transition points (C^xy_c2, A^xy_c2) between any two limitation states of the three states c, j and p (superscripts x and y) in the first quadrant of the Cartesian coordinate system (subscript 2), a common transition point (C^xy_c1, A^xy_c1) in the fourth quadrant (subscript 1) when α ≠ 0 (0 ≤ α ≤ 1). Carbon and electron requirements for the assimilation of nitrogen and export of amino acids through the photorespiratory pathway (Busch et al. 2018) are not addressed here, and the standard definition for α in the basic FvCB model remains (see below for further discussion).
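For reference, the basic FvCB rate equations take the following standard rectangular-hyperbola forms (as given by Farquhar et al. 1980 and Sharkey 1985a, b; the common notation is used here and is assumed to match this paper's Eqs. 1–4):

```latex
A   = \min\{A_c,\ A_j,\ A_p\}, \qquad
A_c = \frac{V_{c\max}\,(C_c - \Gamma^{*})}{C_c + K_c\,(1 + O/K_o)} - R_d,
\qquad
A_j = \frac{J\,(C_c - \Gamma^{*})}{4\,C_c + 8\,\Gamma^{*}} - R_d,
\qquad
A_p = \frac{3\,T_p\,(C_c - \Gamma^{*})}{C_c - (1 + 3\alpha)\,\Gamma^{*}} - R_d,
```

where Γ* is the CO2 compensation point in the absence of day respiration, Kc and Ko are the Rubisco Michaelis constants for CO2 and O2, O is the oxygen concentration, J the electron transport rate, Rd the day respiration rate, and α the fraction of glycolate carbon not returned to the chloroplast; setting α = 0 recovers the approximation Ap = 3Tp − Rd.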
Dependence of the Oblique Asymptotes of the Non-rectangular Hyperbola (or Quadratic) Equations of the FvCB Model on r m
The mathematical analysis becomes more challenging if A/C_i, instead of A/C_c, rate curves are used. When steady-state conditions for CO2 diffusion are achieved, A_c, A_j and A_p can be determined after the substitution of C_c for C_i using the equation C_c = C_i − r_m A,
where the finite and "constant" mesophyll resistance to CO2 diffusion is r_m = 1/g_m. Quadratic equations are obtained for A_c, A_j and A_p (Eqs. A8–A10). They are now non-rectangular hyperbolae opening upwards and downwards, for the case of A_c and A_j, and left and right, for the case of A_p, where the coordinate system has now been rotated anticlockwise by an angle, here denoted β (Appendix 1). One of the two asymptotes of each non-rectangular hyperbola is parallel to the horizontal axis, but the other is now oblique, with a slope exactly equal to the mesophyll conductance to CO2 diffusion, i.e. g_m = 1/r_m, a result which is valid for A_c (von Caemmerer 2000) and also for A_j and A_p. This conclusion is reached following the analysis of the coefficients of the quadratic equations obtained after the anticlockwise rotation of the coordinate system by β. When Eqs. A6 and A7 are compared with Eqs. A8–A10, some key features emerge: firstly, the summation of the coefficients of C_i² is equal to zero and, secondly, the second coefficient of the quadratic equations is, in fact, the summation of the two asymptotes of each hyperbola (Appendix 1). The equations of the two asymptotes for A_c, A_j and A_p are therefore summarised as follows:

$$y^c_{asyp} = \frac{C_i + K_{co}}{r_m}, \qquad y^c_{asyn} = V_{cmax} - R_d, \qquad (11a, 11b)$$

$$y^j_{asyp} = \frac{C_i + 2\Gamma^*}{r_m}, \qquad y^j_{asyn} = \frac{J}{4} - R_d, \qquad (12a, 12b)$$

$$y^p_{asyp} = \frac{C_i - (1 + 3\alpha)\Gamma^*}{r_m}, \qquad y^p_{asyn} = 3T_p - R_d, \qquad (13a, 13b)$$

where y^x_asyp and y^x_asyn stand for the oblique and horizontal asymptotes of A_c, A_j and A_p, respectively.
It is worth noting that the use of α = 0 directly in Eq. 4 is an oversimplification of A_p. The oblique asymptote of A_p is present even when α is assumed to be equal to zero (Eq. 13a). The intersection between the two asymptotes of A_p (i.e. y^p_asyp and y^p_asyn), in particular when α = 0, gives a limiting value below which C_i is meaningless. In fact, the approximation A_p = 3T_p − R_d is not valid in the whole C_i domain Γ* ≤ C_i ≤ ∞. The discontinuity is more obvious when α ≠ 0 because there is a C_i domain for which no real values of A_p can be obtained. The suitable C_i domain for the nonlinear fitting of A_p in the FvCB model is thus confined to the negative root of its branch opening right (Fig. 1), a result which is also in line with the study by Gu et al. (2010). When α ≠ 0, the values of the negative root of the A_p branch opening right decrease as C_i increases.
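The asymptote result can be checked numerically: the negative root of the A_c quadratic (Eq. A8) hugs a line of slope g_m = 1/r_m far along the branch on one side and the level V_cmax − R_d on the other. A minimal sketch, with invented parameter values (not from the paper):

```python
import math

# Illustrative values in the range used for C3 plants (assumed, not fitted)
Vcmax, Rd, Kco, gamma_star = 80.0, 1.0, 700.0, 40.0
rm = 0.25  # mesophyll resistance; gm = 1/rm = 4

def Ac_ci(Ci):
    """Negative root of the quadratic for Ac after substituting Cc = Ci - rm*A."""
    b = Ci + Kco + rm * (Vcmax - Rd)
    c = Vcmax * (Ci - gamma_star) - Rd * (Ci + Kco)
    return (b - math.sqrt(b * b - 4.0 * rm * c)) / (2.0 * rm)

# Far along the branch the curve approaches its two asymptotes:
slope_left = Ac_ci(-1.0e7 + 1.0) - Ac_ci(-1.0e7)  # oblique: slope -> gm = 1/rm
level_right = Ac_ci(1.0e9)                        # horizontal: -> Vcmax - Rd
```

The far-left probe is mathematically outside the physiological C_i range; it is used here only because the branch approaches its oblique asymptote in that direction.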
The Limiting Conditions for the Transition Points in the FvCB Model Do Not Depend on r_m

The transition points for the negative roots of the quadratic equations for A/C_i (Eqs. A8–A10) can be solved mathematically and written in a simple form making use of the analytical solutions (Eqs. 5–10) for A/C_c as follows:

$$C^{xy}_{i1} = \Gamma^* - r_m R_d, \qquad A^{xy}_{i1} = -R_d, \qquad C^{xy}_{i2} = C^{xy}_{c2} + r_m A^{xy}_{c2}, \qquad A^{xy}_{i2} = A^{xy}_{c2}.$$

The above solutions could be further extended to include the stomatal resistance to CO2 diffusion (r_s = 1/g_s) in A_c, A_j and A_p (see Appendix 2). Figure 2 summarises the changes in the transition points in the first and fourth quadrants of the Cartesian coordinate system when the resistance(s) to CO2 diffusion is included in the net CO2 assimilation rate curves. For the sake of clarity, only A_c and A_j are shown. In the mathematical analysis, it can be observed that the common transition point (C^{xy}_{i1}, A^{xy}_{i1}) between the three states c, j and p is always present; in contrast, the transition points (C^{xy}_{i2}, A^{xy}_{i2}) depend on the biochemical parameters and might not be present in the net CO2 assimilation rate curves. Two limiting conditions can now be investigated in order to analyse all the possible combinations of transition points between A_c, A_j and A_p regardless of the value of r_m. One is that C^{xy}_{i2} approaches C^{xy}_{i1} (i.e. C^{xy}_{i2→1}) and another is that C^{xy}_{i2} approaches infinity (i.e. C^{xy}_{i2→∞}). The equation C^{xy}_{i1} = C^{xy}_{i2} has to be solved for the analysis of the first limiting condition, whereas only the values of the two summands of C^{xy}_{i2} (Eqs. 15a–19a) have to be inspected to analyse the second limiting condition. In the analysis, the constraint K_co > 2Γ* is imposed based on the values reported for C3 plants (von Caemmerer 2000); consequently, there are no finite values for the biochemical parameters of the second summand (i.e. r_m A^{xy}_{c2}) of C^{xy}_{i2} (Eqs. 15a–19a) that can make this summand approach infinity. The ratios between the biochemical parameters to reach the above limiting conditions for the rectangular equations of an A/C_c rate curve (i.e.
C^{xy}_{c2→1} and C^{xy}_{c2→∞}) can be derived straightforwardly from Eqs. 5a–10a (Eqs. 20a–22b). Among these ratios, those between the biochemical parameters for C^{jp}_{c2→1} and C^{cp}_{c2→1} are of no biochemical significance: they imply that there should be conditions under which one could expect triose phosphate import into chloroplasts (i.e. T_p < 0). In fact, if these transition points are analysed, particularly in a non-rectangular A/C_i rate curve, one can observe that the ratios for both limiting transitions, C^{xy}_{i2→1} and C^{xy}_{i2→∞}, are the same as those found for C^{xy}_{c2→1} and C^{xy}_{c2→∞} (Eqs. 20a–22b). This means that the ratios between the biochemical parameters in the two limiting conditions do not depend on the value of r_m, and the limiting conditions for the transition points can therefore be reduced to those of the A/C_c rate curves. The graphic representation of A_c, A_j and A_p for A/C_i rate curves shows, firstly, that there are no experimental ratios of the biochemical parameters for which A_p (with T_p > 0) can be the only limitation state along the domain Γ* − r_m R_d < C_i < ∞ and, secondly, that there are ratios between the biochemical parameters for which A_c or A_j can be the only limitation state along the domain Γ* − r_m R_d < C_i < ∞ (Fig. 3a, b). Additional ratios between the biochemical parameters can be found for which there are one or two transition points in the first quadrant of the Cartesian coordinate system (Fig. 3c–f). The latter ratios are equivalent to those discussed before for A/C_c rate curves (Gu et al. 2010). Regardless of the number of transition points (0, 1 or 2) that the ratios of the biochemical parameters can yield between the three limitation states in the first quadrant of the Cartesian coordinate system, the transition points in the fourth (or third) quadrant are always present.
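The shift of a transition point by r_m A when moving from the A/C_c to the A/C_i curve can likewise be verified numerically. A sketch with the same invented parameter values used above (not values from the paper):

```python
# Illustrative parameter values (assumed, not from the paper)
Vcmax, J, Rd = 80.0, 160.0, 1.0
Kco, gamma_star = 700.0, 40.0
rm = 0.25

def Ac(Cc):
    return Vcmax * (Cc - gamma_star) / (Cc + Kco) - Rd

def Aj(Cc):
    return (J / 4.0) * (Cc - gamma_star) / (Cc + 2.0 * gamma_star) - Rd

# First-quadrant c/j transition on the A/Cc curve (nontrivial root of Ac = Aj)
Ccj2 = (Kco * J / 4.0 - 2.0 * gamma_star * Vcmax) / (Vcmax - J / 4.0)
Acj2 = Ac(Ccj2)

# The same transition on the A/Ci curve is shifted by rm * A
Cij2 = Ccj2 + rm * Acj2

# The common transition point sits at (gamma_star, -Rd) on the A/Cc curve
```

The shifted point (C^{cj}_{i2}, A^{cj}_{c2}) satisfies the A/C_i quadratic for the c state exactly, since C_c = C_i − r_m A recovers the original transition.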
Analysis of the Inhibition of the Net CO 2 Assimilation Rate at High CO 2 Concentrations
If the steady-state FvCB model is strictly followed, one can state, first, that the slopes of the oblique asymptotes of the non-rectangular hyperbolae depend only on g_m = 1/r_m, while the slopes of the horizontal asymptotes of A_c, A_j and A_p remain unchanged regardless of the value of g_m (Eqs. 11a–13b) and, second, that the slopes of the bisecting lines (Table 2) of A_c, A_j and A_p correspond with the angle (or the perpendicular angle) of the rotation of the coordinate system that makes the summation of the coefficients of C_i² equal to zero (Eqs. A11 and A12). This means that there are no mathematical solutions for the quadratic equations of A_c, A_j and A_p (Eqs. A8–A10) in the FvCB model for which the slopes of the horizontal asymptotes can be modified to reach negative values. The fraction of glycerate (α ≠ 0) that does not return to chloroplasts through the photorespiratory cycle (Harley and Sharkey 1991) makes A_p decrease as C_i increases, but the slope of the horizontal asymptote of A_p remains unchanged, no matter what value α (0 ≤ α ≤ 1) has. This indicates that A_p must finally reach a constant value as C_i increases. This conclusion also applies to the extended FvCB model described in the study by Busch et al. (2018), where the parameter α of the basic FvCB model is replaced with two new parameters α_G and α_S that stand for the proportions of glycolate carbon taken out of the photorespiratory pathway as glycine and as serine, respectively. Although α_G and α_S might not be constant and depend on the photorespiratory pathway and the reduction of supplied nitrate (Busch et al. 2018), the new equations for the three limitation states remain, from a mathematical point of view, rectangular hyperbolae with horizontal asymptotes equivalent to those summarised in Table 2.
Likewise, the extension of the FvCB model using r_m as a flux-weighted quantity that depends on mitochondrial respiration and photorespiration effects does not explain the inhibition of A/C_i at high C_i either (Tholen et al. 2012).
Despite what has been said above, there are lines of experimental evidence indicating that negative slopes can indeed be observed in A/C_i rate curves. Woo and Wong (1983) showed that supraoptimal CO2 concentrations inhibited net CO2 assimilation in cotton plants, and they proposed that an acidification mechanism mediated by CO2 could affect both the thylakoid electron transport and the activity of key enzymes of the Calvin-Benson-Bassham cycle (Kaiser and Heber 1983; Ögren and Evans 1993). At this point, one could speculate on some mathematical explanations for negative slopes in experimental A/C_i rate curves. In the first place, one could wonder whether another rotation of the coordinates (different from the one that yields Eqs. A28 and A29) would be possible under steady-state conditions, one which rendered negatively sloped asymptotes instead of horizontal asymptotes. If this were possible, the summation of the coefficients of C_i² (Eqs. A11 and A12) would not be zero, and so the chosen fitting method should start from extended quadratic equations such as Eqs. A6 and A7, where at least four parameters defining the hyperbola equation and an angle of rotation would have to be determined. In this case, the angle of rotation should depend on g_m together with other biochemical parameters. Alternatively, one could wonder whether the steady-state conditions do not hold along the whole C_i domain, particularly at supraoptimal CO2 concentrations. If this were the case, one can assert that the equation A = (C_i − C_c)/r_m is not always valid. Thus, A decreases at supraoptimal CO2 concentrations because either r_m is not only diffusional, and so it increases as C_i increases (Flexas et al. 2007), or the photosynthetic activity is indeed inhibited by CO2 acidification (Kaiser and Heber 1983; Ögren and Evans 1993; Woo and Wong 1983).
Reliable nonlinear fittings of A_c and A_j of the FvCB model can thus possibly be obtained under steady-state conditions using standard approaches (Duursma 2015; Gu et al. 2010; Sharkey 2016; Su et al. 2009); however, the use of supraoptimal CO2 concentrations to fit A_p might cast doubts on the fitted biochemical parameters if evidence for negative slopes in the experimental A/C_i rate curves is observed. Based on the variation of g_m with C_i (Flexas et al. 2007), other nonlinear fitting approaches proposed the combination of gas exchange methods with chlorophyll fluorescence-based methods to estimate g_m by using only data within the j state (Yin and Struik 2009).
Conclusions
The analysis of the steady-state FvCB model for C3 plants, starting from the standard equations of hyperbolae after rotating the coordinate system, has disclosed some features hidden in the quadratic equations of A_c, A_j and A_p of the A/C_i rate curves. Of particular academic interest has been the angle of the rotation of the coordinate system, from which it has been established that the oblique asymptotes of the three limitation rate curves share a common slope whose value depends only on g_m. A_p always has an oblique asymptote regardless of the value of α. The limiting conditions for the transition points in the FvCB model do not depend on g_m. The hyperbola equations of A_c, A_j and A_p in the FvCB model, or in some of the extended steady-state FvCB models discussed here, can only provide horizontal asymptotes when the CO2 concentration approaches infinity when, in contrast, there is experimental evidence for negative slopes in A/C_i rate curves at high CO2 concentrations. This leads us to the conclusion that extended quadratic equations containing a C_i² term might be required for the analysis of A_c, A_j and A_p or, alternatively, that steady-state conditions do not hold, particularly at increased CO2 concentrations. Dynamic modelling taking into account the decrease in the values of g_m, or the inhibition of the activity of key enzymes of the Calvin-Benson-Bassham cycle by CO2 acidification, could alternatively provide suitable models for the estimation of the biochemical parameters from leaf photosynthesis.
where LR and UD stand for hyperbolae opening left and right, and upwards and downwards, respectively. The only difference between the left-hand sides of Eqs. A6 and A7 is the sign of the term a²b² in the numerator of the last coefficient; for hyperbolae opening upwards and downwards,

$$y_{UD}^2 + y_{UD}\,\frac{(2a^2\cos\beta\sin\beta + 2b^2\cos\beta\sin\beta)\,x + 2a^2k\cos\beta - 2b^2h\sin\beta}{b^2\sin^2\beta - a^2\cos^2\beta} + \frac{(b^2\cos^2\beta - a^2\sin^2\beta)\,x^2 - (2b^2h\cos\beta + 2a^2k\sin\beta)\,x + b^2h^2 - a^2k^2 + a^2b^2}{b^2\sin^2\beta - a^2\cos^2\beta} = 0. \qquad (A7)$$

The above quadratic equations for non-rectangular hyperbolae opening left and right or upwards and downwards can now be compared, coefficient by coefficient, with each of the quadratic equations of the A/C_i rate curves of the three states c, j and p of the steady-state FvCB model. After the substitution of C_c with C_i − r_m A in Eqs. 2–4, the new A_c, A_j and A_p are as follows:

$$A_c^2 - \left(\frac{C_i + K_{co}}{r_m} + V_{cmax} - R_d\right)A_c + \frac{V_{cmax}(C_i - \Gamma^*) - R_d(C_i + K_{co})}{r_m} = 0, \qquad (A8)$$

$$A_j^2 - \left(\frac{C_i + 2\Gamma^*}{r_m} + \frac{J}{4} - R_d\right)A_j + \frac{(J/4)(C_i - \Gamma^*) - R_d(C_i + 2\Gamma^*)}{r_m} = 0, \qquad (A9)$$

$$A_p^2 - \left(\frac{C_i - (1+3\alpha)\Gamma^*}{r_m} + 3T_p - R_d\right)A_p + \frac{3T_p(C_i - \Gamma^*) - R_d\big(C_i - (1+3\alpha)\Gamma^*\big)}{r_m} = 0. \qquad (A10)$$

In the first instance, it is observed that the summation of the coefficients of C_i² in each of the three rate curves is zero (Eqs. A8–A10). Therefore, the rotation of the coordinates has to fulfil the condition

$$b^2\cos^2\beta - a^2\sin^2\beta = 0. \qquad (A11)$$

This implies that one of the two asymptotes of A_c, A_j and A_p is now parallel to the horizontal axis of the Cartesian coordinate system (i.e. the slope m_asy1 = 0). Because the rotation does not change the angle θ between the two asymptotes of the hyperbolae (Eq. A3), θ remains constant and so the slope of the second asymptote has to be m_asy2 = ±tan(2 arctan(b/a)) = ±2ab/(a² − b²). The rotation of the coordinates was chosen to be anticlockwise, and so only the positive value applies here. The slopes m_asy1 and m_asy2 will be renamed m_asyn and m_asyp, respectively, after the rotation of the coordinates (see below). Figure 4 summarises the main changes in the graphic representation of the two types of hyperbolae and their asymptotes after the rotation of the coordinates.
In the second instance, A_c, A_j and A_p share a common term within the second coefficient of their quadratic equations (i.e. C_i/r_m). This implies that, for the three equations, the coefficient of x within the second coefficient of Eqs. A6 and A7 must equal −1/r_m. The anticlockwise rotation of the coordinates by β = arctan(b/a) + nπ, where n = 0, yields

$$\frac{2a^2\cos\beta\sin\beta + 2b^2\cos\beta\sin\beta}{b^2\sin^2\beta - a^2\cos^2\beta} = \frac{2ab}{b^2 - a^2} = -\frac{1}{r_m}, \qquad (A14)$$

so that g_m = 1/r_m = 2ab/(a² − b²).

Fig. 4: Hyperbolae opening upwards and downwards (thick black lines) and left and right (thick grey lines) in standard form (a) and after a rotation of the coordinates by an angle β (b), together with their positive (thin black line) and negative (thin grey line) asymptotes and their respective centres (black dots). The angle of rotation β was chosen to be equal to arctan(b/a) and so to fulfil one of the conditions of the quadratic equations of A_c, A_j and A_p of the (non-)rectangular hyperbolic FvCB model for C3 plants. A_c and A_j are rotated hyperbolae opening upwards and downwards, while A_p opens left and right (see text for further details).
This solution indicates that the oblique asymptotes of three A c , A j and A p rate curves have in common their slope, which results equal to the mesophyll conductance to CO 2 diffusion ( g m = 1∕ r m ).
In the third instance, the second coefficient of each quadratic rate curve is in fact the negative value of the sum of its two asymptotes. If Eqs. A4 and A5 are applied to both the negative and positive asymptotes of the standard forms of the hyperbolae (Eqs. A1 and A2), the following equations are obtained:

$$y_{asyn} = \frac{ak + bh}{a\cos\beta + b\sin\beta} + \frac{a\sin\beta - b\cos\beta}{a\cos\beta + b\sin\beta}\,x = n_{asyn} + m_{asyn}x, \qquad (A16)$$

and

$$y_{asyp} = \frac{ak - bh}{a\cos\beta - b\sin\beta} + \frac{a\sin\beta + b\cos\beta}{a\cos\beta - b\sin\beta}\,x = n_{asyp} + m_{asyp}x, \qquad (A17)$$

where y_asyn and y_asyp stand for the new asymptotes after the rotation of the coordinates by β (Fig. 4). The sum of the asymptotes is equal to the negative value of the summation of the coefficients of y_LR and y_UD in Eqs. A6 and A7:

$$y_{asyn} + y_{asyp} = -\frac{(2a^2\cos\beta\sin\beta + 2b^2\cos\beta\sin\beta)\,x + 2a^2k\cos\beta - 2b^2h\sin\beta}{b^2\sin^2\beta - a^2\cos^2\beta}.$$

If, in particular, β = arctan(b/a), then the slopes in Eqs. A16 and A17 take the values m_asyn = 0 and m_asyp = 2ab/(a² − b²). The second coefficients of the A_c, A_j and A_p rate curves can now be rewritten as follows:

$$y^c_{asyn} + y^c_{asyp} = \frac{C_i + K_{co}}{r_m} + V_{cmax} - R_d, \qquad (A19)$$

$$y^j_{asyn} + y^j_{asyp} = \frac{C_i + 2\Gamma^*}{r_m} + \frac{J}{4} - R_d, \qquad (A20)$$

$$y^p_{asyn} + y^p_{asyp} = \frac{C_i - (1+3\alpha)\Gamma^*}{r_m} + 3T_p - R_d, \qquad (A21)$$

where the superscript indicates the state c, j or p. In order to know individually the equations of the two asymptotes of A_c, A_j and A_p, one can derive the values of the negative asymptotes (y_asyn) from the term that multiplies x in the third coefficient of the quadratic equations (Eqs. A6 and A7). This term is, in fact, the negative value of the product between Eqs. A14 and A16, when m_asyn = 0. Therefore, Eqs. A19–A21 can now be split as follows:

$$y^c_{asyn} = V_{cmax} - R_d, \qquad y^c_{asyp} = \frac{C_i + K_{co}}{r_m},$$

$$y^j_{asyn} = \frac{J}{4} - R_d, \qquad y^j_{asyp} = \frac{C_i + 2\Gamma^*}{r_m},$$

$$y^p_{asyn} = 3T_p - R_d, \qquad y^p_{asyp} = \frac{C_i - (1+3\alpha)\Gamma^*}{r_m},$$

i.e. the asymptotes summarised in Eqs. 11a–13b of the main text. The intersection between the oblique and horizontal asymptotes can be used to know the centre of the hyperbolae. The slopes of the bisecting lines are b/a and −a/b, i.e. the slope corresponding to the angle of rotation β and that of its perpendicular. The vertices of each hyperbola can also be derived from the intersections between the equation of the bisecting line (passing through the vertices and the centre of the hyperbola) and the corresponding quadratic equation (Eqs. A8–A10). Table 2 includes a summary of the equations and values that describe the rectangular and non-rectangular hyperbolic equations of the FvCB model.
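The rotation geometry of Appendix 1 can be sanity-checked with a small numeric example (a and b are chosen arbitrarily here): rotating a standard hyperbola anticlockwise by β = arctan(b/a) removes the x² term (the condition of Eq. A11), sends one asymptote to the horizontal and gives the other the slope 2ab/(a² − b²).

```python
import math

a, b = 3.0, 2.0          # arbitrary semi-axes with a > b (illustrative only)
beta = math.atan(b / a)  # anticlockwise rotation angle, tan(beta) = b/a

# Rotating the curve x^2/a^2 - y^2/b^2 = 1 by +beta means substituting
# x = X*cos(beta) + Y*sin(beta), y = -X*sin(beta) + Y*cos(beta);
# the X^2 coefficient is then cos^2(beta)/a^2 - sin^2(beta)/b^2, which vanishes:
coeff_x2 = math.cos(beta) ** 2 / a ** 2 - math.sin(beta) ** 2 / b ** 2

def slope_after_rotation(m):
    """Slope of a line of slope m after rotating it anticlockwise by beta."""
    return math.tan(beta + math.atan(m))

m_asyn = slope_after_rotation(-b / a)  # asymptote y = -(b/a)x becomes horizontal
m_asyp = slope_after_rotation(+b / a)  # the other becomes the oblique asymptote
```

With a = 3, b = 2 the oblique slope evaluates to 2ab/(a² − b²) = 12/5, matching the m_asyp formula of the appendix.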
Appendix 2
Under steady-state conditions, the net CO2 assimilation rate can also be obtained as A = (C_a − C_i)/r_s, where r_s = 1/g_s, if, in particular, the transpiration rate (E) is considered negligible and consequently g_s ± E/2 ≈ g_s (Farquhar and Sharkey 1982). The transition points for the new equations in A/C_a rate curves, when both r_m and r_s are grouped together in the analysis, are now as follows: for A_c = A_j,
Abnormal Cerebrovascular Reactivity in Patients with Parkinson's Disease
Background. Orthostatic hypotension (OH) is an important nonmotor manifestation of Parkinson's disease (PD). Changes in cerebrovascular reactivity may contribute to this manifestation and can be monitored using transcranial Doppler. Objective. To identify possible changes in cerebrovascular reactivity in patients with OH. Methods. Twenty-two individuals were selected and divided into three groups: with and without OH and controls. Transcranial Doppler was used to assess basal mean blood flow velocity, postapnea mean blood flow velocity, percentage increase in mean blood flow velocity, and cerebrovascular reactivity as measured by the breath-holding index. Results. PD patients had lower values of basal velocity (p = 0.019), postapnea velocity (p = 0.0015), percentage increase in velocity (p = 0.039), and breath-holding index (p = 0.04) than the controls. Patients with OH had higher values of basal velocity (p = 0.09) and postapnea velocity (p = 0.19) but lower values of percentage increase in velocity (p = 0.22) and breath-holding index (p = 0.32) than patients without OH. Conclusions. PD patients present with abnormalities in a compensatory mechanism that regulates cerebral blood flow. OH could be an indicator of these abnormalities.
Introduction
Parkinson's disease (PD) is characterized by slow degeneration of specific neurons in the enteric, peripheral, and central nervous system [1]. Analysis of lesions in PD by Braak et al. [2] showed that the disease progresses in six stages in a caudorostral direction, starting in caudal regions of the brain stem, such as the dorsal motor nuclei of the glossopharyngeal and vagus nerves and the anterior olfactory nucleus, and spreading to practically the whole cortex [3]. Based on these pathological findings and the clinical presentation of the disease, the definition of PD as only a motor disease is believed to be clearly inadequate. Dysautonomias are one of the most important nonmotor complications of PD [4], and orthostatic hypotension (OH) is quite a common complaint [5], occurring in around 40% of PD patients [6].
A drop in systemic arterial blood pressure is normally compensated for by a sympathetically mediated increase in vascular tone and cerebral vasodilation. PD patients, however, present with worse hemodynamic parameters because of degeneration of central and peripheral nuclei (brainstem, cerebral cortex, spinal cord, and autonomic ganglia [2]); baroreflex failure, with a reduction in the number of catecholaminergic neurons in the nucleus of the solitary tract [7]; diffuse cardiac noradrenergic denervation of the left ventricle [8]; abnormal pressure natriuresis and diuresis due to loss of specific neurotransmitters [9]; suboptimal release of norepinephrine when the patient stands up [10], with an increase in the number of adrenoreceptors in an attempt to control sympathetic dysfunction in this position [11]; and the presence of Lewy bodies in axons in the paravertebral sympathetic chain and the stellate ganglion [10]. Therefore, a decrease in sympathetic tone in PD patients with OH is well known and the mechanisms involved are well established, which is not yet the case for the mechanisms responsible for maintaining cerebral blood flow [12].
Transcranial Doppler (TCD) allows cerebral blood flow velocity (cBFV) and the contractility of cerebral vessels to be measured dynamically and with high temporal resolution [13]. An increase in the concentration of CO2 in the blood stream leads to vasodilation of the intracranial microcirculation, which can be observed in TCD as an increase in cBFV. This change in cBFV in response to a vasodilatory stimulus is known as cerebrovascular reactivity (CVR) [14]. Various techniques can be used to estimate CVR [15,16], such as measurement of the percentage change in the mean blood flow velocity (mBFV) in the middle cerebral artery (MCA) between hyperventilation and inspiration of increasing concentrations of CO2, or the inspiration of 5% CO2. However, the technique based on the use of breath-holding as the vasodilatory stimulus is the most suitable, as it is both practical and easy to use [16]. The breath-holding index (BHI) was first described by Ratnatunga and Adiseshiah [17], who observed that the change in the mBFV of the MCA after a period of apnea without prior forced inspiration, divided by the apnea duration, gave an estimate of the change in cerebral blood flow and therefore CVR. Markus and Harrison [16] showed that this methodology was equivalent to those based on inspiration of CO2 and also defined an ideal time and minimum apnea duration (30 and 15 seconds, respectively).
Despite the existence of these various approaches, to the authors' knowledge there are no studies that provide absolutely conclusive findings about the changes in CVR in PD patients. While the first studies to correlate the findings of TCD with OH and PD did not find any changes, more recent studies using other approaches found significant changes in CVR in patients with PD compared with controls. The present study is the first to use the BHI to show that these changes occur and, furthermore, is one of the few to compare OH patients with patients without OH rather than only with controls.
The aim of the present study was to identify possible changes in CVR measured using the BHI in patients with OH associated with PD.
Materials and Methods
The study sample consisted of 20 patients with a confirmed diagnosis of PD according to the UK Parkinson's Disease Society Brain Bank clinical diagnostic criteria [18] who were being followed up regularly at the Neurology Service at the Campos Gerais Regional University Hospital (HURCG) [19].
Patients with Parkinsonism-plus disease, Parkinsonism as an associated feature of heredodegenerative diseases, and secondary Parkinsonism were excluded, as were patients who were using dopamine agonists and those who refused to sign the voluntary informed-consent form.
The study was approved by the State University of Ponta Grossa (UEPG) Research Ethics Committee (COEP) (reference number FA 22591).
Clinical Assessment.
Patients were evaluated clinically, neurologically, and for the presence of OH [19], which was defined as a drop of at least 20 mmHg in systolic blood pressure and/or a drop of at least 10 mmHg in diastolic blood pressure as a result of a change from a supine to a standing position after one minute [11]. Ultrasound studies of extracranial and intracranial blood flow were performed to exclude occlusive diseases.
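The OH criterion used above (a fall of at least 20 mmHg systolic and/or 10 mmHg diastolic after standing) is simple enough to encode. A minimal sketch; the function name and argument layout are our own, not from the paper:

```python
def has_orthostatic_hypotension(supine_sys, supine_dia, standing_sys, standing_dia):
    """True if systolic BP falls >= 20 mmHg and/or diastolic BP falls >= 10 mmHg
    after one minute in the standing position (all pressures in mmHg)."""
    return (supine_sys - standing_sys) >= 20 or (supine_dia - standing_dia) >= 10
```

For example, a fall from 130/80 to 108/75 mmHg qualifies (systolic drop of 22 mmHg), whereas a fall to 115/72 mmHg does not (drops of 15 and 8 mmHg).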
Patients were then divided into two groups: those with OH (n = 9) and those without (n = 11). Two patients from the group with OH and five from the group without OH were not examined by TCD as they did not consent to proceed with the study. One patient from the group with OH and one from the group without OH were excluded from the study because of inadequate temporal acoustic windows. A control group (n = 11) was formed from healthy individuals recruited among the patients' relatives and hospital staff. The final study population therefore consisted of six patients in the group with OH, five in the group without OH, and eleven controls.
Transcranial Doppler (TCD)
TCD was carried out in a quiet, dimly lit room. Tests were carried out between 9 am and noon Brasília time. All the patients were required to lie down in the examination room for 5 minutes before the start of the test, which lasted between 15 and 20 minutes. Patients were instructed not to use the first dose of levodopa (L-dopa) in the morning. All tests were performed by a single researcher who had previous experience in TCD. The researcher was unaware of the clinical condition of each participant.
An S4-2 2 MHz sector-phased array transducer coupled to a Philips HD11 XE Ultrasound System (Philips©, Philips Medical Systems B.V., Netherlands) was used to assess the M1 segment (45-55 mm deep) of the MCA in both sides.
The tests started on the left MCA and the following information was collected: basal mean blood flow velocity (bBFV) in cm/s; postapnea mean blood flow velocity (aBFV) in cm/s; and duration of apnea in s. Throughout the procedure the transducer was kept in the place that had been initially identified as suitable for measuring the desired parameters. CVR (absolute value) was estimated using the BHI [16], which is given by the following formula:

BHI = [(aBFV − bBFV)/bBFV × 100]/t,

where BHI = breath-holding index; aBFV = mean postapnea blood flow velocity; bBFV = basal mean blood flow velocity; t = duration of apnea in seconds. Values of BHI of less than 0.70 were considered abnormal [16]. The percentage increase in blood flow velocity (%IBFV) was analyzed.
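The BHI calculation is a one-liner; a sketch, with the 0.70 cut-off taken from the abnormality threshold cited from Markus and Harrison [16]:

```python
def breath_holding_index(bBFV, aBFV, apnea_s):
    """Percentage increase in mean flow velocity divided by apnea duration (s)."""
    return ((aBFV - bBFV) / bBFV) * 100.0 / apnea_s

def is_abnormal_cvr(bhi):
    """BHI values below 0.70 were considered abnormal in this study."""
    return bhi < 0.70
```

For example, a rise from 50 to 65 cm/s over a 30 s breath-hold gives a BHI of 1.0 (normal), whereas a rise to only 55 cm/s gives about 0.33 (abnormal).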
Statistical Analysis.
The statistical differences between the means of the groups were measured using the two-tailed Student's t-test for normal distributions and the Mann-Whitney test for nonnormal distributions. Fisher's exact test was used for categorical variables. The effect size was analyzed using the odds ratio for categorical variables and Cohen's d coefficient for continuous variables (0.2: weak effect; 0.5: moderate effect; 0.8: strong effect). The results are given as mean ± standard deviation (SD) or as an odds ratio (OR) with 95% confidence interval (CI) (OR (95% CI)). p values of less than 0.05 were considered statistically significant. The analysis was performed with MedCalc version 11.5.1 (MedCalc Software, Mariakerke, Belgium) and Microsoft Excel.
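For reference, the pooled-SD effect size used above can be computed as follows. This is a sketch of the standard Cohen's d formula, not code from the study:

```python
import math
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d with pooled sample SD; interpret 0.2/0.5/0.8 as
    weak/moderate/strong effects, as in the text."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(pooled_var)
```

With two small samples of equal spread, e.g. [1, 2, 3] against [2, 3, 4], the pooled SD is 1 and d equals the raw mean difference (−1, a strong effect).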
Demographics.
Of the eleven patients with PD, six (54.54%) had OH, but only two (33.3%) of these patients presented with complaints compatible with OH. There were no significant differences in age (67.36 ± 11.73 versus 64.727 ± 11.867, p = 0.61) or gender between the control group and patients with PD. The ratio of men to women was 2.67:1 (eight men, 72.74%, and three women, 27.27%). The control group consisted of seven men (63.64%) and four women (36.36%), giving a ratio of men to women of 1.75:1 (Table 1). There were no significant differences in terms of L-dopa use, age, and gender between patients with and without OH (Table 2).
Relationship between TCD Findings and OH.
Patients with PD had lower values of bBFV, aBFV, %IBFV, and BHI than controls (Table 3). Individuals with OH had lower values of bBFV, aBFV, %IBFV, and BHI than controls (Table 4). PD patients without OH also had values of bBFV and aBFV that were significantly lower than in controls (Table 5).
When the results for PD patients with and without OH were compared, the former had higher values of bBFV and aBFV but lower values of %IBFV and BHI, although these differences were not statistically significant.
Discussion
The prevalence of OH in the study sample was 54.54%, and 33% of the patients were symptomatic. These figures agree with the prevalence reported in the literature of 40-60% among PD patients [6] with only 20% reporting some symptoms [20].
The present study has shown that PD patients have altered CVR compared with controls. To our knowledge, it is the first study to establish a correlation between the BHI and PD. Niehaus et al. [21], using TCD and the tilt-table test, reported a small increase in heart rate (HR) and a greater, more prolonged drop in arterial blood pressure (ABP) in PD patients who were tilted close to the upright position. However, no changes in cBFV were observed in these patients, whose cerebral autoregulation (CA) was similar to that of the control group. Angeli et al. [22] observed a hypotensive response to orthostatic stress, with intracranial vasodilation and lower diastolic pressure, in PD patients monitored with TCD during tilt-table testing. Gurevich et al. [12] compared CA and CVR in PD patients with multiple system atrophy (MSA) and pure autonomic failure (PAF) using TCD, the acetazolamide test, and the tilt-table test but failed to find any change in CVR. Our findings of altered CVR and lower cBFV agree with more recent studies that used TCD to analyze cerebral hemodynamics in PD patients [23,24]. Vokatch et al. [11] used TCD and thigh cuffs to assess CA and found striking differences in mBFV between controls and PD patients, especially after a reduction in blood pressure, providing strong evidence of impaired CA in patients with this disorder. Furthermore, L-dopa did not appear to influence the changes in these parameters. Using the cold pressor test, Tsai et al. [24] found similar changes in cBFV; however, they did not take into account whether their patients were using L-dopa or not. Bouhaddi et al. [25] used TCD and the tilt-table test to compare PD patients taking and not taking L-dopa and concluded that this medication could further impair autonomic control of heart rate and blood pressure.
Previously published studies of OH in PD patients [11,21,22,24] that investigated CA did not reach definitive conclusions about its impact on cBFV or the possible impact of the use of L-dopa on autonomic dysfunction [25].
In the present study, the values of bBFV, %IBFV, and BHI were lower in PD patients with OH than in controls. CVR can be estimated by measuring the change in cBFV in response to vasodilatory stimuli [14]. bBFV and aBFV represent cBFV at baseline and after a normally vasodilatory stimulus, respectively, and %IBFV is the relative difference between them [16]. Previous studies did not identify these differences even though they used methods that were theoretically similar to the method using breath-holding as a vasodilatory stimulus [16]. This probably occurred because these studies used 8% CO 2 instead of BHI.
The probable pathophysiological explanation for the TCD findings observed in the present study is that hemodynamically compromised tissue is supplied by arterioles that are already maximally, or near maximally, dilated. A stimulus that is normally vasodilatory is therefore unable to produce an adequate response [16]. It appears that OH patients have greater degeneration of the sympathetic nervous system, leading to significant hemodynamic impairment. Hence, all the cerebrovascular reserve capacity may be used up under basal conditions, and when an increase in blood supply is required these values cannot be compensated for, resulting in the changes observed in aBFV, %IBFV, and BHI, a drop in pressure and, consequently, OH. PD patients without OH probably have less severe autonomic impairment, which is reflected in a lower flow velocity under basal conditions [23,25]. Therefore, because CVR is less affected in these patients, their response to changes in blood supply requirements is normal and does not lead to OH. A similar hypothesis has already been proposed by Haubrich et al. [26], although they concluded that autoregulatory mechanisms in PD patients were the same as in healthy individuals.
NB (table note): PD = Parkinson's disease; OH = orthostatic hypotension; bBFV = basal mean blood flow velocity (cm/s); aBFV = mean post-apnea blood flow velocity (cm/s); %IBFV = percentage increase in mean velocity during breath-holding (%); BHI = breath-holding index.
This study has a number of limitations. Firstly, we assumed that L-dopa does not influence CVR in PD patients [11,23]. This could have been confirmed by carrying out two sets of tests, one with and the other without the medication. Secondly, as BHI and similar indexes [16] have only been tested on patients without any neurological condition to estimate CA, it would be useful to investigate these indexes in PD patients.
Lastly, as our sample was small, a study with more patients and controls should be carried out to confirm the conclusions.
We have shown that PD patients have abnormal cBFV, indicating that cerebral hemodynamic alterations may also be present in these patients. Individuals with PD and OH appear to have altered CVR and great difficulty in satisfying tissue requirements under nonbasal conditions, which could explain the clinical findings for these individuals. Nevertheless, further studies are required to confirm these results.
|
2016-05-15T10:48:44.148Z
|
2015-06-16T00:00:00.000
|
{
"year": 2015,
"sha1": "87228d03e55046c34fe3ae3c12148bda8d6c6cbf",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/pd/2015/523041.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f96c4dd2d5ef93de626368130c33e3edae89cf92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54593897
|
pes2o/s2orc
|
v3-fos-license
|
The Effect of Biofeedback Therapy on Hand Function and Daily Activities in Stroke Survivors
Background: Given the progressive increase in stroke incidence and the need for effective therapy, we studied the effectiveness of biofeedback therapy on hand function and activities of daily living performance in stroke survivors. Patients and Methods: In this randomized controlled trial, 24 participants (mean age 54.75 years) were divided randomly into experimental and control groups. Their affected hands were evaluated before and after the intervention using the Barthel index questionnaire, the Ashworth test, and goniometry of the elbow, wrist, and finger. Both groups received current occupational therapy intervention for 3 months, 3 sessions a week (each session 45 minutes). The experimental group additionally received 15 minutes of biofeedback therapy in each session. Results: The biofeedback-trained group showed a greater decrease in spasticity, significant increases in range of motion of the elbow (P < 0.001), wrist (P < 0.003), and finger (P < 0.001), and a significant increase in activities of daily living performance (P < 0.001). Conclusions: Biofeedback combined with routine occupational therapy promises to be more effective in stroke survivors.
Background
Stroke, the most common brain disorder, is the third leading cause of death [1,2] and is defined by consequences that persist for more than 24 hours [3]. The incidence of stroke is about 3 per 100,000 in the third and fourth decades of life, increasing to 300 per 100,000 during the eighth and ninth decades [4]. Its long-lasting and disabling consequences mark stroke as the third leading cause of disease-related death in the world [2]. The most common stroke-related disorders are hemiplegia, imbalance, incoordination, and spasticity, seen especially in the upper extremities [5]. Motor and psychomotor disorders lead to limb inactivation, additional paralysis and palsy, problems in the performance of activities of daily living, reduced personal activity and social participation, and ultimately greater dependency and decreased quality of life [6]. Many treatment procedures are available to deal with these problems, such as early medical intervention and rehabilitation programs, and a wide range of techniques and approaches are currently used in rehabilitation programs [7].
These techniques are only partially effective, especially for the upper extremity and the hand, which play a key role in the performance of activities of daily living (ADL). Moreover, recovery should occur by the 11th week after the stroke, because after that period the expectation of recovery is very low [8]. Given these facts, it is necessary to develop therapeutic techniques, or to combine different techniques, to improve and accelerate recovery of the upper extremity in stroke survivors [7].
One effective intervention in this field, electromyographic biofeedback, helps patients to control motor activities [9,10]. Combining biofeedback with traditional interventions has been studied in several experiments and has proved effective in improving the function of gross muscles (shoulder or leg muscles) [11]. While these studies focused on recovery of gross muscles [12-15], the fine hand movements that are critical in activities of daily living were ignored. Only one study has examined the effectiveness of biofeedback therapy on wrist-flexor spasticity, upper-extremity function, and ADL performance in patients with stroke [16].
The results indicated improvements in reducing spasticity, in upper-extremity function, and in joint range of motion [16]. However, there is not enough convincing evidence to support the effectiveness of biofeedback, and some reports are contradictory [17-19]. Furthermore, it is not clear whether these functional improvements can lead to better performance of activities of daily living.
Objectives
Because there are few if any studies on the effectiveness of biofeedback on hand function, and because previous results are controversial, we investigated the effectiveness of combining biofeedback therapy with occupational therapy exercises on the recovery of hand function and ADL performance in patients with stroke.
Patients and Methods
In this randomized controlled trial, the effects of biofeedback therapy added to current occupational therapy exercises were studied in 24 stroke patients (9 males and 15 females) in one setting, the Tabasom Rehabilitation Center (Tehran, Iran). Participants were selected based on the following inclusion criteria: (1) stroke diagnosed by a neurologist; (2) a score of 22 or more on the mini-mental state examination (MMSE); (3) a score of 2 or more on the modified Ashworth test of spasticity; (4) absence of accompanying disorders such as seizure, psychological disorders, hearing or visual problems, or orthopedic disorders of the upper extremities; (5) at least three months since the stroke; (6) interest in participating in the study; and (7) no hemianopia, Wernicke's aphasia, or global aphasia. The study was approved by the University of Social Welfare and Rehabilitation ethics committee. Informed consent forms were explained to, comprehended, and signed by the patients or their legal representatives before the experiment.
All participants were provided with an information sheet and assured that their participation was voluntary and that they could withdraw from the study at any stage of the process. Following consent, data were collected at times and on days convenient for the participants. All people with stroke who consented were included in the study. Subjects were blinded to the purpose of the study. Five tools were used for data collection. A questionnaire collected data on age, sex, hand dominance, affected side, post-stroke duration, and the duration of rehabilitation services received.
Folstein's mini-mental state examination (MMSE), with six subscales covering orientation, registration, attention, calculation, recall, and language and praxis, was used to estimate the patients' cognitive ability to participate in biofeedback therapy [20]. The modified Ashworth scale was used to measure the severity of spasticity in the affected hand [21-24]. This scale rates spasticity in different muscles from 0 (no increase in tone) to 4 (rigidity in flexion and extension). The active range of motion (ROM) of the upper-limb joints, including the elbow, wrist, and metacarpophalangeal joints, was then measured by goniometry. Elbow extension ROM was measured in the supine position. Because elbow ROM is measured from 150 degrees (full flexion) down to 0 degrees (full extension), the measured extension angle was subtracted from 150 so that an increase in ROM shows a positive trend. Wrist and finger ROM were measured in the sagittal plane.
Wrist ROM ranged from 0 to 70 degrees at full extension, and finger ROM reached 90 degrees at full extension.
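The elbow recording convention described above (subtracting the measured extension angle from 150 so that improvement trends positive) can be sketched as follows; the function name is ours, not from the paper.

```python
def elbow_rom_score(extension_angle_deg: float) -> float:
    """Re-express elbow extension so that improvement trends positive.

    Elbow goniometry runs from 150 deg (full flexion) down to 0 deg
    (full extension), so the measured angle is subtracted from 150,
    as described in the Methods.
    """
    return 150.0 - extension_angle_deg

# Hypothetical reading: an elbow that extends only to 117.5 deg scores
# 32.5, which is the experimental group's pre-intervention mean
# reported in the Results.
print(elbow_rom_score(117.5))  # 32.5
```

Wrist and finger angles need no such transformation, since their scales already increase toward full extension.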
Finally, the Barthel index (BI) was used to assess daily function and independence in 10 categories of activities: bowels, bladder, grooming, toilet use, feeding, transfers, mobility, dressing, stairs, and bathing. All assessments were repeated after the intervention period [25].
Participants were randomly assigned to the experimental or control group. Both groups received current occupational therapy, including muscle stretching, positioning, facilitation of normal movement patterns, facilitatory and inhibitory techniques, reflex-inhibiting patterns, facilitation of higher-level reflexes, and muscle-tone normalization. Participants in the experimental group additionally received biofeedback therapy for 10 minutes, for a total of 45 minutes per session. The intervention comprised three sessions a week for three months (36 sessions in total).
In biofeedback therapy, after the hand was stabilized on a table with a hand-rest, electrodes were placed over the bulk of the wrist extensor muscles and the lateral epicondyle of the humerus. Patients sat in front of a monitor and watched the trace of their muscular contraction. A threshold was adjusted so that, if the patient produced activity in the extensor muscles above the threshold, the machine played music; the patient thus received appropriate feedback about contraction of the targeted muscle as visual or auditory signals. The biofeedback device used in this research was the ProComp Infiniti 5-channel model, made in the USA.
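The threshold-triggered feedback logic of the protocol can be sketched in a few lines; the EMG values and threshold below are hypothetical, and a real device would of course sample and rectify the signal in hardware.

```python
def feedback_signal(emg_uv: float, threshold_uv: float) -> bool:
    """Return True (play the audio cue) when the rectified EMG amplitude
    from the wrist extensors exceeds the adjustable threshold, as in the
    protocol described above."""
    return emg_uv >= threshold_uv

# Hypothetical rectified EMG samples (microvolts) over one attempt.
samples = [12, 18, 25, 31, 28, 15]
threshold = 24
cues = [feedback_signal(s, threshold) for s in samples]
print(cues)  # [False, False, True, True, True, False]
```

Lowering the threshold as the patient improves keeps the cue attainable yet demanding, which is the usual rationale for making it adjustable.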
The collected data from the two groups were analyzed using SPSS 20. Descriptive statistics were used for quantitative and qualitative data, and the Kolmogorov-Smirnov test was used to check the normal distribution of the data. Baseline equality of variables between the two groups was checked using the independent t-test for quantitative variables and the χ² test for qualitative variables. Repeated-measures ANOVA was used to study changes in test scores in each group across consecutive assessments, and the mean scores across sequential testing were then compared within each group using the paired t-test (P < 0.01).
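The within-group pre/post comparison can be illustrated with a minimal standard-library sketch of the paired t statistic; the scores below are synthetic, and the actual analysis was run in SPSS (the repeated-measures ANOVA step would require a dedicated package and is omitted).

```python
import math
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> float:
    """Paired t statistic: mean of the per-subject differences divided by
    the standard error of those differences (sample stdev / sqrt(n))."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Synthetic Barthel-like scores for six subjects, illustration only.
pre = [60, 55, 70, 62, 58, 65]
post = [72, 66, 79, 75, 68, 77]
print(round(paired_t(pre, post), 2))  # 18.58
```

The resulting statistic would then be compared against the t distribution with n - 1 degrees of freedom at the study's significance level (P < 0.01).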
Results
As seen in Table 1, of the 24 participants (15 females and 9 males), 8 females and 4 males were assigned to the experimental group and the rest (7 females and 5 males) to the control group. Only 2 subjects were left-handed (both in the experimental group); the rest were right-handed. Thirteen subjects were affected on the left side of the body (right brain hemisphere) and 11 on the right side. Of the left-side-affected subjects, 6 were assigned to the experimental group and 7 to the control group; of the right-side-affected subjects, 6 were in the experimental group and the rest in the control group. On the modified Ashworth scale of spasticity, 4 patients in the experimental group were rated 2 and 8 were rated 3 before the intervention; in the control group, 3 subjects were rated 2 and 9 were rated 3. After the intervention, 8 subjects in the experimental group and 4 in the control group were rated 2, and the rest were rated 3.
Zahedan J Res Med Sci. 2015;17(10):e2204
The mean elbow ROM in the experimental group was 32.5 degrees before the intervention, increasing to 82.1 degrees afterward (an increase of approximately 50 degrees), while in the control group it rose from 17.5 degrees (pre-intervention) to 41.67 degrees (post-intervention), an increase of about 24 degrees in the mean.
In both groups, the intervention (occupational therapy alone or occupational therapy with biofeedback) increased ROM in the elbow, wrist, and finger joints, as shown in Table 2.
The mean wrist ROM in the experimental group increased from 13.75 degrees before the intervention to 60.83 degrees afterward, while in the control group it increased from 11.67 to 43.92 degrees.
The mean increase in finger ROM in the experimental group was 32.91 degrees (from 11.67 to 44.58 degrees), but in the control group it was only 7.92 degrees (from 5.83 to 13.75 degrees).
Post-intervention assessment showed an increase in ADL performance: the Barthel index score rose from 62.75 to 73.08 in the experimental group and from 60.08 to 63.5 in the control group. That is, occupational therapy combined with biofeedback produced a gain of more than 10 Barthel index points, whereas occupational therapy alone produced a gain of only about 3.4 points. The mean changes in elbow ROM after the intervention were 49.6 ± 36.02 degrees and 24.17 ± 28.47 degrees in the experimental and control groups, respectively. These data were analyzed using analysis of covariance: biofeedback therapy caused a significant increase in the elbow ROM of the patients with stroke (P = 0.001).
The mean post-intervention wrist ROM values in the experimental and control groups were 60.83 ± 15.79 degrees and 43.92 ± 20.12 degrees, respectively; covariance analysis of these data showed a significant increase in the wrist ROM of the experimental group (P = 0.003). Likewise, the mean post-intervention finger ROM values were 44.58 ± 23.88 degrees and 13.75 ± 27.48 degrees for the experimental and control groups, respectively; covariance analysis showed a significant effect of the biofeedback intervention on finger ROM (P = 0.001). Furthermore, the mean post-intervention Barthel index scores for the experimental and control groups were 73.08 ± 13.64 and 63.50 ± 9.99 raw points, respectively; covariance analysis of the patients' Barthel scores indicated the effectiveness of biofeedback therapy on activities of daily living performance (P = 0.001).
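The reported gains can be cross-checked against the pre/post means quoted in the Results; the quick arithmetic below uses only figures stated in the text and recovers the 49.6 and 24.17 degree elbow changes, as well as the "more than 10" versus roughly 3.4 point Barthel contrast.

```python
# Pre/post means quoted in the Results (experimental vs. control groups).
elbow_exp, elbow_ctl = (32.5, 82.1), (17.5, 41.67)
barthel_exp, barthel_ctl = (62.75, 73.08), (60.08, 63.5)

def gain(pre_post: tuple[float, float]) -> float:
    """Post-intervention mean minus pre-intervention mean."""
    pre, post = pre_post
    return round(post - pre, 2)

print(gain(elbow_exp), gain(elbow_ctl))      # 49.6 24.17
print(gain(barthel_exp), gain(barthel_ctl))  # 10.33 3.42
```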
Discussion
According to the results of our study, adding biofeedback training to current occupational therapy exercises in patients with stroke led to a significant decrease in upper-extremity spasticity. Significant increases were also observed in the range of motion of the elbow, wrist, and finger joints in the experimental group (who received biofeedback and occupational therapy) in comparison with the control group (who received only occupational therapy exercises). Furthermore, the increase in activities of daily living performance was markedly greater in the experimental group than in the control group. These findings demonstrate the effectiveness of biofeedback therapy when combined with occupational therapy exercises.
Hemiplegia is one of the most common consequences of stroke [26]; it impairs the performance of activities of daily living and decreases quality of life [27]. The rehabilitation team therefore focuses on achieving maximum independence in the activities of daily living of stroke survivors [28].
Occupational therapists use many different and alternative techniques, including biofeedback therapy, to reach these goals. This technique activates voluntary control of single muscles in patients with sensorimotor disorders. In addition, increased range of motion and decreased spasticity can improve ADL performance (if accompanied by active participation). Active participation in ADL requires the activity of various gross and fine muscles. While some studies have shown improvement of gross muscles after biofeedback therapy [15,29], there have been few if any studies of the effectiveness of biofeedback on the muscles involved in fine motor activities [7].
In the present research, data analysis showed a positive effect of biofeedback on the range of motion of the elbow, wrist, and finger in patients with stroke. Furthermore, the increase in ROM was coupled with improvement and greater ease in activities of daily living, as seen in the patients' Barthel index scores. Although Barthel index scores showed improved ADL performance in both the experimental and control groups, the recovery in the experimental group was significantly greater than in the control group. The data thus support the effectiveness of biofeedback therapy on ADL performance in the experimental group. Goniometry likewise indicated that occupational therapy improved range of motion in the control group, but the difference between the two groups was significant. In addition, although participants in both groups showed decreased spasticity (Ashworth score), many more patients in the experimental group improved from an Ashworth score of 3 to 2.
In accordance with these findings, the effectiveness of electromyographic biofeedback in functional recovery of the hand in hemiplegic patients has been reported [16,30,31]. One of these studies [16] compared biofeedback therapy with placebo biofeedback and found better recovery of active wrist ROM in the subjects who received real biofeedback. That study also assessed gripping a glass, a complex hand function; it improved in both groups, with no significant difference between them. According to the authors, this could be due to the psychological role of placebo biofeedback, which may act as a motivator for ADL performance; more studies are therefore needed to clarify these points. Furthermore, it has been reported [11] that applying electromyographic biofeedback to the upper extremities of hemiplegic stroke survivors decreased hyperactivity of the biceps brachii, the wrist and finger flexors, the thenar eminence, and the flexor synergists overall. In addition, this intervention optimized neuromuscular function and functional recovery when the treatment protocol was followed.
In the present study, many more subjects showed a decrease in spasticity after biofeedback therapy than in the control group. Similarly, a 2007 systematic review examined studies using electromyographic biofeedback on the upper extremities of stroke patients [32]. One of these studies showed a positive effect of electromyographic biofeedback combined with rehabilitation programs on shoulder ROM, and two others showed the effectiveness of such treatment on the functional ability of the upper extremities. Considering the present results together with previous studies, it can be concluded that biofeedback combined with routine occupational therapy can effectively improve ROM and reduce spasticity in the upper limbs of stroke survivors.
Stroke survivors suffer disability in activities of daily living, which in turn lowers their quality of life. According to the present findings, biofeedback therapy is a potent treatment modality for increasing upper-limb ROM and improving ADL performance, which can lead to greater independence and quality of life; these are among the key goals in the rehabilitation of patients with stroke.
Table 1. Characteristics of the Stroke Patients a,b
Table 2. The ROM in Elbow, Wrist and Finger Before and After Intervention in the Two Groups of Patients With Stroke a,b. a Abbreviations: Exp, experimental group; ROM, range of motion. b The values are presented as mean ± SD.
|
2018-12-04T04:24:43.858Z
|
2015-10-25T00:00:00.000
|
{
"year": 2015,
"sha1": "e0134d2193514f3bc9d76d9aa0085d781bcb61fc",
"oa_license": "CCBYNC",
"oa_url": "http://cdn.neoscriber.org/cdn/serve/71/de/71de6ee33d5b538b9ec4fa2088e4b2bdd58d915b/zjrms-17-2204.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "71de6ee33d5b538b9ec4fa2088e4b2bdd58d915b",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119619526
|
pes2o/s2orc
|
v3-fos-license
|
KAM for quasi-linear autonomous NLS
We consider a class of fully nonlinear Schrödinger equations and we prove the existence and the stability of Cantor families of quasi-periodic, small amplitude solutions. We deal with reversible autonomous nonlinearities and we look for analytic solutions. Note that this is the first result on analytic quasi-periodic solutions for fully nonlinear PDEs.
Introduction and Main Results
In this paper we prove the existence of Cantor families of quasi-periodic solutions for the following fully nonlinear Schrödinger equation
$$\mathrm{i}u_t - u_{xx} + f(u, u_x, u_{xx}) = 0, \qquad x \in \mathbb{T}. \tag{1.1}$$
The nonlinearity f is reversible, gauge preserving and x-independent (see Hypothesis 1.1) and has a zero of order three at u = 0, i.e.
$$f(u, u_x, u_{xx}) := f^{(3)}(u, u_x, u_{xx}) + g(u, u_x, u_{xx}) \tag{1.2}$$
where $f^{(3)}$ is homogeneous of degree three while g has a zero of order at least five. We consider two cases:
Case 1. g is analytic as a function $\mathbb{C}^3 \to \mathbb{C}$ on the ball of radius $r_0 > 0$. Then we fix $a > 0$ and extend (1.1) to $x \in \mathbb{T}_a$. Here $\mathbb{T}_a$ is the compact subset of the complex torus $\mathbb{T}_{\mathbb{C}} := \mathbb{C}/2\pi\mathbb{Z}$ with $\operatorname{Re}(x) \in \mathbb{T}$ and $|\operatorname{Im}(x)| \le a$.
Case 2. $g \in C^q(U_{r_0}, \mathbb{R}^2)$, where $U_{r_0}$ is the ball of radius $r_0$, in the real sense.
Since f vanishes to order 3 at u = 0, equation (1.1) can be seen, at least in a neighborhood of the origin, as a perturbation of the linear Schrödinger equation
$$\mathrm{i}u_t - u_{xx} = 0. \tag{1.3}$$
We work in the scale of sequence spaces $h_{\bar a, p}$; here $p \ge 1/2$, while $0 < \bar a \le a/2$ in Case 1 and $\bar a = 0$ in Case 2. Note that there is an isometric one-to-one correspondence between a sequence $\{u_k\}$ and a function $u = \sum_k u_k e^{\mathrm{i}k x}$ in $H^p(\mathbb{T}_{\bar a})$, i.e. the analytic functions on the complex strip $\mathbb{T}_{\bar a}$ that are p-Sobolev on the boundary. We use the same symbol $u \in h_{\bar a, p}$ to indicate both the sequence and the function. A natural question is whether equation (1.1) has periodic, quasi-periodic or almost periodic solutions close to zero, and more precisely solutions bifurcating from a periodic solution of (1.3). We recall that a quasi-periodic solution of (1.1) is an embedding $\varphi \mapsto v(\varphi, \cdot)$ together with a frequency vector $\omega_\infty \in \mathbb{R}^d$ such that $u(t,x) = v(\omega_\infty t, x)$ is a solution of (1.1) and $v(\varphi, x) \in H^p(\mathbb{T}^{d+1}_{\bar a})$. Note that both the embedding v and the frequency vector $\omega_\infty$ are unknowns of the problem. Proving the existence and stability of quasi-periodic solutions in infinite dimensional systems is a natural extension of KAM theory for lower dimensional tori [1]. The first KAM results for PDEs were obtained by Kuksin [2] and Wayne [3]. Such results were restricted to the case in which the spatial variable ranges in a finite interval with Dirichlet boundary conditions. In order to treat periodic boundary conditions, Craig-Wayne used a Lyapunov-Schmidt reduction method in [4], later generalized by Bourgain in [5], [6]. Other developments of KAM theory for PDEs can be found in [1], [7], [8], [9]. For extensions of KAM theory to higher spatial dimension we mention the papers by Bourgain: [6] for the nonlinear Schrödinger equation on $\mathbb{T}^2$ with a convolution potential and [10] for an existence result on $\mathbb{T}^d$.
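The space of sequences corresponding to functions that are analytic on a strip and Sobolev on its boundary is not written out explicitly here; a common choice of norm, which the following sketch evaluates on a truncated Fourier sequence, uses the exponential-polynomial weights $e^{2a|k|}\langle k\rangle^{2p}$ (the exact weights used in the paper may differ, so treat this as an assumed stand-in).

```python
import math

def h_ap_norm(coeffs: dict[int, complex], a: float, p: float) -> float:
    """Weighted l2 norm sqrt(sum_k e^{2a|k|} <k>^{2p} |u_k|^2), with
    <k> = max(1, |k|); finiteness for a > 0 corresponds to analyticity
    on the complex strip |Im x| <= a."""
    total = 0.0
    for k, uk in coeffs.items():
        jk = max(1, abs(k))
        total += math.exp(2 * a * abs(k)) * jk ** (2 * p) * abs(uk) ** 2
    return math.sqrt(total)

# Hypothetical truncated Fourier coefficients of u(x) = sum_k u_k e^{ikx}.
u = {-1: 0.5, 0: 1.0, 1: 0.5, 2: 0.25}
print(h_ap_norm(u, a=0.1, p=0.5))
```

Widening the strip (increasing a) inflates the weights on the high modes, which is why the norm grows with a for any sequence with nonzero tail.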
We mention also the remarkable results by Berti and Bolle [11], [12], which study equations in the presence of a more natural multiplicative potential. The latter approach, based on a multi-scale analysis, has been very fruitfully exploited in the study of PDEs on manifolds different from tori: in [13] Berti, Corsi and Procesi studied NLW and NLS on compact Lie groups and homogeneous manifolds. There are very few and recent results on reducibility on tori. We mention Geng-You [14] for the smoothing NLS, Eliasson-Kuksin [15] for the non-resonant NLS, and Procesi-Procesi [16] for the completely resonant NLS, which involves deep normal form arguments developed in [17], [18]. All the aforementioned papers, whether using KAM or multi-scale methods, concern semi-linear PDEs with no derivatives in the nonlinearity.
More recently, KAM theory has been developed also for dispersive semilinear PDEs on the one-dimensional torus when the nonlinearity contains derivatives of order $\delta \le n - 1$, where n is the order of the highest derivative appearing in the linear constant-coefficient term. The additional difficulty in this case is that, due to the presence of derivatives in the nonlinearity, the KAM transformations used to diagonalize the linearized operator might be unbounded. The key idea to overcome this problem was introduced by Kuksin in [19] in order to deal with non-critical unbounded perturbations, i.e. $\delta < n - 1$, with the purpose of studying KdV-type equations; see also [20]. The key observation is that the linear frequencies of KdV have good separation properties, which allow one to control derivatives in the nonlinearity up to second order. This approach, developed for KdV, which has a strong dispersion law, has been further exploited by the Chinese school to cover the "less dispersive" case of NLS in the presence of one derivative in the nonlinearity, i.e. the critical case $\delta = n - 1$. In particular we mention Zhang, Gao and Yuan [21] and Liu and Yuan [22] for the derivative NLS, and Berti-Biasco-Procesi [23]-[24] for the derivative NLW.
Concerning quasi-linear or fully nonlinear PDEs, i.e. $\delta = n$, we quote the papers by Iooss-Plotnikov-Toland [25] and by Baldi [26], [27], which study the existence of periodic solutions. The first existence results for quasi-periodic solutions of quasi-linear PDEs were obtained by Baldi-Berti-Montalto in [28] for the forced case, and then in [29] for the autonomous one; see also [30]. Recently such results have been extended to the forced NLS in [31] (reversible setting) and [32] (Hamiltonian setting), to the water wave equation in [33], and to the Kirchhoff equation in [34]; see also [35].
We remark that all the aforementioned papers on fully nonlinear PDEs provide existence and stability of quasi-periodic solutions with Sobolev regularity, even when the nonlinearity g is an analytic function. This is due to the strategy proposed in those papers for dealing with quasi-linear and fully nonlinear perturbations. Moreover, all the results above require a Hamiltonian structure in the case of autonomous systems. In [36] we discussed a general strategy for dealing with both Hamiltonian and reversible equations, treating the analytic and finite-regularity cases in a unified way. We remark that the abovementioned paper contains only applications to semi-linear PDEs.
The aim of this paper is to apply the strategy of [36] to an autonomous fully nonlinear NLS and to prove the existence of analytic solutions (for completeness we also give the result for Sobolev solutions, when the nonlinearity has only finite regularity). In order to avoid the complications coming from double eigenvalues we decided to work in a reversible setting. The first difficulty we have to overcome, before applying any quadratic scheme, is that the equation is completely resonant, i.e. all the solutions of (1.3) are periodic; clearly, in order to prove the existence of quasi-periodic solutions we need some non-degeneracy hypothesis on f (for instance f = 0 is not acceptable!), more precisely on its leading term $f^{(3)}$. Let us first state our reversibility hypotheses explicitly. Hypothesis 1.1. Assume that f is such that (i) $f(-\eta_0, \eta_1, -\eta_2) = -f(\eta_0, \eta_1, \eta_2)$.
We are now ready to state our main theorem on the existence of quasi-periodic solutions with d frequencies, which is based on the following "genericity" condition. Definition 1.2 (Genericity). For any finite $d \in \mathbb{N}$, given a non-trivial polynomial $P: \mathbb{C}^d \to \mathbb{C}$, we say that $x_0 \in \mathbb{C}^d$ is "generic" if $P(x_0) \neq 0$. Theorem 1.3. Consider equation (1.1) in Case 1, i.e. g in (1.2) is an analytic function. Assume Hypothesis 1.1 and moreover that $(a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8)$ is not resonant. There exists a non-trivial polynomial such that, for any $d \in \mathbb{N}$ with $d > 2$ and for any choice of $v_1, \ldots, v_d \in \mathbb{N}$ such that $x_0 = (v_1, \ldots, v_d)$ is generic with respect to the polynomial, the following holds.
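Definition 1.2 can be made concrete with a toy sketch. The actual polynomial of Theorem 1.3 is not given explicitly in the text, so a hypothetical stand-in P is used below: the non-generic site choices are exactly the integer tuples lying on its zero set, a co-dimension one algebraic manifold.

```python
from itertools import product

def is_generic(P, x0, tol: float = 1e-12) -> bool:
    """x0 is 'generic' for the non-trivial polynomial P if P(x0) != 0."""
    return abs(P(*x0)) > tol

# Toy stand-in for the (unspecified) polynomial of Theorem 1.3.
P = lambda v1, v2, v3: v1 + v2 - 2 * v3

# Keep only the generic choices of three sites in {1, ..., 4}.
sites = [s for s in product(range(1, 5), repeat=3) if is_generic(P, s)]
print((1, 2, 3) in sites)  # True:  P(1, 2, 3) = -3 != 0
print((1, 3, 2) in sites)  # False: P(1, 3, 2) = 0, a non-generic tuple
```

Since the excluded set is the zero locus of a non-trivial polynomial, almost every choice of sites is generic, which is the sense in which the theorem applies to "most" tangential sites.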
There exist $a = a(d, f)$, $\varepsilon_0 = \varepsilon_0(d, f)$ and $c = c(d, f) \ll 1$ such that for all $\varepsilon \in (0, \varepsilon_0)$ there exists a Cantor set such that for all $\xi$ in this set the NLS equation (1.1) has a quasi-periodic solution with frequency $\omega_\infty$, given by the embedding $v(\xi) \in H^1(\mathbb{T}^{d+1})$, with M an invertible matrix. Moreover one has $v(\varphi, x) = -v(\varphi, -x)$ and $v(\varphi, x) = \overline{v(-\varphi, x)}$, and the solution is linearly stable. Remark 1.4. As far as we know, Theorem 1.3 is the first result on analytic quasi-periodic solutions for a fully nonlinear partial differential equation. Some of the key ideas follow closely those of [28], [31], etc.; however, in order to prove the existence of analytic solutions we have to modify that strategy in various non-trivial ways, which we shall illustrate in the next paragraph. While our approach can surely be applied to other equations, such as the KdV equation, with very little modification, it does not seem at all straightforward to generalize it to the water wave equation [33].
In the case of finite regularity we have a similar result.
Theorem 1.5. Consider equation (1.1) in Case 2. For any $d \in \mathbb{N}$ with $d > 2$ there exists $q = q(d)$ such that, for any nonlinearity $f \in C^q$ that satisfies Hypothesis 1.1 and such that $(a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8)$ is not resonant, there exists a non-trivial polynomial such that, for any choice of $v_1, \ldots, v_d \in \mathbb{N}$ generic with respect to the polynomial, the following holds.
The proofs of our two results are very similar, and we shall concentrate on the more difficult and novel analytic case.
Remark 1.6. In stating our theorems we put some effort into distinguishing the non-resonance conditions on $(a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8)$ from the genericity conditions on $v_1, \ldots, v_d$. Informally, we are saying that all non-resonant equations of the form (1.1) have many quasi-periodic solutions, and that for most choices of d sites $v_1, \ldots, v_d$ there exist quasi-periodic solutions essentially supported on those sites for all times. For example, given any choice of $(a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8)$ such that $a_1 = 0$, the genericity condition can be verified by removing only a co-dimension one algebraic manifold (which may depend on the choice of the parameters $a_i$) in $v_1, \ldots, v_d$.
It may be possible that for some choices of the a_i one does not need to impose any further genericity condition, however we have not investigated this question. Indeed, even for equations with no derivatives in the non-linearity, such as the quintic NLS, it can happen that, for specific choices of the sites S, the behavior of the solutions of the non-linear equation differs drastically from that of the linear equation (see for instance [37] or [38]). In order to avoid such phenomena one has to restrict to "generic" choices of S, in the sense of Definition 1.2.
Remark 1.7. It is possible that our result can be further refined in order to prove existence of quasi-periodic solutions also for some resonant choices of the a_i, however some conditions on these parameters are necessary. Indeed it is not possible that all equations of the form (1.1) have quasi-periodic solutions, as can be seen in the following example. We start by considering a linear Schrödinger equation iv_t − v_xx = 0 and writing v = u + |u|²u; then we deduce the equation for u. We have iv_t − v_xx = (1 + 2|u|²)(iu_t − u_xx) + u²(iū_t − ū_xx) + 4|u_x|²u + 2u_x²ū = 0, and after some computations we get an equation which has the form (1.2) with f^{(3)} = −4|u_x|²u − 2u_x²ū + 2u²ū_xx, and which satisfies Hypothesis 1.1. Now evidently all the small solutions of this equation are periodic (since the map u → v is invertible close to zero), and indeed it turns out that such a choice of f^{(3)} corresponds to resonant a_i, namely a_1 = a_2 = a_4 = a_5 = 0, a_3 = −4, a_6 = 2, a_7 = −2, a_8 = 0.
Description of the paper and strategy of the proof. Since the proof involves many different arguments we explain how the paper is organized.
Weak Birkhoff normal form. As a preliminary step one looks for an approximate solution of the NLS (1.1), which will be the starting point of an iterative algorithm. Hence in Section 3 we find a better approximate solution. One first rewrites the NLS as a dynamical system u̇ = χ(u) = Σ_{j∈Z} χ_j(u)∂_{u_j} (1.10). The frequency vector is then required to satisfy appropriate non-resonance conditions, for some τ > d and γ ∼ O(|ξ|). Now we need to control the linearized operator in the z directions; the leading terms are those coming from the terms of order O(v²z) (which we have not removed from Π_{S^c}χ). The resulting matrix, denote it by Ω(θ, ξ), is of order O(|ξ|), hence in principle not perturbative w.r.t. γ. We discuss this in Proposition 7.70 of Section 7.3, where we study the invertibility of the linearized operator in the normal directions.
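The non-resonance condition alluded to above, "for some τ > d and γ ∼ O(|ξ|)", is presumably a first-order Melnikov/Diophantine condition; its display was lost in extraction. A standard form in this literature (the exact exponents and constants are an assumption) is:

```latex
% Standard Diophantine condition on the frequency vector
% (hedged: the precise form used in the paper is not recoverable here)
|\omega(\xi)\cdot \ell| \;\ge\; \frac{\gamma}{\langle \ell\rangle^{\tau}},
\qquad \forall\, \ell \in \mathbb{Z}^{d}\setminus\{0\},
\qquad \langle \ell\rangle := \max(1,|\ell|),
```

with τ > d and γ ∼ |ξ|, consistent with the parameters named in the text.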
A crucial point is the so-called "twist" condition with respect to the parameters ξ. What we need to check is that if one "moves" the initial data ξ then the frequencies move in a non-trivial way. We have said that the map ξ → ω(ξ) is a diffeomorphism; this is the so-called twist condition. In order to perform our scheme we also need a twist condition in the normal directions, namely on the average in θ of Ω(θ, ξ). The analysis of this last issue is performed in Lemmata 9.87 and 9.88 in Section 9. Note that this is a delicate question, since we are requiring a modulation of infinitely many normal frequencies by only finitely many parameters. The analysis would be much simpler if one considered a fully nonlinear perturbation, of order at least four, of the cubic integrable NLS. In such a case, for any choice of the tangential sites in S, one would obtain that the map ξ → ω(ξ) is a diffeomorphism by exploiting the integrability properties of the system. Here we need to introduce a notion of "genericity" (see Definition 1.2) which implies that for "most" choices of the cubic terms and "most" choices of the tangential sites the frequencies satisfy a "twist" condition. Interestingly, we can produce explicit non-generic choices of cubic non-linearities such that for any choice of tangential sites the twist condition is false. In particular, it turns out that the Jacobian of the map ξ → ω(ξ) then has rank at most 2. It would be interesting to investigate whether quasi-periodic solutions exist in such "degenerate" cases, see also Remark 1.7.
KAM scheme. Once we are in the setting of (1.11), we wish to apply the Abstract KAM theorem of [36]. Such a theorem gives an explicit (if complicated) set of parameters ξ (denoted by O^{(∞)}) for which quasi-periodic solutions of (1.11) exist, provided that G is tame and satisfies some smallness conditions at least close to the approximate invariant torus. In Sections 5 and 6 we first introduce the necessary notation, then state Theorem 6.2 and verify that all the hypotheses are fulfilled in our setting. We refer to the introduction of [36] for a detailed description of the strategy. The theorem is mostly just a quadratically convergent iterative scheme which produces a sequence of changes of variables H_n such that (H_n)_⋆F(θ, 0, 0) tends to zero (among other properties). This means that (H_n)^{−1}(θ, z = 0, y = 0) is an approximately invariant torus, with better and better approximation; we call this object an approximate solution.
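The quadratic convergence of the scheme can be summarized by a standard estimate; the following is a sketch of the typical shape of such bounds (the exponents and the choice K_{n+1} = K_n^{3/2} are assumptions, not the paper's exact statement):

```latex
% Quadratic Nash--Moser/KAM iteration: if \varepsilon_n measures the size
% of (H_n)_\star F(\theta,0,0), a typical tame estimate has the shape
\varepsilon_{n+1} \;\le\; C\, K_n^{\mu}\, \varepsilon_n^{2},
\qquad K_{n+1} = K_n^{3/2},
```

so that ε_n → 0 super-exponentially provided ε_0 K_0^μ is small enough.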
The remainder of the paper is devoted to proving that the set O^{(∞)} is non-empty. Such a set is explicitly defined in the Theorem as the intersection of the sets O^{(n)} on which one has appropriate tame estimates on the inverse of the linearized vector field at the n-th approximate solution, see Definition 6.43. We have to show that the O^{(n)} have positive measure and that the same holds for their intersection.
Let us denote the linearized vector field at the n-th approximate solution by L_n. The strategy in [28], [29], [31], etc. is to prove the bounds on L_n by constructing a bounded change of variables Q_n which approximately diagonalizes it, and then imposing the invertibility of L_n by assuming bounds on the eigenvalues and controlling the norm of Q_n, Q_n^{−1}. In turn, the fact that the diagonalizing change of variables exists is ensured by assuming bounds on the differences of the eigenvalues and by exploiting the fact that L_n is a pseudo-differential operator. This results in an explicit description of O^{(n)} in terms of first and second order Melnikov conditions on the eigenvalues.
This strategy, however, has a serious problem in the analytic setting. The change of variables which diagonalizes L_n in the analytic case is NOT bounded from the space into itself, but loses some of the analyticity radius.
This can be seen from the following example. One of the changes of variables used in order to diagonalize is z(x) → (T_β z)(x) := z(x + β(x)). This change of variables is used in order to conjugate L_n to an operator whose principal term (the term containing the highest derivatives) is diagonal. Now it is evident that this operator maps H^p(T) into itself, but one cannot expect the same to hold for H^p(T_a), where the radius of analyticity should be reduced by ∼ |β|. This means that at each step n we lose some analyticity, of the order of the corresponding β_n. Now, in the strategy of [28] etc., the β_n are all small but more or less all of the same size, so that the algorithm would collapse after a finite number of steps.
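The loss of analyticity of T_β can be made heuristically quantitative; the following sketch (our notation, not the paper's precise statement) records the mechanism:

```latex
% If z is analytic and bounded on the strip |\mathrm{Im}\,x| \le a, then
% x \mapsto z(x+\beta(x)) need only be analytic on a smaller strip, since
% \mathrm{Im}\,(x+\beta(x)) can be as large as |\mathrm{Im}\,x| + |\beta|.
% Schematically,
T_\beta : H^p(\mathbb{T}_a) \longrightarrow H^p(\mathbb{T}_{a-\delta}),
\qquad \delta \sim |\beta| ,
```

which is exactly the reduction of the analyticity radius by ∼ |β| described in the text.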
In order to overcome this difficulty we reason as follows. In [36] we have shown that, in performing the iterative scheme which produces the changes of variables H_n and the approximate solutions, we can apply any change of variables which does not ruin our approximation procedure (namely, which maps an approximately invariant torus into an approximately invariant torus); we call such changes of variables compatible, see Definition 6.44. With this fact in mind, at each step we apply to (H_n)_⋆F the change of variables T_{α_n} which conjugates L_n to an operator whose principal term is diagonal. Note that we can apply the changes of variables T_α due to the fact that they preserve the pseudo-differential structure, which we specify in Definition 5.29. In this way our algorithm is closed; moreover, not only does (H_{n+1})_⋆F(θ, 0, 0) become smaller at each step, but also the principal term of L_{n+1} becomes closer to being diagonal. This means that |α_{n+1}| ≪ |α_n| and our loss of analyticity converges.
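The convergence of the total analyticity loss can be sketched as follows (the super-exponential decay rate for |α_n| is an assumption, modeled on the quadratic scheme):

```latex
% Since |\alpha_{n+1}| \ll |\alpha_n| along the quadratic scheme, the
% radii of analyticity at each step satisfy, schematically,
a_{n+1} = a_n - C\,|\alpha_n|, \qquad |\alpha_n| \lesssim \varepsilon^{(3/2)^n},
% so \sum_n |\alpha_n| < \infty and a_n \downarrow a_\infty > 0:
% a positive radius of analyticity survives in the limit.
```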
In Section 7 we first discuss various types of changes of variables (from which we shall choose the compatible changes of variables explained above). Then we show how to approximately diagonalize the resulting operator and deduce the estimates on the inverse of a matrix from Melnikov conditions on the eigenvalues.
In section 8 we use the results of the previous sections in order to define recursively the compatible changes of variables L_n. Then we show that the sets O^{(n)} can be expressed in terms of Melnikov conditions on the eigenvalues. Finally, in section 9 we compute the measure of the sets O^{(n)} and of their intersection.
Functional setting
In this Section we introduce the functional spaces on which we work. Moreover we analyze in a specific way the rôle of the "reversibility" condition and how we use it in Theorems 1.3 and 1.5.
Scales of Sobolev spaces
For the analytic context we introduce the space of analytic functions that are Sobolev on the boundary, for a > 0 and some b ≥ 1. Clearly the space H^s(T^b_a) is in one-to-one correspondence with a sequence space; we denote the space of sequences by h^{a,p} (see (1.4)). If the parameter a = 0, then we denote by H^p(T^b; C) the usual Sobolev space of functions defined on T^b.
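The displays defining these spaces were lost in extraction; the standard convention consistent with the notation h^{a,p} (the exact weights are an assumption) is:

```latex
% Functions analytic on the strip |\mathrm{Im}\,x| < a, Sobolev on its
% boundary, identified with exponentially decaying Fourier coefficients:
\|u\|_{a,p}^{2} \;:=\; \sum_{j} e^{2a|j|}\,\langle j\rangle^{2p}\,|u_j|^{2},
\qquad
h^{a,p} := \bigl\{ u = (u_j)_{j} \;:\; \|u\|_{a,p} < \infty \bigr\},
\qquad \langle j\rangle := \max(1,|j|).
```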
In order to prove Theorems 1.3 and 1.5 it is convenient to study the equation as a dynamical system on the phase space H^1(T_a; C) (or H^1(T; C) in the Sobolev case), i.e. to look for u(t) ∈ H^1(T_a; C) quasi-periodic in t. In order to distinguish these two cases, for the autonomous system in the paper we shall use the equivalent notation h^{a,p} to denote the functions in H^p(T_a; C). We shall write H^s(T^{d+1}; C) to denote functions v(ϕ, x) defined for (ϕ, x) ∈ T^{d+1}.
Due to the complex nature of the NLS we need to work on product spaces, which we will usually denote in bold. We also write H^s_x to denote the phase space of functions in H^s. On the product spaces H^s we define, with abuse of notation, the corresponding norms. For a function f : Λ → E, where Λ ⊂ R^n and (E, ‖·‖_E) is a Banach space, we define the sup norm and, for γ > 0, the weighted Lipschitz norm. In the paper we will work with parameter families of functions in H^s. If one deals with a parameter family u = u(λ) ∈ Lip(Λ, H^s), where H^s = H^s × H^s and Λ ⊂ R^d, we simply write ‖f‖_{H^s,γ} := ‖f‖_{s,γ}, or ‖u‖_{s,p,γ} in the analytic context. All the discussion above holds for the product space h^{a,p} := h^{a,p} × h^{a,p}. Throughout the paper we shall also write a ≤_s b ⇔ a ≤ C(s)b for some constant C(s) > 0.
Moreover, to indicate unbounded or regularizing spatial differential operators we shall write O(∂_x^p) for some p ∈ Z; more precisely, we say that an operator is O(∂_x^p) if it maps H^{s+p} into H^s. Clearly, if p < 0 the operator is regularizing. Now we define the subspaces of trigonometric polynomials and the corresponding orthogonal projection. These definitions can be extended to the product spaces in (2.2) in the obvious way. We have the following classical result.
Lemma 2.8. For any s ≥ 0 and ν ≥ 0 there exists a constant C := C(s, ν) such that (2.9) holds. We omit the proof of the Lemma, since the bounds (2.9) are classical estimates for truncated Fourier series, which hold also for the norm in (2.6) and in the analytic case. For any u := (u^+, u^−) ∈ h^{a,p} := h^{a,p} × h^{a,p},
we consider the dynamical system (2.10), where f_± are defined in such a way that, on the subspace U := {u^+ = ū^−}, the system (2.10) is equivalent to (1.1). Essentially one looks for an extension such that (f_+, f_−) = (f, f̄). If f is analytic this extension is completely standard: indeed one may Taylor expand f as a totally convergent series in u, ū (and their derivatives). In the C^q case this requires some care; see Section 1 in [31] for more details. Here the notation for vector fields is the one introduced in (1.10). Note that the map F : h^{a,p} → h^{a,p−2} defined by the non-linearity is a composition operator. This implies that the linearized operator at some u has the form (2.13). Thus χ linearized at any u has a very special multiplicative structure, namely on U it acts on functions h by multiplication operators.
Reversible structure.
Consider the involution defined in (2.14). By Hypothesis 1.1 it turns out that equation (2.10) is reversible with respect to the involution (2.14); hence the subspace of "reversible" solutions is invariant. Actually we look for odd reversible solutions, i.e. u which satisfy (2.15) and u(t, x) = −u(t, −x). Hence we choose as phase space of (2.10) essentially the couples of odd functions in H^p(T_a). Then (2.15) reads u(t, x) = ū(−t, x). It shall be convenient to introduce also the following spaces of odd or even functions in x ∈ T, for all s ≥ 0. Note that odd reversible solutions mean u ∈ X^s; moreover, an operator reversible w.r.t. the involution S maps X^s to Z^s. Definition 2.9. We denote with bold symbols the product spaces G^s. We denote by H^s_x := H^s(T) the Sobolev spaces of functions of x ∈ T only, and similarly for the subspaces G^s_x and G^s_x.
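The display (2.14) defining the involution is not recoverable from this extract; a standard choice in the literature on reversible NLS on the doubled phase space, consistent with the surrounding statements, is (this is an assumption):

```latex
% A typical involution on the doubled variables (assumed, not verbatim):
S : (u^{+}, u^{-}) \longmapsto (u^{-}, u^{+}),
% so that on U = \{u^{+} = \overline{u^{-}}\} reversibility of the field,
% \chi \circ S = - S \circ \chi, selects solutions with
u(t,x) = \bar u(-t,x),
```

which matches the way (2.15) is stated in the text.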
Remark 2.10. Given a family of linear operators A(ϕ) : H^s_x → H^s_x for ϕ ∈ T^d, we can associate to it an operator A : H^s(T^{d+1}) → H^s(T^{d+1}) by considering each matrix element of A(ϕ) as a multiplication operator. This identifies a subalgebra of linear operators on H^s(T^{d+1}). An operator A in the subalgebra identifies uniquely its corresponding "phase space" operator A(ϕ). With reference to the Fourier basis this subalgebra is called that of "Töplitz-in-time" matrices (see formulae (7.7), (7.8)).
Recalling the definitions (2.17), we set the following. Definition 2.11. An operator R : H^s → H^s is "reversible" with respect to the involution (2.14) if it anti-commutes with the induced involution; we say that R is "reversibility-preserving" if it commutes with it. In the same way we define the notions for an operator A. Remark 2.12. Note that, since X^s = (X^s × X^s) ∩ U, Definition 2.11 guarantees that a reversible operator preserves also the subspace U, namely it maps (u, ū) ∈ U into U.
Weak Birkhoff Normal Form
In this Section we select a finite dimensional subspace "approximately" invariant for the system (2.10), from which the solution of the full system will bifurcate. This procedure is necessary since the NLS equation is completely resonant near u = 0: in other words, all the solutions of the linear equation are periodic. Let us fix some notation. Given a finite set of distinct numbers {j_1, . . . , j_d}, we choose S^+ = {v_1, . . . , v_d} ⊂ N as above and denote by v = Π_S u the tangential variables and by z = Π_S^⊥ u the normal ones. For a finite dimensional subspace E := span{e^{ijx} : |j| < C}, C > 0, we denote by Π_E its L² projector.
As notation, we will also indicate by R(v^q z^r) a homogeneous polynomial, with M a (q, r)-multilinear operator in the variables v_±, z_±.
Definition 3.13. For any natural k consider a 2k-uple (j_1, . . . , j_{2k}) ∈ Z^{2k}. We say that it is a k-resonance if the resonance relations hold. We say that a k-resonance is trivial if j_i = j_{i+1}, up to a permutation of the {j_{2l}}_{l=1}^{k}. We say that a 2k-uple is non-resonant if it is not a k-resonance. Lemma 3.15. For S generic, there are no non-trivial 3-resonances with at least five points in S. Proof. We just need to exhibit the polynomial which gives such a genericity condition.
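The resonance relations in Definition 3.13 were dropped in extraction; for NLS-type equations with linear frequencies j² the standard conditions (an assumption on the paper's exact sign conventions) are:

```latex
% k-resonance: conservation of momentum and of linear frequencies
\sum_{i=1}^{2k} (-1)^{i+1}\, j_{i} \;=\; 0
\qquad\text{and}\qquad
\sum_{i=1}^{2k} (-1)^{i+1}\, j_{i}^{2} \;=\; 0 .
```

Trivial resonances then correspond to the indices pairing off, consistent with the definition in the text.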
Given (j_1, . . . , j_n) ∈ Z^n for some n, we say that (j_1, . . . , j_n) ∈ A_l if at most l of the j_i are not in S.
Note that for each fixed n the set of (j_1, . . . , j_n) ∈ A_1 is finite.
where Ψ is a finite rank map. The map Φ(u) is defined for all¹ u ∈ h^{a,p} such that ‖u‖_{a,p_1} ≤ ε_0, and satisfies the bounds (3.3) for all u with ‖u‖_{a,p_1} ≤ ε_0. Actually Ψ has a tame modulus in the sense of [39], namely it respects interpolation bounds also for the higher order derivatives. Finally, Φ preserves h^{a,p}_odd ∩ U, and the new vector field Φ_*χ := Υ restricted to U is reversible and has the form (3.4).
h^{(>5)} collects all terms with degree greater than 5, and for² u ∈ U the stated identities hold. Proof. Now consider the equation (1.1). As notation, for a vector field F we denote by F^{j_1 ... j_{2k+1}}_j the coefficient of the corresponding monomial. We divide the proof into steps.
¹ Note that ε_0 is fixed in terms of r_0 and a, p_1.
² Extending polynomials outside U is trivial: just apply u → u^+ and ū → u^−.
while (3.6) comes from f. The other terms collect respectively the parts of degree 5 and > 5 of g. We want to eliminate from χ all the cubic monomials such that the list (j_1, j_2, j_3, j) ∈ A_1 ∩ N. Note that this is a finite set of monomials.
We define the transformation Φ^{(1)} as the time-1 flow map generated by the vector field F_3 in (3.7). By construction the transformation Φ^{(1)} has finite rank. Moreover, the vector field F_3 in (3.7) is reversibility-preserving. By construction, in the push-forward of the vector field under Φ^{(1)} every monomial is such that the corresponding list either is a trivial resonance, or is not in A_1, or has j ∈ S and at least two among j_1, j_2, j_3 in S^c (see the second summand in B_1^σ in (3.5)). The trivial resonances in A_1 give A(u); all the other terms contribute either to B_1 or to Q. More explicitly, in this way the system u̇ = Y(u) possesses an invariant subspace H_S, and its dynamics is integrable and, as we will see, non-isochronous.
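The generator F_3 is determined, as usual in Birkhoff normal form, by dividing the coefficients to be removed by the corresponding divisor; the following is a sketch of the standard construction (our notation, assuming the conventions of cubic NLS normal forms):

```latex
% For a cubic monomial u^{\sigma}_{j_1} u^{-\sigma}_{j_2} u^{\sigma}_{j_3}
% \partial_{u^{\sigma}_{j}} of \chi with (j_1,j_2,j_3,j) non-resonant,
% the corresponding coefficient of the generator is, schematically,
(F_3)^{\,j_1 j_2 j_3}_{\,j}
  \;=\; \frac{\chi^{\,j_1 j_2 j_3}_{\,j}}
             {\,\mathrm{i}\,\bigl(j_1^{2} - j_2^{2} + j_3^{2} - j^{2}\bigr)\,},
% well defined since the denominator is a non-zero integer off resonances.
```

Since only finitely many monomials are removed, the resulting map has finite rank, in agreement with the text.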
In order to enter a perturbative regime we need to cancel further terms from the vector field. In particular, we look for a transformation Φ^{(2)} such that the field Υ := Φ^{(2)}_*Y does not contain monomials u^σ_{j_1} u^{−σ}_{j_2} u^σ_{j_3} u^{−σ}_{j_4} u^σ_{j_5} ∂_{u^σ_j} such that the list (j_1, j_2, j_3, j_4, j_5, j) ∈ A_1 ∩ N; as in degree three, this is a finite set of monomials. Φ^{(2)} is the time-1 flow of a vector field F_5 of the corresponding form. Again by construction Φ^{(2)} has finite rank. Moreover, since Y is reversible, F_5 is reversibility-preserving.
Hence Υ contains only monomials such that (j_1, j_2, j_3, j_4, j_5, j) either is a resonance or is not in A_1. By Lemma 3.15 all the resonances in A_1 are trivial, and hence contribute to the first summand in B_1. Now we perform the last step, in order to cancel the remaining unwanted terms out of Y. For the tame estimates (3.3) we refer to [39].
Then (see Lemma 7.1 of [29] for a detailed proof) the linearized operator splits into two parts, where the first term is described in (2.13), while the second, for some fixed N, is a remainder R. Here a^{(m)}, b^{(m)} are functions in h^{a,p} depending on q and such that, for any m, one of a^{(m)}, b^{(m)} is equal to e^{ijx} for some j ∈ E. In other words, R is a linear operator, sum of two terms: one maps Π_E h^{a,p} into h^{a,p}, and the other maps h^{a,p} into Π_E h^{a,p}.
Action-angle variables
In the previous paragraph we have worked in the Fourier basis and we have shown that the change of variables preserves h^{a,p}_odd. Now we restrict our vector field to h^{a,p}_odd defined in (2.16). In the present setting, however, it will be more convenient to express h^{a,p}_odd over N by passing to the "sine" Fourier basis. We want to switch the tangential variables to polar coordinates; this is a well-defined, analytic change of variables for ξ_i > 0, |y_i| < ξ_i. Our phase space is hence (see (2.16)) a sequence space indexed by S^c := N \ S^+, with a Hilbert structure w.r.t. the norm in (4.2). In the new variables U becomes the set in (4.3). For ε small we consider ξ ∈ ε²Λ = ε²[1/2, 3/2]^d and the domain in (4.4). One can check that there exist constants c_1 and c_2 such that, if (4.5) holds, then one has Φ_{(ξ)} : D_{a,p+ν}(s_0, r_0) → B_{ε_0}, where the vector field Υ is well defined. We assume that our parameters ε, r_0, s_0 satisfy (4.5), so that we can apply Φ_ξ to our vector field. In the new variables one has u(x) = v(θ, y; x) + w as in (4.6), and the vector field reads as in (4.8), where M is the twist matrix. We define ω^{(0)} ∈ R^d, the vector of unperturbed frequencies, as in (4.10), where (Ω^{(−1)})^σ_σ = iσ diag(j²), (Ω^{(−1)})^{−σ}_σ = 0. With this notation, F = N_0 + G has an approximately invariant torus.
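The display defining the polar coordinates was lost; given the stated domain of analyticity ξ_i > 0, |y_i| < ξ_i, it is presumably the standard action-angle substitution (signs and normalizations are an assumption):

```latex
% Tangential variables in action--angle form (assumed conventions):
v_{j}^{+} = \sqrt{\xi_{j} + y_{j}}\; e^{\mathrm{i}\theta_{j}},
\qquad
v_{j}^{-} = \sqrt{\xi_{j} + y_{j}}\; e^{-\mathrm{i}\theta_{j}},
\qquad j \in S^{+},
% analytic precisely for \xi_j > 0 and |y_j| < \xi_j,
```

which explains the constraints on ξ and y appearing in the text.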
Nonlinear functional setting
We set V a,p := C d × C d × ℓ a,p .
We shall need two parameters, p 0 < p 1 . Precisely p 0 > d/2 is needed in order to have the Sobolev embedding and thus the algebra properties, while p 1 will be chosen very large and is needed in order to define the phase space.
In order to study the properties of the vector field F we first need to introduce some notation. We define a norm (pointwise in y, w) by setting (5.14). Remark 5.19. Note that, since in this case ℓ^{a,p} = Π_S^⊥ h^{a,p}_odd, then fixing p_0 ≥ (d + 1)/2 we have that ‖·‖_{s,a,p} in (5.17) is nothing but the norm of the Sobolev space H^p(T^d_s × T_a). In particular, one can check that such a norm is equivalent to the one introduced in [36].
It is clear that any f as in (5.13) can be identified with an "unbounded" vector field by writing it as in (5.19), where the symbol f^{(v)}(θ, y, w)∂_v has the obvious meaning for v = θ_i, y_i, while for v = w it is defined through its action on differentiable functions G : ℓ^{a,p} → C. Similarly, provided that |f^{(θ)}(θ, y, w)| is small for all (θ, y, w) ∈ T^d_s × D_{a,p}(r), we may lift f to a map. If we set ‖θ‖_{s,a,p} := 1, we can write the norm componentwise; note that ‖y‖_{s,a,p} = r_0^{−2}|y|_1, ‖w‖_{s,a,p} = r_0^{−1}‖w‖_{a,p}.
We are interested in vector fields defined on a scale of Hilbert spaces; precisely, we shall fix ρ, ν, q ≥ 0 and consider vector fields defined for some s < s_0, a + ρa_0 ≤ a_0, r ≤ r_0 and all p + ν ≤ q. Moreover, we require that p_1 in Definition 5.18 satisfies p_1 ≥ p_0 + ν + 1.
Remark 5.21. Here ν represents the loss of regularity of the field F. In the NLS case (for F in (4.8)) one has ν = 2. We shall give the same definition for generic ν ≥ 0, since we shall also need to deal with bounded vector fields, i.e. ν = 0.
We need to introduce parameters ξ ∈ O_0, a compact set in R^d. Given any compact O ⊆ O_0 we consider Lipschitz families of vector fields as in (5.25), and say that they are bounded vector fields when p = p′ and a = a′. Given a positive number γ we introduce the weighted Lipschitz norm. Definition 5.23. We shall denote by V_{v,p}, with v = (γ, O, s, a, r), the space of vector fields as in (5.21) with ρ = 0. By slight abuse of notation we denote the norm ‖·‖_{γ,O,s,a,p} = ‖·‖_{v,p}.
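The display defining the weighted Lipschitz norm was dropped; the standard form used throughout the KAM literature (an assumption on the exact normalization) is:

```latex
% Weighted Lipschitz norm for parameter families f : O \to E
\|f\|^{\gamma,O}
 \;:=\; \sup_{\xi \in O} \|f(\xi)\|_{E}
 \;+\; \gamma \sup_{\substack{\xi,\eta \in O \\ \xi \neq \eta}}
        \frac{\|f(\xi)-f(\eta)\|_{E}}{|\xi-\eta|}\,,
```

with the weight γ calibrated on the size of the small divisors, consistent with γ_0 ≈ |ξ| later in the text.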
Remark 5.24. Note that we have a projection operator defined on the whole space C^{2d} × ℓ^{a,p}. On the space ℓ^{a,p} = Π_S^⊥ h^{a,p}_odd one has the corresponding projector. Note that the space ℓ^{a,p} we have now defined satisfies Hypothesis 1 in [36].
Polynomial decomposition
In V_{v,p} we identify the closed monomial subspaces. As said after (5.13), it will be convenient to use also vector notation. Note that the polynomial vector fields of degree 1 form a subspace, so that, given a polynomial F ∈ P_1, we may define its "projection" onto a monomial subspace Π_{V(v,v_1)} in the natural way.
Since we are not working on spaces of polynomials, but on vector fields with finite regularity, we need some more notation. Given a C² vector field F ∈ V_{v,p}, we introduce the following notation. By the Taylor approximation formula, any vector field in V_{v,p} which is C² in y, w may be written in a unique way as the sum of its Taylor polynomial in P_1 plus a C² (in y, w) vector field with a zero of order at least 2 at y = 0, w = 0. We think of this as a direct sum of vector spaces and introduce the corresponding notation; we refer to such operators as projections.
Definition 5.25. We identify the vector fields in V_{v,p} which are C² in y, w with the direct sum W^{(1)}_{v,p}, where R_1 is the space of C² (in y, w) vector fields with a zero of order at least 2 at y = 0, w = 0. On W^{(1)}_{v,p} we induce the natural norm for direct sums, namely (5.31). We can and shall introduce in the natural way the polynomial subspaces and the norm (5.31) also for maps, since the Taylor formula holds also for functions of this kind; we also denote the corresponding projections accordingly. Tame vector fields. We now define vector fields behaving "tamely" when composed with maps Φ. In order to simplify the notation, from now on we set the following. Definition 5.26. Fix a large q ≥ p_1 and a set O. Consider a C⁵ vector field F. Then (i) for any m = 0, . . . , 3 and any m vector fields h_1, . . . , h_m, the tameness bounds hold for all (y, w) ∈ D_{a_1,p_1}(r′) and p ≤ q. Here d_U F is the differential of F w.r.t. the variables U := {y_1, . . . , y_d, w}, and the norm is the one defined in (5.33).
(ii) For m = 1, 2, 3 and given h_1, . . . , h_{m−1} as in (5.34), consider the linear maps D_m. We say that a bounded vector field F is adjoint tame if the conditions (T_m)–(T_m)* above hold with ν = 0.
Remark 5.27. Note that in the definition above two regularity indices appear: 3 being the maximum regularity in y, w, and q the regularity in θ. Note that in the w-component the norm ‖·‖_{X_p} is equivalent to the norm ‖·‖_{s,a,p}.
Normal form decomposition
In this Section we introduce a suitable decomposition of our vector fields.
We then decompose our space of vector fields as N ⊕ X ⊕ R, where N ⊕ X is the set of vector fields with (5)-regularity in y, w, and R contains all of R_1 and all the polynomials generated by monomials not in N ⊕ X. We shall denote Π_R := 1 − Π_N − Π_X, and define the analogous projections more generally. In order to apply the Abstract KAM Theorem of [36] it remains to introduce a suitable subspace of vector fields, which is called E, in accordance with the notation used in [36].
We first introduce the notion of Pseudo-differential vector field.
Definition 5.29 (Pseudo-differential vector fields). We say that a vector field F is of Schrödinger pseudo-differential type if there exists N > 0 such that its differential in a neighborhood of u_0 = (θ, 0, 0) has the stated form, up to a linear operator ℓ^{a,p} → ℓ^{a,p} of finite rank equal to N, with (·, ·)_{L²} the usual L² scalar product on T (note that ℓ^{a,p} is identified with odd functions in h^{a,p}), and where the coefficients satisfy the stated bounds for some constant depending on F. The same holds for the other coefficients.
We remark that the condition that d w F (w) (u)[·] maps ℓ a,p+2 to ℓ a,p implies some parity conditions in x ∈ T on the coefficients.
We denote by E the subset of F ∈ V_{v_0,p} such that, for (θ, y, w) restricted to U in (4.3), the following holds. Tame: F is tame according to Definition 5.26. Pseudo-differential: F is a pseudo-differential vector field according to Definition 5.29. Gauge preserving: the vector field F commutes with X_M := Σ_i ∂_{θ_i} + iz∂_z − iz̄∂_{z̄}. Real-on-real: one has (5.41), (5.42). Remark 5.31. First of all, one can note that the component F^{(θ)} is even in the variables θ, while F^{(y)} is odd in θ. By condition (5.41) we see that the field F is reversible with respect to the involution on the subspace U.
Remark 5.32. The condition that F is Gauge preserving is equivalent to a corresponding condition on Ψ. Given a map Φ and a Gauge preserving vector field F, we recall the formula for the push-forward. We can also check whether Φ is the time-one flow of a vector field g which is Gauge preserving. We write F as in (5.19) (written using Fourier series in the variables θ); then it is Gauge preserving iff the corresponding conditions on the Fourier coefficients hold. We now introduce some special classes of linear vector fields.
Definition 5.33 (Finite rank vector fields). For ν ≥ 0 we say that a vector field f is of finite rank if it has the form (5.33). We denote by B the set of bounded vector fields. We have the following result. Lemma 5.34. The set B_E defined in (5.33) coincides with the set of vector fields g ∈ B satisfying the conditions below. We say that a vector field g ∈ B_E is reversibility-preserving or, equivalently, E-preserving.
The proof of the Lemma above is postponed to Section 7.1.
Definition 5.35. Given K > 0 and a vector field f ∈ A, we define the projection Π_K f as in (5.24), and we define E^{(K)} as the subspace of A_{v,p} on which Π_K acts as the identity.
We recall also the definition of diagonal vector fields.
Definition 5.36 (Normal form). We say that N_0 ∈ N is a diagonal vector field if, for all K > 1, the condition below holds.
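The condition in Definition 5.36 was dropped in extraction; a diagonal (normal form) vector field in this setting typically has the following shape (a sketch under standard conventions, not the paper's exact display):

```latex
% Diagonal normal form: constant coefficients, diagonal action on (z,\bar z)
N_{0} \;=\; \omega(\xi)\cdot\partial_{\theta}
  \;+\; \mathrm{i}\,\Omega(\xi)\, z\,\partial_{z}
  \;-\; \mathrm{i}\,\overline{\Omega(\xi)}\,\bar z\,\partial_{\bar z},
\qquad
\Omega(\xi) = \mathrm{diag}_{j \in S^{c}}\bigl(\Omega_{j}(\xi)\bigr),
```

consistent with N_0 = ω^{(0)}·∂_θ plus the diagonal normal frequencies introduced in Section 4.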
Estimates on F
In this Section we study the properties of the vector field F introduced in (4.8).
We have the following.
Proposition 5.37 (Properties of F). One has that the vector field F in (4.8) belongs to the subspace E of Definition 5.30.
Proof. The field F is tame according to Definition 5.26. This property follows from the fact that the composition operator in (2.12) satisfies tame estimates (see [31] for more details) and from the properties of the map Ψ in Proposition 3.16. One has that F restricted to U is reversible w.r.t. the involution S defined in (5.43) and real-on-real (see (5.41), (5.42)); this follows from Hypothesis 1.1 (see also (3.4) in Proposition 3.16). One has that F preserves the mass, again by Hypothesis 1.1. The field F in (4.8) is pseudo-differential according to Definition 5.29: indeed, recalling the definition of χ(u) in (2.10) and its differential in (2.13), one has that the linearized operator in the w-direction, d_wF(θ, y, w), has the form (2.13) up to a finite rank operator of the form (5.39) for some N := N_0; this is actually the form in (5.38). Recall also that u = v + w in (4.6). The estimates (5.40) follow recalling Lemma 2.19 in [31]. Now we give estimates on the tameness constants of the field F. We set F_0 := F, defined on the domain D_{a_0,p+ν}(s_0, r_0) in (4.4), where the parameters are given by formula (4.5). We define N_0 as in (4.11) and notice that it is diagonal according to Definition 5.36. Now we define the vector fields N, where v is the function v(θ, y) in (4.6) evaluated at y = 0 and E is defined in (2.10). We set (5.53), where H_0 is defined by difference, and we can estimate the terms N. By definition, see (4.10), ω^{(0)} is ξ-close to the integer vector λ^{(−1)}; hence we fix the size of γ_0 by requiring that ω(ξ) satisfies (5.56). In the following Lemma we analyze the size of the tameness constants of the field F.
Proof. By definition each term in (5.53) is tame; now we want to estimate their tameness constants in order to prove (5.57). Recall that F_0 in (4.8) is defined in terms of equations (3.4), (3.5) and (3.6). We start by studying the terms that contribute to F^{(w)} and come from Q in (3.6) (such terms are linear in w and quadratic in v). For instance, using the interpolation properties of the norm ‖·‖_{s,a,p}, we can bound the term a_2|v|²z_{xx}, where we used that ‖z‖_{a,p_1} ≤ r_0. Hence one can check Definition 5.26 with a constant C_{v_0,p}(a_2|v|²z_{xx}) ≤ A_0 = A_0(p_2, c). All the other terms in (5.52) can be estimated in the same way: indeed, all those terms are quadratic in v and linear in z. Recall that the norm ‖·‖_{v_0,p} is a weighted norm and in the w-component the weight is r_0 (see (5.14)). This determines the constant A_0.
Now let us estimate the several components of the field H_0. All these terms come from B_2^+ in (3.4), so that the linear term in z is at least of degree 4 in v; hence one obtains the corresponding bound. Note that A_0 here could in principle be larger than the A_0 in the estimates of N_0; in this case one chooses, with abuse of notation, the largest constant as A_0. We now note that in Π_N H_0^{(θ)} the term independent of z of minimal degree has degree 5 in v; hence the first bound on H_0 in (5.57) follows. Let us study Π_X H_0. By Lemma 6.48 we know that each field in E ∩ X is tame, with tameness constant controlled by the norm in (5.47). Now by (4.8) and (3.5) one obtains the bound (5.59). To get (5.59) we used the fact that the only terms of degree 5 in v in B_1 are integrable, as one can see in (3.5); substituting in (4.8), such terms of degree 5 cancel out in the y-component of F_0. Collecting the bounds (5.58) and (5.59) we conclude, by (4.5) and (5.56). The term Π_R H_0^{(w)} is at least linear in the variables y, w, while the terms in Π_R H_0^{(y)} come from the last two summands in (3.5) (this follows from the definition of R^{(1)}, which collects the terms coming from the second summand, and from the fact that the integrable terms of order 5 are zero). Following the same reasoning as in the previous bounds, we get the estimate without requiring any additional hypotheses on r_0; again, in the second bound we use that the integrable terms in B_1 in (3.5) cancel out. Collecting together the bounds in (5.61) we get (5.57).
Remark 5.39. Note that we have separated the terms N 0 that are not "perturbative" with respect to the size of the small divisors γ 0 ≈ |ξ|.
We have the following Lemma.
An Abstract KAM Theorem
In this Section we show that the nonlinear functional setting introduced in Section 5 allows us to prove a KAM result completely analogous to the abstract theorem proved in [36]. In order to state the result we need further notation. Recalling Lemma 5.38, we set the quantities in (6.1). To summarize, with this notation we have the following bounds on the field F_0 in (5.53). We need to introduce parameters fulfilling the following constraints.
Lemma 6.41 (Smallness conditions). Consider ε_0, G_0, R_0 in (6.1) and K_0 in (6.1). One has that, for |ξ| small enough, the bounds (6.5a)–(6.5d) hold. Proof. Let us check (6.5a)–(6.5d) using (6.1). Condition (6.5a) is implied by an inequality which holds for ε_0 small enough. Condition (6.5b) is satisfied thanks to the choice of a; the other condition in (6.5b) follows in the same way. Consider condition (6.5c): it holds thanks to condition (6.3d), which implies that ∆p is large enough. The last condition (6.5d) follows in the same way.
We have the following definition.
which is C 3 -tame up to order q = p 2 + 2. We say O satisfies the Mel'nikov conditions for (F, K, v 0 ) if the following holds.
(a) one has that the vector field g := WX satisfies Moreover, given r 0 > 0 and a 0 , s 0 ≥ 0, we set for all γ 0 , G 0 , R 0 > 0, Finally, for all n ≥ 0 we denote v n = (γ n , O n , s n , a n , r n ), v 0 n = (γ n , O 0 , s n , a n , r n ).
We say that a left invertible E-preserving change of variables (ii) L * conjugates the C 3 -tame vector field F to the vector fieldF := L * F = N 0 +Ĝ which is C 3 -tame; moreover denoting v 2 := (γ, O, s − 2ρs 0 , a − 2ρa 0 , r − 2ρr 0 ) one may choose the tameness constants ofĜ so that Remark 6.45. We remark the following facts. In the choice of parameters in (6.1) there is some freedom. However some parameters are given by the problem we are studying. Indeed the loss of regularity µ 1 and the decay parameter η in Definition 6.43 are determined in Section 7.3, precisely in Lemma 7.79. In that Section we will construct a quite explicit set of parameters which satisfies (6.6)-(6.7) just for a suitable choice of µ 1 and η. The same remark holds for the parameter κ 3 in Definition 6.44. Indeed in Section 7.1 we will introduce some changes of coordinates, and in Proposition 8.82 in Section 8 we will prove that such transformations are compatible, according to Definition 6.44, provided that κ 3 is chosen in a suitable way. We will also see that, in order to define the transformations of Section 7.1, we will need to fix a minimum regularity p 1 . The other parameters will be chosen according to (6.3a)-(6.3d).
Fix γ 0 > 0 and assume that For all n ≥ 0 we recursively define changes of variables L n , Φ n and compact sets O n as follows. Set which satisfies the homological equation for ((L n ) * F n−1 , K n−1 , v 0 n−1 , ρ n−1 ). For n > 0 let g n be the regular vector field defined in item (2) of Definition 6.43 and let Φ n be the time-1 flow map generated by g n .
Then Φ n is left invertible and F n : Moreover the following holds.
where u n = (γ n , O n , s n + 12ρ n s 0 , a n + 12ρ n a 0 , r n + 12ρ n r 0 ).
(ii) The sequence H n converges for all ξ ∈ O 0 to some change of variables and Remark 6.46. We remark that hypothesis (6.13) is clearly satisfied by the field F in (4.8) thanks to Lemma 5.38 and the choices in (6.1). This is exactly the result stated in Theorem 2.25 in [36]. In particular we are in the setting of Example 3 in Section 4 of [36]. Such a rather abstract result is based on the polynomial and normal form decomposition detailed in Sections 5.1 and 5.2, and on the definitions of tame and regular vector fields (see Def. 2.13, 2.18 of [36]). The main point is that regular vector fields must be tame and their tameness constants have to satisfy a series of properties. Actually, [36] does not fix a particular choice of regular fields but only lists the required properties. The proof of Theorem 6.2 consists in showing that we are in the setting of Example 3 in Section 4 of [36]. We shall underline the minor differences.
The (N , X , R)−decomposition is the same, hence we have the following.
Lemma 6.47. The (N , X , R)−decomposition in Definition 5.28 satisfies all the properties of Definition 2.17 in [36]. Note moreover that the (N , X , R)−decomposition is triangular according to Definition 3.1 in [36].
Proof. This is proved in Section 4.3 of [36].
The main difference with respect to [36] is in Definition 5.26, since in the present setting we require not only the properties (T m ) (which were proposed in [36]) but also the properties (T m ) * on the adjoint of the linearized vector field. The reason is that if one uses Definition 2.13 of [36], it is not true that the finite rank vector fields defined in (5.33) satisfy all the properties of Def. 2.18 in [36], i.e. are regular. Actually we have the following.
Lemma 6.48. The finite rank vector fields of Definition 5.33 satisfy all the conditions of Definition 2.18 in [36], with respect to the tameness constants of Definition 5.26. Moreover | · | p1 is a sharp tameness constant, namely there exist constants c, C depending on p 0 , p 1 , p 2 , d such that for any f is a tameness constant and for any tameness constant Finally if f is of finite rank, then it is pseudo-differential according to Definition 5.29.
Proof. This Lemma is the analogue of Lemmata 4.8 and 4.13 of [36]. Note that the present definition of finite rank vector fields (see 5.33) is different from the one in [36] (the norm (5.47) here is stronger). However Definition 5.26 is also stronger, hence we get (6.19). Let us give a sketch of the proof. The only non trivial component is f (yi,w · w. The adjoint of the differential of this term is the map . Then the result follows by (5.36) and the definition of | · | v,p .
In Section 4.3 in [36], in order to get the analogous results, see Lemma 4.13 of [36], we had to introduce the notion of "adjoint-tame" vector fields, which is better formalized in Definition 5.26. The other properties stated in Definition 2.18 in [36] follow exactly as in Lemma 4.15. The fact that f is pseudo-differential follows trivially from the fact that d w f (w) (u) = 0. Lemma 6.49. If F ∈ X is C 1 tame (according to Definition 5.26), then F is finite rank according to Definition 5.33.
Proof. By the definition of X in (5.37), F has, at least formally, the structure (5.46). The only thing we have to prove is that F (yi,w) belongs to ℓ a,p−2 . By definition . By the property The adjoint of the differential is then the map λ → F (yi,w) λ with λ ∈ H p (T d s ). Then the result follows by (T 1 ) * in (5.36).
Note that the strong property above is not required in [36] and will simplify many of our proofs. We remark that property (6.19) is one of the key points, together with the properties of tame vector fields detailed in the following Lemma. (i) Commutator. Consider any two C 3 -tame vector fields F, G ∈ W v,p ; then the commutator [G, F ] is a C 2 -tame vector field up to order q − 1 with scale of constants where ν F , ν G are the losses of regularity of F , G respectively.
(ii) Composition. Given a tame vector field f ∈ V v,p with scale of constants C p (f ) of the form (5.19) and given a map Moreover if f is a finite rank vector field, i.e. it satisfies (5.46), then Assume that the fields f, g belong to P 1 (see (5.28)) and are such that C v,p (f ) = C v,p (g) ≤ ρ for ρ > 0 small. Consider a constant p 0 ≥ 0 and any tame vector field F : D a,p+ν (s, r) → V a,p ; then the pushforward is tame with scale of constants Let us now prove that (T m ) * holds, see (5.36), for m = 1, 2. Essentially this amounts to using the chain rule and then the definition of tameness. Indeed so that Property (T 2 ) * follows by reasoning as follows. In the first summand in (6.25) one uses property (T 3 ) * on the field F and (T 0 ) on G. In the last summand in (6.25) we denote by , ·] then (AB) * = B * A * and hence (T 2 ) * follows by (T 1 ) * on F and (T 2 ) * on G. The second and third summands use the same ideas. This concludes the proof of item (i). Regarding items (ii)-(iii), the tameness properties (T m ) follow by Lemmata A.5 and B.3 respectively. Regarding the properties (T m ) * we proceed as in (6.25). Finally one can prove Remark B.5 and Lemma B.6 by using items (i)-(iii) exactly as in [36].
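As a reference point for the commutator in item (i), the following is a standard convention (recalled here for orientation; the sign convention actually fixed in [36] may differ):

```latex
% Standard commutator of two vector fields F, G:
[F,G](u) \;=\; dF(u)\,[G(u)] \;-\; dG(u)\,[F(u)].
```

With this convention each summand is controlled by applying the tameness of the differential of one field to the other, which explains why the commutator of two C 3 -tame fields is only C 2 -tame up to order q − 1, with a loss of regularity ν F + ν G .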
The last property we check is the compatibility, according to Definition 2.19 in [36], of the space E with respect to the (N , X , R)−decomposition. This amounts to showing the following.
Lemma 6.51. The following properties hold: (i) any F ∈ E ∩ X is a finite rank vector field according to Definition 5.33; (ii) for any F ∈ E ∩ P 1 one has Π U F ∈ E for U = N , X , E (K) ; for U = N . For the proof of Theorem 6.2 we refer the reader to the paper [36]. Indeed it is Theorem 4.16 in Section 4.3 of [36], and we have proven in the previous Lemmata that all the properties used in that paper hold in this slightly different setting. We omit the proof since it would be a word-for-word repetition.
Pseudo-differential Vector Fields
In this Section we study how a vector field in E, defined in Definition 5.30, changes under special changes of variables. First we prove some conjugation lemmata for nonlinear vector fields. Then, in Section 7.3, we analyze properties of the linearized operator of a compatible vector field of Definition 5.30. In particular we study how to invert it. Roughly speaking, we study some particular changes of variables under which the subspace E of vector fields introduced in Definition 5.30 is stable.
The decay norm. In order to give quantitative estimates we introduce an appropriate norm on linear operators on ℓ a,p . We recall that we have identified the sequence space ℓ a,p with the function space Π ⊥ S h a,p odd ⊂ h a,p , by writing this space in the Fourier sine basis.
A linear operator on ℓ a,p is represented by a matrix A := (A k j ) j,k∈S c . It is very natural to extend the operator A to the whole space h a,p with the Fourier exponential basis by setting Note that such an extension preserves Π ⊥ S h a,p odd . Moreover it is compatible with the composition of operators. We define a "decay" norm on the operator A induced by the usual decay norm on linear operators on h a,p by setting |A| dec s,a,p = | A| dec s,a,p . The nice fact is that we may deduce all the properties on compositions and interpolations from the corresponding ones for the standard decay norms. See for instance [28], [13]. More precisely we have the following definition.
(7.2)
where the extension A is defined in (7.1). If one has that A := A(ξ) for ξ ∈ O ⊂ R d , we define for v := (γ, O, s, a) The decay norm we have introduced in (7.2) is suitable for the problem we are studying. Note that The same holds for the other parameters. Moreover the norm (7.2) gives information on the polynomial and exponential off-diagonal decay of the matrices; indeed, setting k − k ′ = (h, ℓ) In the following we consider the decay norm | · | s,a,p introduced in (7.2) in order to deal with linear operators on Π ⊥ S h a0,p+ν odd . We have the following important property. For the proof see [13]. The decay norm satisfies the following classical interpolation estimates.
(7.6b)
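For orientation, a typical s-decay norm of the kind referred to here measures the off-diagonal decay of a matrix through the suprema over its diagonals. The following is only a sketch of the general shape, following the conventions of [13], [28]; the precise weights used in this paper are the ones fixed in (7.2):

```latex
% Sketch of a typical (analytic) decay norm; the exact weights are
% those fixed in (7.2) of the text:
\big(|A|^{\mathrm{dec}}_{s,a,p}\big)^2
  := \sum_{h=(\ell,m)\in \mathbb{Z}^{d}\times\mathbb{Z}}
     e^{2s|\ell|}\, e^{2a|m|}\, \langle h\rangle^{2p}
     \Big(\sup_{k-k'=h} |A^{k'}_{k}|\Big)^2 .
```

With a definition of this shape, algebra and interpolation properties of the operator norm reduce to the corresponding properties of weighted sequence norms, which is how the classical interpolation estimates quoted above are proved.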
Töpliz-in-time matrices. We now study a special class of operators, the so-called Töpliz-in-time matrices, i.e.
To simplify the notation in this case, we shall write They are relevant because one can identify the matrix A with a one-parameter family of operators, acting on the space ℓ a,p × ℓ a,p , which depend on time, namely Moreover, we will rely heavily on this property to obtain the stability result for the solutions.
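Concretely, the identification alluded to above can be sketched as follows (a sketch in the matrix notation used here; the entries with indices k = (ℓ, j) depend only on the difference of the time-Fourier indices):

```latex
% Töpliz-in-time structure: the entries depend on \ell-\ell' only,
A^{(\ell',j')}_{(\ell,j)} = A^{j'}_{j}(\ell-\ell'),
\qquad
A(\theta) := \sum_{\ell\in\mathbb{Z}^d} \widehat A(\ell)\, e^{\mathrm{i}\ell\cdot\theta},
\quad \widehat A(\ell) := \big(A^{j'}_{j}(\ell)\big)_{j,j'} ,
```

so that the action of A on u(θ) = Σ ℓ u ℓ e^{iℓ·θ} is simply (Au)(θ) = A(θ)u(θ), i.e. multiplication by a θ-dependent operator.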
then one has that for any w ∈ H p0 (T d s ; ℓ a,p+2 ) ∩ H p+2 (T d s ; ℓ a,p0 ), Similar estimates hold for the term R in (5.38). In other words the operator P is tame, with tameness constants given by the · v,p norms of its coefficients.
Compatible changes of variables
In this Section we study the properties of fields in E of Def. 5.30. In particular we study the action on E of the following class of linear changes of variables: • Finite rank changes of variables. The transformation Φ : • Diffeomorphism of the torus 1. The transformation Φ : • Diffeomorphism of the torus 2. The transformation T β : θ → θ + β(θ) • Multiplication operator. The transformation Φ : Proof of Lemma 5.34. By the properties of finite rank vector fields, see Lemma 6.48, we have that g ∈ B E generates a flow Φ t g = 1 + f t where f t = f ∈ B satisfies (5.48). It is well known that if g, F are Gauge preserving, i.e. commute with X M , then so is Φ * F .
Since the transformation Φ is tame, by Lemma 6.50 one has that the push-forward is tame with tameness constants given by (6.24). The fact that (5.41) holds for (Φ * F )(u) follows by the parity conditions (5.41) on F and (5.48). It remains to check that the push-forward of F is pseudo-differential according to Definition 5.29. We evaluate the linearization of the w component at some point in a neighborhood of (θ, 0, 0); it has the form (5.38) with coefficients a + i , b + i , c + i , d + i . It is clear that the term P + comes from the first summand in (7.12) and has the same form as the term P in d w F (u) but with coefficients a + i (x) = a i (Φ −1 (u); x) (same for b i ). Both a + i , b + i belong to H p0 (T d s ; h a,p ) ∩ H p (T d s ; h a,p0 ) and satisfy the required bounds (5.40) by the chain rule. The second summand in (7.12) is clearly a finite rank operator and contributes to the coefficients c + i , d + i in K + in (5.39). Let us show that they belong to H p0 (T d s ; h a,p−2 ) ∩ H p−2 (T d s ; h a,p0 ). This follows from the tameness of F by applying (T 1 ) * in order to bound, say, c + i , and (T 2 ) * for its derivatives. The bounds on d + i = ∂ θi f (w,0) follow from f ∈ B.
We have the following result. (7.14) for some C(p) > 0 depending only on p, d, p 1 , p 2 . The same bounds hold for all the other coefficients in Definition 5.29.
Recall that Φ −1 (u) = u + g(u). The bound follows by the estimates (5.40) on a and the estimates on g (see 5.22). The second bound in (7.14) follows trivially; for the third we only need to use the chain rule.
Now we study diffeomorphisms of the x variable and of the θ variables. In the Sobolev case this amounts to studying transformations of the real torus T d+1 . See the Appendix in [28]. For completeness we give the statement of an equivalent technical result for the analytic case. For the proof we refer the reader to the Appendix in [36].
We have the following.
Lemma 7.61 (Diffeomorphism of the torus 1). Fix ρ > 0. Consider F : O × D a,p+2 (s + ρs 0 , r + ρr 0 ) → V a,p a vector field in E (see 5.30), and a function α ∈ H p (T d s × T a ; C), such that α = α(u) with u in a neighborhood of (θ, 0, 0), which restricted to the real subspace is even in θ, odd in x and satisfies (7.19). Assume that for some c > 0 small and Then setting T α as in (7.20) and defining the map one has that for all ρ small enough Φ * F : D a−2ρa0,p+2 (s + ρs 0 , r [h] = P + + K + has the form (5.38). The operator P + has coefficients a + i , b + i given by
(7.24)
where T −1 α u = u(θ, y + α(θ, y)) is the inverse of the diffeomorphism x → x + α(θ, x). In particular, setting v := (γ, O, s + ρs 0 , a), v 1 := (γ, O, s + ρs 0 , a − ρa 0 ) and v 2 := (γ, O, s + ρs 0 , a − 2ρa 0 ), the following holds. For i = 0, 1, 2, the coefficients a + i , b + i of P + satisfy a + i = a + i (θ, y, w + ; s), with w + = T α w and s = x + α(θ, x). One has that for C(p) > 0 depending only on p, d, p 1 , p 2 , ρ. The coefficients a + 0 , b + 0 satisfy the same estimates as b 1 . The operator K + has rank N + 4|S| and the coefficients c + i , d + i are such that Proof. The vector field Φ * F is clearly tame; indeed in the new variables the system reads θ We need to study the linearized operator d w (Φ * F ) (w) (u)[·] at u in a neighborhood of (θ, 0, 0). First we note the following. By (7.18) one has that the map Π ⊥ S T α is close to the identity and can be written as
Moreover it is invertible and one has that
where and where Q S is a linear operator of the form (5.39) with N = |S| (here |S| is the cardinality of the set S) with coefficients c i , d i , for i = 1, . . . , |S|, which satisfy bounds like We also remark that we can write
(7.29)
Let us start with the term Π ⊥ S T α d w F (w) (u)(Π ⊥ S T α ) −1 , which comes from the linearization of the second term in the equation for w + in (7.27). We know that d w F (w) has the form (5.38), hence we can write By equations (7.29) and (7.30) we have (7.31) The first term in the r.h.s. of (7.31) can be studied explicitly. The equation (7.24) follows by a direct computation (see Section 3 of [31] for further details). All the other terms in (7.31) have the form (5.39) with N N + 4|S|. Indeed, for h ∈ ℓ a,p we have where (T α ) * is the adjoint, with respect to the L 2 scalar product, of T α . One sets c j (x) = (T α ) * e ijx and d j (x) = Π ⊥ S T α L(e ijx ) to get the form (5.39). The other terms can be studied in the same way. Of course the rank of the new operator K + is no longer N; it increases proportionally to the cardinality of S.
Hence the new term K + has coefficients c + i , d + i for i = 1, . . . , N + 5|S|. Bounds (7.25) and (7.26) follow by applying Lemma 7.59. The field Φ * F is in the subspace E since the function α is even in θ, odd in x, and by Lemma 7.60.
Remark 7.62. By estimates (7.26), and using Lemma 7.53, one gets Lemma 7.63 (Diffeomorphism of the torus 2). Consider F as in Lemma 7.61 and the transformation T β : θ → θ + β(θ) with β v,p1 ≤ δ, for some δ > 0 small, which restricted to the real subspace is even in θ and satisfies (7.19), and where p 1 = p 0 + m 1 is defined in (7.21). Then setting T β u(θ + β(θ), x) := u(θ + , x) and defining the map Φ : θ + = θ + β(θ), y + = y, w + = w, (7.33) one has that for some ρ small Φ * F : D a,p+2 (s − 2ρs 0 , r + ρr 0 ) → V a,p is tame in E (see 5.30) for δ ≪ ρ. Moreover d(Φ * F )(u)[h] := P + + K + has the form (5.38) where P has coefficients a + ≤ δ with δ > 0 small and p 1 = p 0 + m 1 as in (7.21). Assume also that A(θ, x) is even both in θ ∈ T d and x ∈ T when restricted to the real subspace and condition (7.19) holds. Finally assume that A(θ, x) satisfies bounds like (5.40) with C(F ) A v,p . Then one has that Φ * F is tame in E with tameness constant given by (6.24) where C v,p (G) A v,p . Moreover, writing and K + has the form (5.39) with N N + 5|S| and with coefficients c + i , d + i , i = 1, . . . , N + 5|S|. The coefficients Proof. First of all we note that Φ is a tame map. It is sufficient to apply the definition and use the fact that · v,p is equivalent to · H p (T d b ×Ta) on the functions u(θ, x). Hence also the push-forward is tame. Equation (7.36) follows again by an explicit calculation, and the tame bounds on the coefficients follow by the tameness of the transformation and of the coefficients L i . The bounds (7.37) follow by interpolation properties of the decay norm. One can study the finite rank term K + following the reasoning used in Lemma 7.61. Lemmata 7.58, 7.61, 7.63 and 7.64 guarantee that the pseudo-differential structure of the linearized operator in a neighbourhood of u = (θ, 0, 0) persists under the changes of variables described above.
We have also decomposed the linearized operator into homogeneous symbols of decreasing order two, one and zero. By the Lemmata above we note that such a decomposition is also stable, and we are able to control the tameness constant of each symbol. This is not a priori obvious.
Reduction to diagonal plus bounded operator
The following Lemma is the key result of this Section. We give the result for a special class of vector fields. Consider a vector field F : O × D a,p+ν (s, r) → V a,p with F = N 0 + G and with ν = 2. For γ 0 /2 ≤ γ ≤ 2γ 0 (see (5.54)), set v = (γ, O, s, a) and assume that F is in E (see 5.30) and has the form where K is of the form (5.39) with coefficients c j , d j , j = 1, . . . , N and ω (0) defined in (5.54). Moreover we assume that Note that by the reversibility condition one has that on U the function h(θ) is real and even in θ. Write We recall that by Remark 7.57 we have that δ (2) p controls the tameness constant C v,p (N (2) ). In the same way the norms of the functions a i , b i , c i , d i control the constant C v,p (N (1) ). In the following we will use the following notation: we set C v,p (N (2) ) := max{ a 2 v,p , b 2 v,p }, In Remark 5.19 we already fixed the parameter p 0 > (d + 1)/2 so that the norm · s,a,p for p ≥ p 0 defines a Banach algebra on the space H p (T d s × T a ). Some comments are in order. Note that the term N (2) is a pseudo-differential operator of order 2, i.e. it maps ℓ a,p+2 to ℓ a,p for any p ∈ R. The term N (1) is an operator which maps ℓ a,p+1 to ℓ a,p . In the following we shall always use this notation. The vector field F in (7.38) that we study has a particular structure. In Section 8 we will show the following fact: we construct a sequence of maps L n which are compatible according to Definition 6.44. Then we will apply Theorem 6.2 to F 0 defined in (4.8), and then we will show by induction that the sequence of fields F n for n ≥ 1 given by Theorem 6.2 actually has the structure in (7.38). In the following Lemma we show how to construct a map L + which reduces the size of the coefficients a 2 , b 2 in (7.41) for vector fields of the form (7.38).
Step 1. The first step is to diagonalize the second order coefficient by eliminating the terms b 2 through a multiplication operator. We use a transformation of the form The eigenvalues are λ 1,2 = ±((m + a 2 ) 2 − |b 2 | 2 ) 1/2 . Hence we set a 2 (ϕ, x) = λ 1 − m. We have that a 2 ∈ R because a 2 ∈ R and a i , b i are small. The diagonalizing matrix is The bound on the inverse M A follows simply from the fact that The bound on the truncated matrix is the same. One can also prove that Since F is in E, the map Φ 1 defined in (7.62) satisfies all the hypotheses of Lemma 7.64. This result guarantees that the field 1 + H (1) ) (see notations in (7.38)) is in E and one has C v,p (G (1) ) ≤ C v,p (G) + K ν+1 In particular the term (H (1) ) (θ,0) satisfies bounds like (7.66) with G (H) (θ,0) . Moreover, using equation (7.36) of Lemma 7.64 with M M A = 1 + Π K+ A, we obtain that and bounds (7.37) imply bounds like (7.56)-(7.61) on the field F (1) . More explicitly one has Moreover the field N and the same bounds hold on the single coefficients a (1) p2 )
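Step 1 rests on an elementary 2×2 eigenvalue computation. A sketch, assuming a symbol matrix of the reversible form below (the precise matrix is the one defined in the text):

```latex
% Sketch: for a second order symbol matrix of the form
M(\varphi,x) = \begin{pmatrix} m + a_2 & b_2 \\ -\overline{b_2} & -(m+a_2)\end{pmatrix},
\qquad
\lambda_{1,2} = \pm\sqrt{(m+a_2)^2 - |b_2|^2}\, ,
```

so that, setting ã 2 := λ 1 − m, the diagonalized second order coefficient is m + ã 2 ; for a i , b i small the radicand is close to m 2 > 0, which is why ã 2 is real.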
Step 2 -Change of the space variable. Now we want to eliminate the dependence on x of the coefficient a 2 of the field F (1) . We use a change of variables Φ 2 as in (7.23) of Lemma 7.61. We are looking for α(θ, x) such that the coefficient of the second order differential operator does not depend on x, namely, by equation (7.24), for some function a 2 (θ). Since T α operates only on the space variables, (7.70) is equivalent to Hence we have to set which has a solution α periodic in x if and only if ∫ T ρ 0 dx = 0. This condition implies Then we have the "approximate" solution (with zero average) of (7.72) where ∂ −1 x is defined by linearity as ∂ −1 x e ikx := e ikx /(ik), ∀ k ∈ Z\{0}, ∂ −1 x 1 = 0. In other words ∂ −1 x h is the primitive of h with zero average in x. The function α (which is a trigonometric polynomial) satisfies For more details on the estimates on α we refer to [31]. With this definition of the function α and by Lemma 7.59 one has that T α : T a−(ρ/4)a0 → T a if, by (7.76), p1 is small enough. Moreover we know, since F (1) is in E, that a 2 is even in θ and in x and satisfies (7.19); hence the function α satisfies the hypotheses of Lemma 7.61. Setting 2 + H (2) ), (7.77) again one has that F (2) is in E and (N (2) )) + C v,p (N (2) )C v,p1 (N (2) ). (7.80) The field N (1) 2 in (7.77) has the form (7.42) with coefficients a (2) i , b (2) i for i = 0, 1. The bound (7.25) holds with where α is the one given in (7.75). More explicitly one has Moreover the coefficients c (2) i , d (2) i of K (2) satisfy the bound (7.26). Lemma 7.61 also implies bounds (7.61) on the coefficients of the field F (2) . In particular the coefficients a 2 satisfy bounds (7.60). First we remark that, by Lemmata 7.61 and 7.64, the rank of the finite rank term is increased, and K 2 has rank N + C|S| for some absolute constant C > 0. Secondly we note that in these two steps the function h(θ) did not change.
Step 3 -Time reparameterization. In order to eliminate the dependence on θ of a (2) 2 we use a special diffeomorphism of the torus where α is a small real valued function, 2π−periodic in all its arguments. We consider a transformation Φ 3 of the form (7.33) and we set (2) . We also have that Let us study in detail the form of N (3) . First of all we have Our aim is to find β in such a way that the coefficient of the second order term is proportional to ω. This is equivalent to requiring that, for some m + := m + c, By setting we can find the (unique) approximate solution of (7.86) with zero average, where (ω ·∂ θ ) −1 is defined by linearity: (ω ·∂ ϕ ) −1 e iℓ·ϕ := e iℓ·ϕ /(iω·ℓ) for ℓ ≠ 0, (ω ·∂ ϕ ) −1 1 = 0. Note that β is a trigonometric polynomial supported on |ℓ| ≤ K. As one can check (see also [31] for more details), the function β in (7.89) satisfies the bound with v = (γ, O, s − (ρ/4), a − (ρ/4)a 0 ), where γ 0 is the Diophantine constant of ω. Hence we have This implies that the transformation p1 is sufficiently smaller than ρ (see condition (7.48)). We set with (recalling (4.11)) 2 )) + max{ a 3 ) collects all the terms of order at most O(∂ x ) plus a finite rank term K (3) and has the form (7.42) with coefficients a The coefficients of K (3) satisfy bounds similar to (7.98) (see (7.26)).
(7.101)
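The operator (ω · ∂ θ ) −1 used in Step 3 loses derivatives because of small divisors. A standard quantitative sketch, assuming ω Diophantine with constants γ 0 , τ:

```latex
% Small-divisor estimate for the zero-average solution of
% \omega\cdot\partial_\theta \beta = f,\quad f=\sum_{\ell\neq 0} f_\ell e^{\mathrm{i}\ell\cdot\theta}:
\beta_\ell = \frac{f_\ell}{\mathrm{i}\,\omega\cdot\ell},
\qquad
|\omega\cdot\ell| \ \ge\ \frac{\gamma_0}{|\ell|^{\tau}}
\ \Longrightarrow\
|\beta_\ell| \ \le\ \gamma_0^{-1}\,|\ell|^{\tau}\,|f_\ell| ,
```

so β loses τ derivatives and a factor γ 0 −1 with respect to f; this is the origin of the Diophantine constant γ 0 appearing in the bound on β quoted above.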
Step 4 -Torus diffeomorphism. The aim of the final step is to reduce "quadratically" the size of the term (H (3) ) (θ,0) . We define the transformation Φ 4 : (θ, y, w) → (θ + g(θ), y, w), (7.102) and we call T g the transformation T g u = u(θ + g(θ), x). We setF = (Φ 4 ) * F (3) = N 0 +Ĝ and we study its projection on the subspace N . By a direct calculation one can note that 3 ) . where , denoting by · the average in the variable θ, and we look for g(θ) such that One has that the following estimates hold Moreover by the first of (7.106) we have that (7.55) holds. Now for δ (1) p1 , δ (2) p1 small enough (see condition (7.48)) one can use Lemma 7.59 to conclude that In particular g in (7.108) satisfies the hypotheses of Lemma 7.63, which applied to F (3) implies that By the discussion above we have that the estimate (7.50) follows by collecting the bounds (7.64), (7.76), (7.91), (7.109) and Lemma 7.59. We check all the bounds on the new vector field (L + ) * F . We havê with N 0 defined in (7.95). Fix now v = (γ, O, s−ρ + s 0 , a−ρ + a 0 ). From this first splitting we have thatF = N 0 +Ĝ and onĜ the following estimates hold: p2 )C v,p1 (G)), (7.112) for η 1 as in (7.47) which implies (7.58). In order to prove (7.112) one reasons as follows. One has that where by (7.107) Use (7.109) to estimate g, (7.101) to estimate (H (3) ) (θ,0) ; hence using (7.100) and the smallness of δ one gets the estimates (7.112) with η 1 in (7.47). Trivially one has also Now we want to rewrite the fieldF in a form more similar to (7.94) by using (7.103). We havê 4 + H (4) , This is another way to write (7.111). But now we give a precise estimate on the low norm of the θ component of the field H (4) on N . First of all we have Now if we look at the component (H (4) ) (θ,0) , by using equation (7.107), we obtain (7.116) and hence, using (7.48), we get which implies (7.59). Now by (7.115) we have, by using (7.99) and (7.80), that (N (1) ).
Inversion of the linearized operator in the normal directions
We consider a vector field F = N 0 + G of the form (7.38) with all the properties in equations (7.39)-(7.45) and which satisfies the hypotheses of Lemma 7.67. We set Now we apply Lemma 7.67 to the field F and we obtain the fieldF = (L + ) * F = (1 + h + )(N + 0 +N (1) +N (2) +Ĥ) in (7.53). We want to describe a set O ′ which satisfies the Mel'nikov conditions in Definition 6.43 for (F , K, v is v 2 of Lemma 7.67 with O replaced by O 0 . One can note that the conditions (6.6) and (6.7) on the operator W are equivalent to finding an "approximate" solution g ∈ B E of the equation We recall that ω(θ) := (1 + h + )(ω + +Ĥ (θ,0) ) and Ω(θ) : (2) ). By construction, one of the effects of the map L + is that the sizes of the termsĤ (θ,0) andN (2) are "much" smaller than the sizes of H (θ,0) and N (2) (see equations (7.59) and (7.57)).
In the course of our algorithm we shall only need to find an approximate solution of (7.124), up to an error which is of the order ofĤ (θ,0) andN (2) .
The third equation has the form Lu = ω + · ∂ θ − Ω(θ) g = f, (7.128) Hence we need to invert an operator of the form L in (7.128).
In order to solve the fourth equation in (7.125) we need to invert the operator ω + · ∂ θ + Ω T (θ) .
Here Ω T (θ) is the transpose w.r.t. the bilinear form a · b = j a j b j . We remark that we need to invert the latter operator on the space ℓ a,p and not only on its dual in order to get bounds like (6.6). We briefly explain our strategy. In this Section, more precisely in Lemma 7.75, we will show that, under some non degeneracy conditions on the eigenvalues of Ω(θ) (see (7.178)), one can construct a family of invertible operators Q = Q(θ) on ℓ a,p such that where D is diagonal and M is a "small" remainder. This is actually what we get in (7.179) in Lemma 7.75. Equation Then we have for x, y ∈ ℓ a,p . This means that equation (7.131) reads In other words, if the matrix Q diagonalizes approximately the operator ω + · ∂ θ − Ω(θ), then its adjoint Q T diagonalizes approximately the operator ω + · ∂ θ + Ω T (θ). This procedure clearly leaves the spectrum invariant.
As explained above, we need to check that the matrix Q T acts on ℓ a,p (and not only on its dual space). Actually this property is guaranteed by the fact that Q is the identity plus a matrix with finite decay norm | · | dec s,a,p (see Definition 7.52). Hence of course the adjoint matrix Q T satisfies the same bounds as Q. The discussion above implies that if one can solve the third equation in (7.125) in such a way that bounds like (6.6) and (6.7) hold, then one can do the same for the fourth equation and obtain similar bounds. This is why all the rest of the Section is devoted to the study of equation (7.128).
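The duality argument sketched above can be written out as follows (a sketch; here the transpose is taken with respect to the pairing a · b = Σ j a j b j together with the L 2 pairing in θ, so that (ω + · ∂ θ ) T = −ω + · ∂ θ ):

```latex
% If Q conjugates the operator to diagonal plus a small remainder,
Q^{-1}\big(\omega_+\!\cdot\partial_\theta - \Omega(\theta)\big)Q
   = \omega_+\!\cdot\partial_\theta - D + M,
% then transposing, using (AB)^T = B^T A^T and D^T = D,
Q^{T}\big(-\omega_+\!\cdot\partial_\theta - \Omega^{T}(\theta)\big)Q^{-T}
   = -\omega_+\!\cdot\partial_\theta - D + M^{T},
% i.e., multiplying by -1,
Q^{T}\big(\omega_+\!\cdot\partial_\theta + \Omega^{T}(\theta)\big)Q^{-T}
   = \omega_+\!\cdot\partial_\theta + D - M^{T}.
```

Hence Q T approximately diagonalizes ω + · ∂ θ + Ω T (θ) with the same diagonal part D, and since Q is the identity plus a matrix with finite decay norm, Q T enjoys the same bounds and acts on ℓ a,p .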
Definition 7.69. Consider the spaces X, Y, Z in (2.17). We define for G = X, Y, Z of G = H s (T d s × T : C) endowed with the norm · s,a,p defined in We study the invertibility of the operator L.
where we rename the coefficientsâ i ,b i ofN (1) as a i , b i . In particular we have that L in (7.134) is the linearized operator of a field F belonging to the subspace E of compatible vector fields in (5.30). This means that L is tame, gauge preserving, pseudo-differential, reversible and real-on-real, i.e. it itself belongs to E.
The inversion of L rests on two fundamental results. The first is the following: Proposition 7.70. Fix ε 0 = |ξ| 1 4 with ξ ∈ ε 2 Λ (see (4.4) and (6.1)), recall the definition of the parameters R 0 , G 0 in (6.1), that γ 0 := c|ξ| (see (5.54)) and that p 1 = p 0 + m 1 in (7.21). Consider L defined for ξ ∈ O 0 in E of the form whereK is of the form (5.39) with coefficients c i , d i , the coefficients a such that the conjugated L + := S −1 LS is in E and has the form j | γ ≤ C|ξ|,K + of the form (5.39) with coefficients c + i , d + i for i = 1, . . . , N + C 1 |S| (where |S| is the cardinality of the set S). Moreover We divide the proof into two steps.
Step 1 -Descent Method. In this step we want to eliminate the term a 1 := a (0) 1 + a ′ 1 in the operator of order O(∂ x ). We follow the strategy used in Step 4 of Section 3 in [31]. We introduce a change of coordinates of the form for a function s small enough in such a way that S 1 is invertible. By a direct calculation we have that the coefficients a (1) We look for s such that a 1 ≡ 0. Recall that by the reversibility one has on U that a 1 has zero average in x. Hence, by setting 1 + s = exp (q(θ, x)), one has that a (1) which has a unique solution (with zero average in x) One has that the solution q satisfies the estimates where we used the estimate |m + − 1| γ ≤ C|ξ|. Clearly the function s satisfies the same estimates in (7.147). Hence we have obtained Now since the transformation S 1 = 1 + O(|ξ|) trivially (see Lemma 7.64) one has again that a (1) the coefficients a for i = 1, . . . , N + C 1 |S| ofK (1) satisfy the bounds (8.21). The termK (1) is studied by the same reasoning used in Lemma 7.61. Moreover by equation (7.146) one has that q is even in x and hence the transformation S 1 does not change the parity of the coefficients. Moreover q satisfies condition (7.19), which implies that S 1 satisfies the hypotheses of Lemma 7.64.
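The cancellation in Step 1 is the classical descent computation. A sketch, under the simplifying assumption that the second order part is m + ∂ xx with m + constant (the exact operator is the one in the Proposition):

```latex
% Conjugating by the multiplication operator S_1 = e^{q(\theta,x)}:
e^{-q}\,\big(m_+\partial_{xx}\big)\big(e^{q}h\big)
  = m_+\big(h_{xx} + 2 q_x h_x + (q_{xx}+q_x^2)\,h\big),
% so the new first order coefficient is a_1 + 2 m_+ q_x, which
% vanishes with the choice
q(\theta,x) := -\tfrac{1}{2 m_+}\,\partial_x^{-1} a_1(\theta,x),
```

which is well defined precisely because a 1 has zero average in x, as guaranteed by reversibility; since a 1 = O(|ξ|) one also gets q = O(|ξ|) and hence S 1 = 1 + O(|ξ|).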
The following property is a consequence of Lemma 7.71. Lemma 7.72. Let us define the operator A := Ψ 1 − {Ψ 1 } where {Ψ 1 } σ ′ ,k σ,j (l) = Ψ σ,j σ,j (l) for σ = σ ′ , j = k and l = 0. Then one has that |A∂ x | dec v,p + |∂ x A| dec v,p ≤ C(p)|ξ| where | · | dec v,p is defined in (7.3) with j ∈ Z instead of Z + . Proof. One has that since (Ψ 1 ) σ ′ ,k σ,j (l) = 0 outside the set |l| ≤ 2 and |j − k| ≤ C S, and the decay norm of B is controlled by the norm of its coefficients a (0) 0 . In particular note that we used Lemma 7.71 in the following way. For instance we have the estimate |B −σ,k σ,j (l)k| (7.166) and one uses the gain of two derivatives from the denominator to control the two derivatives in the numerator. Hence one controls the coefficients using b The bound on the second term and the Lipschitz estimate follow in the same way. By Lemma 7.72 it follows that for |ξ| small the map S 2 is invertible. Moreover we have the following Lemma. Lemma 7.73. Consider a linear operator A = (A) σ ′ σ for σ, σ ′ = ±1 on the spaces G := G × G where G = X, Y, Z are the spaces defined in (2.11). One has that A is reversibility preserving if and only if for any σ, An operator B is reversible, i.e. maps X → Z, if and only if The proof of Lemma 7.73 is similar to the proof of Lemma 4.36 in [31]. Clearly in this case the difference lies in the fact that we expanded in Fourier coefficients using the exponential basis in x. By this Lemma and an explicit computation, we have that Ψ 1 is reversibility preserving since B is reversible. Now we can define the map and R + of the form (7.140). Note that we have defined Ω −1 + as an infinite dimensional matrix with indices ℓ ∈ Z d and j ∈ Z + . It is an operator on the space of sequences {z j } j∈Z . But by our condition of reversibility we work on sequences such that z j = −z −j .
Hence we can rewrite the matrix Ω^{(−1)}_+ as an operator acting on the space of "odd" sequences as a diagonal matrix, and r_j^{(0)} is real by the reversibility of the field B. Hence, setting S = S_2 ∘ S_1, the Lemma is proved.
Remark 7.74. The terms r_j^{(0)} are of order O(|ξ|). In particular, they are the integrable terms that cannot be cancelled through a Birkhoff transformation. Moreover, such terms are the corrections of order O(|ξ|) to j^2 that we have considered in (9.5) of Section 9.
The following Lemma is the last important abstract result we will use in order to invert operators of the type L in (7.134).
(7.174)
Assume that |m_+ − 1|^γ ≤ C|ξ| and |r_j^+|^γ ≤ C|ξ|. Fix parameters κ_4 = 7τ + 3, κ_5 = 7τ + 5, m_2 = m_1 + κ_5, (7.175) with p_1, p_2 as in 6.1, and take an arbitrary N > 0 large. Assume that with G_0 in (6.1). There exists a constant C_0 = C_0(p_2, d) > 0 such that, if ǫ = ǫ(d, p_2) is small enough, then the following holds. There exists a sequence of purely imaginary numbers defined on O_0 such that for any ξ ∈ Λ^{2γ}_N, defined as there exists a bounded, reversibility preserving, linear operator Φ_N = Φ_N(ξ), depending on θ ∈ T^d_s and acting on ℓ_{a,p}, such that Before giving the proof of the Lemma we make some remarks. This Lemma can essentially be applied to operators L_+ of the form (7.140). Indeed our strategy is to use Proposition 7.70 as a preliminary step before using a KAM-like scheme in order to diagonalize the linear operator L. Lemma 7.75 provides an approximate diagonalization, but the order of the approximation N is arbitrarily large. The conditions on the parameters in (7.178) are the second order Melnikov conditions. Clearly such conditions depend on N (see formula (7.178)). In particular, to obtain a partial diagonalization one can ask for the conditions (7.178) only for |l| ≤ N. On the contrary, in order to diagonalize completely one has to ask for the lower bounds in (7.178) for any l ∈ Z^d. Our choice is less restrictive, but it is sufficient since we are just looking for an approximate inverse of L. Lemma 7.75 is the analogue of Theorem 4.27 in Section 4 of [31]. The proof of the Lemma above is based on the following Iterative Lemma. Take L as in (7.172) and define, with m_2 as in (7.175); then, for any ν ≥ 0, one has: (S1)_ν Set Λ^γ_0 := O_0, and for ν ≥ 1: For any ξ ∈ Λ^γ_ν = Λ^γ_ν(L), there exists an invertible map Φ_{ν−1} of the form Φ_{−1} = 1 and, for ν ≥ 1, Φ_{ν−1} := 1 + Ψ_{ν−1} : H^s → H^s, with the following properties.
Proof of Lemmata 7.75 and 7.76. The proof is the same as that of Theorem 4.27 in [31], which is based on the iterative scheme of Lemma 4.38 proved in [31]. Here Lemma 7.75 follows from Lemma 7.76. The proof of Lemma 7.76 is similar to the one of Lemma 4.38 in [31]. Indeed, by hypothesis the operator L in (7.172) belongs to the same class of operators defined in Definition 4.37 of [31], and moreover the smallness condition in (7.176) is the equivalent of the smallness condition on γ^{−1}ε in Theorem 4.27 in [31]. One difference is that here the frequency ω_+ depends on parameters ξ ∈ R^d, while in [31] there is only a one-dimensional parameter λ ∈ R that modulates ω. Anyway, there are no differences in the proof since Kirszbraun's Theorem on the Lipschitz extension of functions holds in R^d (see Lemma A.2 in [1]). The proofs of items (S3)_ν, (S4)_ν of Lemma 7.76 are the same as those of items (S3)_ν, (S4)_ν of Lemma 4.37 in [31]. The difference lies in the fact that in [31] one considers the same linear operator L, namely the linearization of the same nonlinear operator at two different points u_1 and u_2. Moreover, the difference of L(u_1) and L(u_2) is given in terms of the difference of u_1 and u_2. In other words, the operators are close to each other if the points u_i are close. Here one gives the estimates on the differences of r^ν_{σ,j}(L_1) and r^ν_{σ,j}(L_2) directly in terms of the difference of the two operators L_1 and L_2. Another difference is that in Section 4 of [31] one gets a complete diagonalization. This is obtained by applying infinitely many changes of coordinates that approximately diagonalize. In this case, to prove formula (7.179) it is enough to take Φ_N as the composition of a finite number of changes of coordinates. That is why the set of parameters in (7.178) is defined for |l| ≤ N. The last difference is that here the sites are j ∈ S^c instead of Z_+.
Remark 7.77 (Approximate eigenvalues). In Theorem 4.27 in [31], given an operator L one constructs the eigenvalues µ^∞_{σ,j} as limits of some "approximate" eigenvalues µ^ν_{σ,j}, for ν ≥ 1. Here we stop the sequence µ^ν_{σ,j} after the number of steps needed to get the approximation of order N in (7.180), (7.181), and one defines µ^N_{σ,j} as the last term of such a sequence. Moreover, in [31] the operator L is the linearized operator of a field F_0 at some point u. Theorem 4.27 also provides Lipschitz dependence of the approximate eigenvalues µ^ν_{σ,j}(u) with respect to the point u. Here the situation is different. As we will see, the operator L comes from the linearization at zero of some vector field F_1. Hence, while in [31] one has to control the difference between the eigenvalues of L(u_1) and L(u_2), i.e. the linearized operators of F_0 at two different functions u_1, u_2, here we need to control the differences between the eigenvalues of the linearized operators of two different fields F_1, F_2. If one knows that L_1 is "close" to L_2 (clearly one has to explain the meaning of "close"), then the bounds on the eigenvalues follow trivially.
We collect the results of Section 7 in the following Lemma.
Lemma 7.79. Consider the operator L in (7.135) and assume bounds (7.137) with p 1 = p 0 + m 2 with m 2 defined in (7.175). Fix any N ≥ 1 and for ξ ∈ Λ 2γ N ∩ P 2γ N (see (7.178), (7.197)) consider the maps S, Φ N defined in (7.138) and (7.179) respectively and set M := S • Φ N . Then, the map M is reversibility preserving according to Def. (2.11), and for any f ∈ X one has that with ε 0 p defined in (7.183). Moreover, setting for any g ∈ X one has that h and Lh − g satisfy bounds like (7.198).
Proof. The result follows by collecting the results of Proposition 7.70, Lemma 7.75 and Corollary 7.78. More precisely, we apply Proposition 7.70 to the operator L in (7.135). Consider the operator L_+ in (7.140) and set By Remarks 7.56, 7.57, Lemma 7.53 and by bounds (7.141), (7.142), one has that hypotheses (7.176) hold. We moreover set S, hence (7.173) holds, and we can apply Lemma 7.75 to L_+. By Corollary 7.78 we get the thesis.
The sets of "good" parameters
In this Section we conclude the proof of Theorem 1.3. In Sections 3 and 4 we essentially rewrote (1.1) as an infinite-dimensional dynamical system given by the vector field in (4.8). In this way we are allowed to apply Theorem 6.2 to the vector field F_0 defined in (4.8). The analysis performed in Section 5 guarantees that one can satisfy hypotheses (6.13) of the abstract theorem. In order to apply Theorem 6.2, one needs to identify the sequence of maps L_n with properties (6.10), (6.9) and (6.11) and to give a more explicit formulation of the sets of "good" parameters defined in (6.14), in order to estimate the measure of such sets.
On the field F_0 we cannot apply Lemma 7.67 directly, just because N^{(2)} is not "small enough" and we are not able to prove that L is close to the identity. We overcome this problem using an algebraic argument. We closely follow the strategy of Lemma 7.67 and underline the fundamental differences. Roughly speaking, the aim of the following Lemma is to conjugate F_0 to a vector field for which the term N^{(2)} has constant coefficients of order O(|ξ|) plus terms of higher order in |ξ|. Lemma 8.80 (Preliminary step). Consider the field F_0 defined in (5.53). Consider K_0 as in 6.1, ε_0 in (6.1), ρ_1 of definition (6.8), and set v = (γ, O, s, a), v_1 := (γ, O, s − ρ_1 s_0, a − ρ_1 a_0) and v_2 := (γ, O, s − 2ρ_1 s_0, a − 2ρ_1 a_0).
for some C depending only on d, τ. Then, if ǫ is small enough, there exists a tame map that satisfies (6.10), (6.9) and (6.11) with κ_3 as in (7.120). We set and moreover F̃ is in E (see 5.30) and has the form On the set of ξ ∈ O_0 such that |ω^{(0)} · l| ≥ γ/⟨l⟩^τ for |l| ≤ K_1 (see (4.11)), one has the following. The function h_1 satisfies bounds with a_2 ≡ a. The definition of δ^{(2)}_p has changed. Indeed, in this case we have set δ^{(2)}_p ≈ C_{v,p}(N^{(1)}_0) without dividing by γ_0. This is due to the fact that γ_0^{−1} C_{v,p_1}(N_0) = G_0, which is not small. In Lemma 7.67 we used the smallness of δ^{(2)}_p in order to prove that L_+ is close to the identity. In this case, to get the result we need to use different arguments. However, we follow the same strategy used in Lemma 7.67 and we perform the same four steps of that Lemma. Concerning Steps 1 and 2, we apply the same transformations, defined exactly in the same way. In this case there are no small divisors in the equations that define the transformations Φ_1 and Φ_2. Hence the same estimates of Lemma 7.67 hold with the definition in (6.2). Step 3 has to be analyzed more carefully. Indeed, if one looks at the equation (7.89) for β, one sees that one has to control the inverse of the operator ω · ∂_θ (ω in (4.11)). By using the diophantine condition (5.54), one gets a bound that is not small. We need to estimate β in a different way. We first analyze the form of the coefficient a_2. By equation (7.73) we have where a_2 = (1 + a_2)^2 − |b_2|^2 − 1, with a_2 and b_2 the coefficients of N^{(1)}_0. We can write and we note also that for some constant C. Moreover, by an explicit computation we can write (8.10). Clearly one has Roughly speaking, this implies that in the low norm p_1 one has m_2 ≈ a_2 + O(|ξ|^2). Now equation (7.73) becomes Now we have to estimate β. The critical term is obviously the term of order O(|ξ|), because one cannot use estimate (5.54) since γ_0 ≈ |ξ|. One can use an algebraic argument.
First we recall that by (4.11) one has ω^{(0)} = λ^{(−1)} + λ^{(0)}(ξ) with λ^{(−1)} = j^2, j ∈ S_+. On the other hand, the term of order ξ of β (8.13) depends only on the coefficients a_2 given in (5.52). Hence in formula (8.13) we need to estimate ω^{(0)} · k with k ∈ (S_+)^d but with only two components different from zero, and not for k ∈ Z^d as in (5.54). This implies, using Lemma 3.15, that in the term a_2 z_{xx} · ∂_z there are only trivial resonances, and hence for some constant C, which is a better estimate with respect to the one in (7.90). In this way we get that the transformation is ξ-close to the identity. Now the last step can be performed exactly as in Lemma 7.67, because there are no other differences, and one can estimate the transformation Φ_4 as done in (7.108) and (7.109). Thanks to the perturbative argument in (8.10) and (8.8) one can fix and (8.6) follows. Finally we define where h_1 is defined as in (7.92) with h = 0 and β defined in (8.13). The estimates (8.5) follow by (8.14). The fact that L_1 is compatible, according to Definition 6.44, follows by (6.2) and Corollary 7.68. This concludes the proof.
Remark 8.81. Note that the coefficient m_0 in (8.6) gives the correction of order O(|ξ|j^2) to the eigenvalues j^2, as we will see in Section 9 (see equation (9.5)). This term will remain the same at each step of our iteration, since all further corrections will be of higher order in |ξ|.
The main result of this Section is the following: there exists a sequence of maps L_n, n ≥ 1, that satisfies (6.10), (6.9) and (6.11) with κ_3, κ_1, κ_2, p_1, p_2, µ, ε_0 given in 6.1 and (6.1), such that the n-th vector field F_n is defined on O_0 and on O_n in (6.14) satisfies bounds (6.15). Moreover F_n is in E (see Definitions 5.29, 5.30), satisfies (6.15) and can be written in the form (7.38) as where m_0 is defined in (8.6) and In particular N^{(1)}_n, N_n have the form (7.41) and (7.42) and, using the same notation as in (7.46), the following estimates hold: We also have that the coefficients a and c satisfy (8.21) for N_n ∼ N_0 + C|S|n. In particular one has that Proof. We prove the Lemma by induction on n. If one assumes that we have already constructed the map L_n such that all the properties above are satisfied, then we proceed as follows. First of all, by (8.19) one notes that hypotheses (7.48) of Lemma 7.67 are satisfied. Then we apply the Lemma to the field F_n. We let L_{n+1} be the map given by Lemma 7.67. It satisfies (6.10), (6.9) and (6.11) thanks to bound (7.50) (recall the definition of κ_3 in (7.120) in Corollary 7.68). We set F̃_n := (L_{n+1})_* F_n (see (7.52)), which has the form for which all the bounds (7.54), (7.55), (7.56), (7.57), (7.58), (7.59) and (7.61) hold, and these bounds together with the inductive hypotheses imply that the field F̃_n satisfies bounds like (8.16)-(8.21) except for (8.19). Actually we prove better bounds on Ñ^{(1)}_n, Ĥ^{(θ,0)}_n. Indeed we have that ω_{n+1} ∈ R^d is diophantine (see (7.55)) and (8.24) and, by (7.56), (7.57), (7.58), (7.59) and bounds (6.15) for F_n, these are bounds even better than (8.19). Reasoning in the same way, bounds (7.60) and (7.61) together with the inductive hypotheses (7.44)-(8.20) imply bounds (8.22) with K_n^{−κ_2} instead of K_n^{−κ_2+µ+4}. We recall the following. By Proposition 7.67, the rank of K̃_n is increased proportionally to the cardinality |S| of the set S (hence we set N_{n+1} ∼ N_n + C|S|).
Now, by the definition in Theorem 6.2, we have that the field F_{n+1} is given by F_{n+1} = (Φ_{n+1})_* F̃_n, where the map Φ_{n+1} is generated by the field g_{n+1} in (6.15) with n replaced by n + 1. We have to show that the map Φ_{n+1} does not change the size of the coefficients of Ñ^{(1)}_n, N^{(2)}_n, in such a way that the estimates on F_{n+1} remain essentially the same as those on F̃_n. First we note that, by the form of the map Φ_{n+1}, one has Let us first study the term that does not contain the constant-coefficient term Ñ^{(n)}_0. Again by the form of the map Φ_{n+1}, which is generated by g_{n+1} ∈ B̃, we have that Φ_{n+1} preserves the pseudo-differential structure of the vector fields. By setting we have that the coefficients of (Π_N F)^{(w)} come from the term Π_N(Φ_{n+1})_*(N_n + Ñ^{(2)}_n + Π_N Ĥ_n) or from the term Π_N(Φ_{n+1})_*(Π_{N^⊥} Ĥ_n). Obviously the first coefficient satisfies (8.19) using (8.26) and the fact that, in the low norm p_1, Φ_{n+1} ≈ 1 + O(δ K_n^{−κ_2+µ}). The second term satisfies (8.19) because one has Now we use the definition of g_{n+1} and by item (iii) in Definition 6.43 we obtain that where r_n satisfies bounds (6.7) and Π_{K_{n+1}} Π_A F̃_n = (1 + h_{n+1}) Π_A Ĥ_n. Equation (8.19) simply follows by applying Lemma 6.50 and the inductive hypotheses. The bounds (8.22) follow similarly, recalling the first estimate in (7.14) of Lemma 7.58. In order to prove the inductive basis we reason as follows. First we note that if n = 0 then we cannot apply Lemma 7.67 in order to define the map L_1 and the field F̃_0. On the other hand, we can apply Lemma 8.80, which provides the same result. Then one can reason as above using the map Φ_1.
one has that satisfies the Mel'nikov conditions (see 6.43) with (F_n, K_n, γ_n, O_0, s_n, a_n, r_n). Here F̃_n is the vector field defined in (8.23).
Then in Proposition 8.82 we may choose Proof. We proceed by induction, assuming that our claim holds true up to n. We shall systematically use the bounds on F_n, F̃_n given in Proposition 8.82. We show that for any parameter ξ ∈ Λ^{2γ_n}_{n+1} ∩ P^{2γ_n}_{n+1} ∩ S^{2γ_n}_{n+1} we can construct an approximate inverse W. As we have seen explicitly in (7.122)-(7.125), the operator N = Π_{K_{n+1}} Π_X ad(Π_N F̃_n) is block diagonal and decomposes into four equations, so that also W is made of four blocks. We have the trivial multiplication by 1/(1 + h_{n+1}) and the following operators: which is an approximate inverse of Π_{K_{n+1}}(ω_{n+1} + H^{(θ,0)}_n)∂_θ. This is used for the inversion of the first two equations in (7.125); note that this is a linear operator acting on H^{p_0}(T; ℓ_{a,p}) ∩ H^p(T; ℓ_{a,p_0}); which is a linear operator acting on H^{p_0}(T; ℓ_{a,p}) ∩ H^p(T; ℓ_{a,p_0}). Note that in fact the adjoint acts on the much bigger dual space; however, we need to find an inverse on the space of regular vector fields. Hence we need this stronger notion of invertibility.
We show that W defined above satisfies all the properties of Definition 6.43. First of all, in order to deal with (i), we need that ω_{n+1} is a diophantine vector of R^d. Then we set W^{(n)}_0 := (ω_{n+1} · ∂_θ)^{−1} Π_{K_n}. This is possible since by Lemma 7.67 the size of H^{(θ,0)}_n is so small that it is possible to put it inside the remainder term u, see formula (6.7). This choice of W^{(n)}_0 is possible since ξ ∈ S^{2γ_n}_{n+1}. We will see that this approximation is sufficient to get a good approximate solution that satisfies the requirements in Definition 6.43. In (ii) we ignore H^{(θ,0)}_n exactly for the same reason as above. Moreover, recalling (F̃_n)^{(w,w)}(θ) = (N^{(n)}_0)^{(w,w)} + Ñ^{(1)}_n + Ñ^{(2)}_n + (Π_N Ĥ_n)^{(w)}, we ignore the term Ñ^{(2)}_n, again due to Lemma 7.67 (see bound (8.29)). We study the equation with f ∈ Z and g ∈ X (see Definition 7.69), where The method we use to invert L̃_n is to approximately diagonalize it. Hence we get the approximate solution in item (iii) following the same diagonalization procedure, since the operators have the same eigenvalues. In particular, this ensures that W^{(n)}_− acts on H^{p_0}(T; ℓ_{a,p}) ∩ H^p(T; ℓ_{a,p_0}) and not only on its dual space. We claim that, by the construction of the field F̃_n, the operator L̃_n satisfies the hypotheses of Proposition 7.70. Indeed it has the form with coefficients a^{(i)}_j, i = 0, 1 and j = 1, . . . , N_{n+1}. Hence the smallness conditions in (7.137) follow.
By applying Proposition 7.70 to the operator L̃_n in (8.39) we get a change of coordinates S^{(n)} := 1 + Ψ^{(n)} (given in (7.138)) such that the operator takes the form (7.172), where We have that L^+_n in (8.42) satisfies the hypotheses of Lemma 7.75. In order to prove a bound like (6.7) we fix the number N > 0 in Lemma 7.75 in such a way that N^{−κ_4} ≤ K_n^{−κ_2}. (8.45) Using (8.34) we may set N = K_0^{(3/2)^{n*}}. Lemma 7.75 is a KAM-like scheme; the point is that, if we are at step n of the abstract algorithm in Theorem 6.2, we have to perform n* KAM steps in Lemma 7.75. With this reduction procedure we approximately diagonalize L_n up to a remainder which is so small that it is negligible in the construction of the approximate inverse W_n.
We have the following Lemma. One has that the transformation S^{(n)} (see (8.44)), which conjugates L̃_n in (8.41) to L^+_n in (8.42), is generated by the function s^{(n)} given in (7.146). The bounds (8.59) imply the same bounds on the difference (s^{(n)} − s^{(n−1)}). Hence, by the triangular inequality and the form of the coefficients a
Measure estimates
In this last Section we prove that the measure of the set of "good" parameters is large as ξ → 0. In particular, in Section 8 we have seen that Theorem 1.3 holds in the set with ν defined in (8.34) (see Lemma 8.83). Before performing such measure estimates, we first prove that the map which links the parameters ξ to the frequency ω(ξ), and ξ → µ_{σ,j}(ξ), is a diffeomorphism.
The invertibility of M^{(2)} relies on the invertibility of the matrix 1 + R := 1 + α_λ A + β_λ V^2 A V^{−2}. We have that R has at most rank 2, hence it has at most two eigenvalues different from zero. Say that µ_{1,2} = µ_{1,2}(v_i) are such eigenvalues, which in principle depend on the v_i. Now one has that 1 + R has d − 2 eigenvalues equal to 1 and two equal to 1 + µ_{1,2}(v_i). One must have that 1 + µ_{1,2}(v_i) ≠ 0. Hence, if µ(v_i) is not a trivial polynomial in the variables v_i, then one gets the invertibility of M^{(2)} as a genericity condition on the v_i. Otherwise one has to exclude some values of α_λ and β_λ by imposing a genericity condition on a_2, a_3, a_6, a_7 (as done in equations (9.9) and (9.10)) and then taking a generic choice of the v_i. This second option does not occur. Indeed, one notes that the vector w_1 := (1, . . . , 1) ∈ R^d is orthogonal to the kernel of the matrix α_λ A. Moreover, the vector w_2 := v, where v := (v_1^2, . . . , v_d^2), is orthogonal to the kernel of the matrix β_λ V^2 A V^{−2}. Hence the range of the matrix R is generated by {w_1, w_2}. One can note that (9.11) The 2 × 2 matrix which represents R has eigenvalues given by The dimension of the range of R, for any α/λ ≠ 0 and β/λ ≠ 0, depends only on the v_i for i = 1, . . . , d.
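The reduction behind this eigenvalue computation is standard linear algebra: a rank-two operator is determined by its action on a basis of its range. A sketch follows; here the coefficients m_{ij} are generic placeholders for the entries obtained by expanding R w_1, R w_2 in the basis {w_1, w_2}, not the paper's explicit expressions:

```latex
% If Range(R) = span{w_1, w_2} and
%   R w_j = m_{1j} w_1 + m_{2j} w_2,  j = 1, 2,
% then the nonzero eigenvalues of R are those of the 2x2 matrix (m_{ij}):
\mu_{1,2} \;=\; \frac{m_{11}+m_{22}}{2}
\;\pm\; \sqrt{\Big(\frac{m_{11}-m_{22}}{2}\Big)^{2} + m_{12}\,m_{21}}\,,
% and 1 + R is invertible precisely when \mu_1 \neq -1 and \mu_2 \neq -1.
```

Since the m_{ij} are polynomial in the v_i, the non-vanishing of 1 + µ_{1,2} is indeed a genericity condition on the v_i, as claimed in the text.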
Lemma 9.88. For all non-resonant choices of (a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8) there exists a non-trivial polynomial in the v_i such that, for all choices of (v_1, . . . , v_d) with v_i "generic" with respect to the polynomial, the following holds. For all ℓ, j, k, σ_1, σ_2 such that: if σ_1 = σ_2 then (l, j, k) ≠ (0, j, j), and moreover Σ_i l_i + σ_1 = σ_2, the affine map is not identically zero. Proof.
Remark 9.89. Just to fix the ideas, we give some examples of cubic nonlinearities (see (1.6)) for which the extraction of parameters gives the twist condition on the tangential sites: the classical cubic NLS with a_1 = 1, a_i ≡ 0 for i = 2, . . . , 8; the derivative NLS with a_3 = 1, a_i = 0 for i ≠ 3 (this case has been studied in [21]).
The estimates of "good" parameters
We prove the following Proposition.
The previous results imply that one has |O_0 \ C_ε| ≤ C γ ε^{2(d−1)} ≤ C ε^{2d} c, (9.39) where we have used the definition of γ in (5.54) and (5.55). In particular, one gets that the relative measure ε^{−2d}|O_0 \ C_ε| is O(c). This implies that the relative measure of the Cantor set C_ε is positive if c is small.
Proof of Theorem 1.3
Consider the vector field F in (4.8). By Lemma 5.38, the choice of parameters in (6.1) and Lemma 6.41, we have that F satisfies all the hypotheses of Theorem 6.2. Hence on the set O_∞ given by Theorem 6.2 the result of Theorem 1.3 holds. It remains to prove that O_∞ satisfies the measure estimate in (1.7). Proposition 8.83 guarantees that the set C_ε in (9.1) is contained in O_∞. We choose C_ε as the set on which Theorem 1.3 holds. In particular, Proposition 9.90 implies that C_ε satisfies (1.7). This concludes the proof.
Proof of Theorem 1.5
Concerning Theorem 1.5 in which the nonlinearity f is merely differentiable, we just give a sketch of the proof since it is very similar to the one of Theorem 1.3.
One can repeat word by word the arguments of Sections 3 and 4. One gets that the vector field in (4.8) is defined on the domain (4.4) with s_0 = a_0 ≡ 0. This implies that the norm ‖·‖_{H^p(T^d_s × T_a)} is the Sobolev norm ‖·‖_{H^p(T^d × T)} ∼ ‖·‖_{0,0,p}, see Remark 5.19 and (5.14). Again, by Lemma 5.38, we have that F satisfies all the hypotheses of Theorem 6.2, which implies that on the set O_∞ given by Theorem 6.2 the result of Theorem 1.5 holds. We now give a sketch of the proof that O_∞ satisfies (1.9). The reasoning we follow is very similar to the one used to prove Theorem 1.3. The main difference is that we set L_n = 1 for n ≥ 0; recall that the L_n are the compatible changes of variables introduced in Definition 6.44. This is due to the fact that the diffeomorphisms of the torus (from which the L_n are chosen in the analytic case) are not close to the identity in Sobolev class, i.e. we do not have the second formula in (7.18).
One can show by induction that the linearized operator of the vector field F_n, for n ≥ 0, has the form in (7.38)-(7.44), with δ^{(1)}_{p_1}, δ^{(2)}_{p_1} ∼ O(|ξ|) (see equation (7.45)). Again, recall that in the analytic case we choose the maps L_n so that the size of δ^{(1)}_{p_1}, δ^{(2)}_{p_1} decreases as n goes to infinity. We claim that Proposition 8.83 holds also when δ^{(1)}_{p_1}, δ^{(2)}_{p_1} do not tend to zero. Indeed such a condition is only used in order to prove that the sequence of analyticity radii a_n does not go to zero. Indeed the proof of Proposition 8.83 relies on the existence of changes of variables which approximately diagonalize the linearized vector field. Such changes of variables are defined in Sections 7.2 and 7.3 and work also in Sobolev class.
Factors Associated With Use of Telemedicine for Follow-Up of SLE in the COVID-19 Outbreak
Objective: To investigate the factors associated with telemedicine (TM) use for follow-up of Systemic Lupus Erythematosus (SLE) patients in the COVID-19 pandemic. Methods: This was a single-center cross-sectional study conducted in Hong Kong. Consecutive patients followed up at the lupus nephritis clinic were contacted for their preference in changing the coming consultation to TM in the form of videoconferencing. The demographic, socioeconomic, and disease data of the first 140 patients who opted for TM and 140 control patients who preferred to continue standard in-person follow-up were compared. Results: The mean age of all the participants was 45.6 ± 11.8 years, and the disease duration was 15.0 ± 9.2 years. The majority of them were on prednisolone (90.0%) and immunosuppressants (67.1%). The mean SLEDAI-2k was 3.4 ± 2.4, physician global assessment (PGA) was 0.46 ± 0.62 and Systemic Lupus International Collaborating Clinics (SLICC) damage index was 0.97 ± 1.23. A significant proportion of the patients (72.1%) had 1 or more comorbidities. It was found that patients with a higher mean PGA (TM: 0.54 ± 0.63 vs. control: 0.38 ± 0.59, p = 0.025) and family monthly income > USD 3,800 (TM: 36.4% vs. control: 23.6%; p = 0.028) preferred TM, while full-time employees (TM: 40.0% vs. control: 50.7%; p = 0.041) preferred in-person follow-up. These predictors remained significant in the multivariate analysis after adjusting for age and gender. No other clinical factors were found to be associated with the preference for TM follow-up. Conclusion: When choosing the mode of care delivery between TM and physical clinic visits for patients with SLE, the physician-assessed disease activity and the patient's socio-economic status appeared to be important.
INTRODUCTION
Since coronavirus disease 2019 (COVID- 19) was declared a pandemic, the rapidly increasing number of cases and deaths overwhelmed the health care system worldwide. Systemic lupus erythematosus (SLE) is a chronic remitting-relapsing disease that affects multiple organ systems. Patients with SLE are at heightened risk of infection due to the underlying disease and the use of immunosuppressive therapies (1). The increased prevalence of comorbidities, such as hypertension and cardiovascular diseases, have been reported to be poor prognostic factors of COVID-19 (2,3). During this extraordinary time, lupus patients face the difficult choice between risking COVID-19 exposure during a clinician visit and postponing needed care. Patients with SLE typically require regular follow-up (FU) visits to ensure early detection of flares and to monitor the toxicity of immunosuppressive therapy. The unattended patients are at risk of sub-optimal disease control which will lead to damage accrual and high costs (4,5). An alternative option would be to adopt telemedicine (TM) or telehealth, the use of telecommunication technologies to provide medical information and services. In fact, the use of TM to reduce potential exposure to COVID-19 has been recommended by international rheumatology societies (6,7). Communication via telephone or video consultations are recommended over emails because of privacy concerns (8). Specific statements on the scope and limitations of the use of video consultations in rheumatology patients have also been published (9).
Despite being widely used during the pandemic, evidence for TM in rheumatology is sparse. According to a systematic review in 2017, there is no good evidence supporting the use of TM for the management of rheumatic diseases (10). In total, 2 studies done during the COVID-19 outbreak reported moderate acceptance of TM as the mode of care in patients with rheumatic diseases (11,12). However, there are no data on the clinical factors associated with the use of TM in patients with SLE. We hypothesized that the decision to choose TM as the mode of FU could be predicted by certain patient profiles.
In this study, we aimed to examine the demographic, socio-economic, psychological, disease, and treatment factors associated with the patient's preference of use of TM for FU of SLE.
Study Design and Patients
This was a single-center, observational, cross-sectional study. The study was performed at the lupus nephritis clinic of a regional hospital in Hong Kong where most of the patients reside nearby. From May 1 to November 30, 2020, all consecutive adult patients with SLE, according to the 2019 EULAR/ACR classification criteria, were invited to participate in the study (13). Patients (or their carers) needed to possess the technology required to conduct a TM visit (a smartphone, tablet or computer with audio and video capabilities and internet connection) via a real-time video conferencing software ZOOM (Zoom Video Communications Inc, California, US). Patients on intravenous cyclophosphamide were excluded. All patients who had given written informed consent were asked about their interest in changing the coming scheduled FU to a TM-based one in the form of a videoconference. The first 140 patients who agreed to use TM care were recruited. Another 140 consecutive patients who preferred to continue standard FU were enrolled as controls. All participants were asked to complete a set of questionnaires including the LupusQoL, Health Assessment Questionnaire Disability Index (HAQ-DI), and Hospital Anxiety and Depression Scale (HADS). The LupusQoL (0-100, 100 worst) is a disease-targeted patient-reported outcome measure that was developed and validated for SLE patients (14). It consists of both 8 health-related and 4 non-health related domains to enable an understanding of the broader burden of the disease. HAQ-DI (0-3, 3 most disabled) covers various common daily activities to assess disability (15). HADS (>8 denotes anxiety or depression) was used to assess anxiety and depression in medical patients (16). The socio-economic status of the patients was also collected through a questionnaire which has been used in studies of local patients with autoimmune rheumatic diseases (17,18).
The study was approved by the local research ethics committee (The Joint Chinese University of Hong Kong -New Territories East Cluster Clinical Research Ethics Committee, No. 2020-0254) and conducted according to the principles of the Declaration of Helsinki.
Assessments
The disease variables recorded included disease duration, comorbidities, nephritis class, ever presence of rash/ joint pain, proteinuria, medications, disease activity, and Systemic Lupus International Collaborating Clinics/American College of Rheumatology (SLICC/ACR) Damage Index (SDI) (19). SLE disease activity was assessed by the Systemic Lupus Erythematosus Disease Activity Index 2000 (SLEDAI-2k) and physician global assessment [PGA (0-3, 3 most active)] (19). Disease remission was defined as absence of clinical activity with no use of systemic glucocorticoids (GC) and immunosuppressive agents; and lupus low disease activity state (LLDAS) as a SLEDAI 2k ≤ 4, PGA ≤ 1 with GC ≤ 7.5 mg of prednisone daily and well tolerated standard maintenance doses of immunosuppressive agents (20). All investigations and assessments were performed within 1 month before or after the patients were recruited. The clinical assessments of the control group were done face-to-face while those for the TM group were by either face-to-face or videoconferencing. All patients were required to come to the hospital for blood and urine tests prior to the scheduled FU. The clinical data were retrieved from the electronic health record (EHR) manually and were documented into a computer database for analysis.
Statistical Analysis
The overall demographic and clinical characteristics of the recruited patients were reported as mean values with standard deviations for continuous variables and as numbers and percentages for categorical variables. The patients in the TM and control groups were compared by chi-square test or fisher exact test and student t-test where appropriate. Binary logistic regression was conducted for analysis of independent predictors with respect to preference over TM FU. Age, gender, and other predictors with p < 0.1 in the univariate analyses were put into the regression model. A 2-tailed probability value of p < 0.05 was considered statistically significant. Statistical analyses were performed using the Statistics Package for Social Sciences V.26.0 (IBM Corporation, Armonk, NY, USA).
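The binary logistic regression step can be illustrated outside SPSS. The following is a minimal Newton-Raphson sketch in numpy (not the package the authors used); the odds ratio reported for a predictor is the exponential of its fitted coefficient:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Fit logistic regression by Newton-Raphson; returns [intercept, coefs...].

    X: (n, p) predictor matrix without an intercept column; y: 0/1 outcomes.
    """
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    y = np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

# The odds ratio for a predictor is exp(beta); e.g. beta = 0.64 gives OR ~ 1.90.
```

For a single binary predictor the fitted odds ratio reproduces the empirical odds ratio of the underlying 2x2 table.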
RESULTS
A total of 332 patients with SLE were screened and 34 were excluded due to the lack of required equipment. Regarding the anxiety and depression scales, 32.9% and 29.6% of the patients had scores equal to or greater than 8, respectively. The socio-economic profile of the patients is presented in Table 2. Univariate analyses showed that higher PGA (TM: mean 0.54 ± 0.63 vs. control: 0.38 ± 0.59, p = 0.025) and family monthly income > USD 3,800 (HKD 30,000) (TM: 51/140, 36.4% vs. control: 33/140, 23.6%; p = 0.028) were associated with the preference for TM use, while fulltime employment (TM: 56/140, 40.0% vs. control: 71/140, 50.7%; p = 0.041) was related to physical FU. There was no statistically significant difference in the objective parameters of disease activity. There was also no other difference in the demographics, socio-psychological factors, lupus manifestations, disease damage, co-morbidities, and pharmacotherapies between the 2 groups of patients. Binary logistic regression analysis revealed that higher PGA, family monthly income > USD 3,800, and non-fulltime employment status remained independently associated with TM care (OR 1.05, 95% CI 1.01-1.09, p = 0.027; OR 1.90, 95% CI 1.24-3.79, p = 0.007; OR 1.89, 95% CI 1.13-3.17, p = 0.015, respectively) after adjustment for age and gender.
DISCUSSION
As we define the new normal for ambulatory care in the COVID era, we need a new approach to provide FU for our SLE patients, and TM is an obvious option. While there was conflicting evidence about the effectiveness of TM in an early systematic review, subsequent individual studies on specific disease entities have been promising (21). When compared with in-person care, TM resulted in greater reductions in severity in patients with depression, equally reliable outcome assessment in patients with low back pain, and similarly improved skin scores in patients with psoriasis (22)(23)(24). It was also suggested that TM helped to balance the healthcare workforce and to address manpower insufficiency (25,26). A randomized controlled trial in 2018 found that TM FU could achieve disease control similar to conventional care in rheumatoid arthritis patients with low disease activity or remission (27). The COVID-19 pandemic compelled the rapid adoption of TM in rheumatology. For instance, one Italian study reported a smooth switch of 80% of the outpatient appointments to TM (28). Another study done in the US noted that TM peaked at 92% of the total visits and was accompanied by a large shift in provider EHR utilization (29). A research letter reported that 52.7% of patients with predominantly arthritis in a rheumatology department in Spain considered phone consultation to be useful, and no specific patient profile was associated with this opinion (30). It was also commented by the authors of a study reporting the experience of a rheumatology teleclinic in the UK that old age or presence of comorbidities were not reasons for not offering TM FU (31). However, data relating specifically to patients with SLE are scarce.
It might appear intuitive that TM is more suitable for, and more easily accepted by, patients with milder disease. On the other hand, the benefit of offering TM FU to patients with major organ involvement, who might simply omit FU for fear of COVID-19 infection, would be more pronounced. In a representative population of patients with significant lupus disease mostly requiring systemic glucocorticoid and immunosuppressive agents, we found that higher physician-assessed disease activity was associated with the preference for TM FU. This could be due to the fear of infection exposure during clinic visits in patients with more active disease, as we have previously found that choice of TM FU was associated with the perception that TM FU would reduce the risk of infection while routine care would increase that risk (10). In fact, a survey distributed to patients with SLE during the outbreak showed that their median fear of COVID-19 was 8 out of a maximum scale of 10 (32). Interestingly, in a study done before the COVID-19 outbreak, when offered as an option, video TM was also more likely to be used by rheumatoid arthritis patients with higher disease activity (33). Another possible explanation for the higher PGA in the TM group could be the perceived less stable disease when the patients were assessed virtually. It should also be noted that the small difference in PGA might not be clinically meaningful. Other clinical factors such as lupus manifestations, objective activity, disease damage, pharmacotherapies, disability, and depression/anxiety symptoms did not seem to affect patients' choice of mode of FU.
In this study, we also found that higher monthly family income favored TM use. Cavagna et al. reported the results of a survey on the propensity for adopting TM in 175 patients with connective tissue disease, of whom 49 had SLE (11). It was found that a college degree and distance from the hospital were independent predictors for the acceptance of TM. It might seem conceivable that patients who are socio-economically more privileged would be keener to use TM (34). This issue needs to be addressed before universal integration of TM and standard care, in order not to exacerbate health care disparities. On the other hand, we found no association of distance from hospital with the preference for TM. This could be related to the fact that most of our patients were residing close to the hospital.
Another intriguing finding of the study is the association of fulltime employment status with standard in-person visits. A complete society lock-down or prohibition of social mobility was not in place in Hong Kong, which meant patients with fulltime employment still had to go to work. As a result, the additional infection risk associated with attending the scheduled clinic FU might have seemed negligible.
There are several limitations in this study. First, the results should be interpreted in the context of the local outbreak status and mitigation measures implemented. Second, the study was conducted in a lupus nephritis clinic with mainly Asian patients having major organ involvement. The results might not be generalizable to the entire SLE population, which would include more patients with mild disease. Lastly, the suitability, mainly reflected by safety and efficacy, of TM FU in patients with SLE was not evaluated in the current study.
To conclude, when offered as an option in SLE patients, preference for TM FU was associated with non-fulltime employment, higher physician-determined disease activity, and better family income. With the availability of vaccines and gradual loosening of containment measures, the results could provide information on the factors to consider when we choose the mode of care delivery during and after the COVID-19 outbreak. The physician global assessment and socio-economic status, rather than other clinical factors such as treatments or comorbidities, appeared to be the important determinants of mode of follow-up in lupus patients.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Joint Chinese University of Hong Kong -New Territories East Cluster Clinical Research Ethics Committee, No. 2020-0254. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
HS, C-CS, and L-ST: study design. HS, EC, IC, S-LL, and TL: data collection and data analysis. HS and L-ST: drafting of manuscript. All authors critically revised the manuscript for important intellectual content.
Pre-stability and functional tests of a leave-on hair emulsion developed with Aloe vera extract
1Curso de Farmácia, Centro Universitário Cesmac, Rua Cônego Machado, 984, 57051-160, Farol, Maceió – AL, Brazil 2Faculdade de Farmácia, Laboratório de Cosméticos, Universidade Federal do Rio Grande do Norte, Rua General Gustavo Cordeiro de Faria, s/n, 59012-570, Petrópolis, Natal – RN, Brazil 3Mestrado Profissional em Biotecnologia em Saúde Humana e Animal, Centro Universitário Cesmac, Rua Angelo Neto, 51, 57151-530, Farol, Maceió – AL, Brazil
Introduction
Brazil is among the major world consumers of cosmetics. Hair is the number one factor when it comes to women's beauty care. Women want to change color and style and, at the same time, care about hair moisturizing and reconstruction. 1 Biologically, hair protects the human body by keeping the scalp from being exposed to the sun. Hair is often used as a symbol of femininity and beauty. 2 A woman with damaged hair may have her self-esteem greatly affected, which makes her seek different products that appeal to hair moisturizing. 3 Conditioners are cationic emulsions that improve hair appearance and manageability, providing volume, shine and softness. They make hair stronger, soften the cuticles and reduce friction during humid hair brushing, allowing the comb to slide through the hair and reducing future mechanical damage. Washing hair on a daily basis contributes to dryness of the strands. Hair conditioner is a great aid in recovering hair shine and volume. 4 Combing emulsions were created as conditioners with a lower concentration of active agents. Today they are seen as complements to the comb, as moisturizing agents which give hair the same effect as conditioners. They also facilitate hairstyling, even hours after washing. Such a cosmetic mechanism tends to improve hair texture and softness. Its purpose is to enhance hair appearance after it is cleaned. 5 The history of Aloe vera is ancient, and it is in the literature of several cultures. Its name probably derives from the Arabic word alloeh, which means a bitter and shiny substance. The first record of Aloe vera use was on a clay surface in Mesopotamia dated 2100 B.C. Known in ancient Egypt as "the plant of immortality", it may have been used by Cleopatra for hair and skin care. 6 In Brazil Aloe vera is popularly known as "babosa folha miúda", "babosa folha grande" and "erva babosa".
The parts of the plant which are used are the dried latex of the leaves and the mucilage obtained from them, known as "babosa gel". It is widely used in cosmetics for its emollient and moisturizing properties. For its high moisturizing properties, glycerin is added to Aloe vera to enhance its potential in cosmetic formulations. 7 It is constituted mainly of water - about 96%-98% - and the rest of its composition includes complex molecules of carbohydrates, enzymes, proteins, amino acids, vitamins, minerals, among others. 8 The confirmation of the efficacy of cosmetics is very important for consumers, who want to see the appealing results to which they were drawn when they purchased the product. Thus, new methodologies to evaluate such products have been continuously developed, with focus on scientific confirmation of the real benefits proposed. Besides testing products, it is extremely important to assess their effects on hair, in order to observe performance. 5 The test on strands of hair shows how the product acts in human hair, and predicts the result that can be expected. A lot of factors affect the way each type of hair reacts to the product. After the test, the product is directed to applicability. 4 In the phase of product development, along with the test on strands of hair, quality control trials should be conducted, in order to evaluate the physicochemical and microbiological characteristics of the product. 9,10 The product's stability is the period of time during which the product maintains - within the specified limits and throughout the storage and use period - the same properties and characteristics it had during manipulation. The pre-stability test is carried out as a preliminary test for the formulation continuance or improvement, depending on the results. There is concern with stability from the beginning, in ingredient selection. 11

Based on the already known moisturizing characteristics of the Aloe vera plant, and with the increasing market for hair products and the appeal of natural products, it is interesting to research and test different products with Aloe vera extracts, and then define the product for the best hair result.
The purpose of this paper was to develop a leave-on emulsion containing Aloe vera extract, obtained by two different extraction methods, and one extract acquired commercially. We also aimed at evaluating the formulation pre-stability and activity in hair strands.
Materials and Methods
The Aloe vera plant was collected at Farmácia Viva of the Pharmacy Department at Centro Universitário Cesmac, located in the town of Marechal Deodoro -Alagoas, in August 2018. The leaves were cut near the base of the plant.
The Aloe vera extracts were prepared on the same day they were collected. After that, the extracts were incorporated into the base formulation of the leave-on emulsion.
After the Aloe vera plant was collected, the leaves were opened for extracting the gel. The extracts were prepared by maceration: extract A was prepared by adding 70% alcohol to 300 g of Aloe vera mucilage; extract B by adding 270 mL of glycerin and 30 mL of cereal alcohol to 300 g of Aloe vera mucilage. They were left at room temperature for 8 days, in hermetically sealed containers, protected from light. After maceration, the extracts were filtered and stored in PET amber containers, at room temperature, for later addition to the formulation.
The raw materials for the formulation of the leave-on emulsion were acquired directly at a prescription pharmacy and donated by qualified suppliers. The emulsion (Table 1) was prepared according to Good Practices of Manipulation. 11 The sample (leave-on base formulation) was divided into 4 equal parts (250 g), named A, B, C and D. Emulsion A had no extract; 5% of extract A was added to emulsion B; 5% of extract B was added to emulsion C; and 5% of the commercial extract, acquired in a prescription pharmacy, was incorporated into emulsion D. All samples were stored at room temperature, in PET containers.
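The batching arithmetic above can be checked with a short calculation. This is an illustrative sketch, assuming the 5% (m/m) of extract is measured relative to each 250 g part; the variable names are not from the paper:

```python
# The four parts are quarters of the 1000 g base batch;
# parts B, C and D each receive 5% (m/m) of extract.
base_total = 1000.0                                  # g of leave-on base
parts = {name: base_total / 4 for name in "ABCD"}    # 250 g per part
extract_g = {name: parts[name] * 0.05 for name in ("B", "C", "D")}
```

Each extract-containing part therefore calls for 12.5 g of its respective extract.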
Physicochemical Analyses
The following tests were conducted: organoleptic characteristics - color, odor and aspect; pH, by direct reading of a 10% solution; and centrifugation at 300 rpm for 30 minutes. 10,12 Spreadability is defined as the expansion of a semi-solid formulation on a surface after a certain period of time. 13

Pre-stability Test

The purpose of the pre-stability test 14 is to analyze the period of time during which the product maintains - throughout the storage and use period - the same properties and characteristics it had during manipulation.
Twenty grams of each sample were weighed and stored in 4 different containers, totaling 16 containers. The storage conditions were: refrigerator (5ºC), heated chamber (45ºC), exposure to sunlight, and room temperature (25ºC).
After 15 days, the aforementioned physicochemical analyses were repeated.
Tests on Strands of Hair
The tests were adapted from protocols of companies that carry out safety and efficacy tests on cosmetics. The hair strands were weighed (approximately 16 g each) and divided into 5 equal parts. 15 Later all strands were shampooed with a standardized amount of 2 mL of neutral shampoo for 2 minutes and rinsed for 1 minute. For removing excess water, the strands were dried with 2 paper towels.
They were named Control (shampooed only), A (leave-on base), B (leave-on with extract A), C (leave-on with extract B) and D (leave-on with commercial Aloe vera extract); 0.50 g of the corresponding formulation was applied to strands A, B, C and D.
Combability Test
After the samples were applied to hair, a fine-tooth comb was slid through hair length in order to observe the slip point. The distance covered by the comb was measured in centimeters, after 10 combing strokes. The procedure was conducted by the same person, to eliminate interferences.
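The combability measurement above can be summarized as a relative improvement in comb slide distance versus the shampoo-only control. This is a sketch with an illustrative helper; the values in the test of the helper are hypothetical placeholders, not the paper's measurements:

```python
def percent_improvement(sample_cm: float, control_cm: float) -> float:
    """Relative increase in comb slide distance over the control strand, in %.

    A longer slide distance after 10 combing strokes is read here as easier
    combing (an assumption about the direction of the scale, not stated code).
    """
    if control_cm <= 0:
        raise ValueError("control distance must be positive")
    return 100.0 * (sample_cm - control_cm) / control_cm
```

Under this definition, a strand whose comb slides twice as far as the control scores a 100% improvement.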
Appearance Test on Hair Strands
After the combability test, the strands were detangled and placed in controlled temperature and relative humidity, 23°C and 33%, respectively, for 24 hours. After that the appearance of hair was analyzed. The results were photographed.
Results and Discussion
The samples were analyzed 24 hours after manipulation and again after the pre-stability test, presenting the following results: as shown in Table 2 for sample A (base emulsion), Table 3 for sample B (emulsion with extract A), Table 4 for sample C (emulsion with extract B), and Table 5 for sample D (emulsion with commercial extract). The white color of the base formulation was maintained in the three samples with added extract. When the pre-stability test was applied, the color changed slightly, becoming a little yellowish, in all samples exposed to direct sunlight. There was a possible problem with the antioxidant used in the formulation (BHT). A supplementary antioxidant agent could have been added to the formulation, such as vitamin E. The odor is characteristic of the essence in the base formulation, and there was no alteration when the extracts were added. In the pre-stability test, there was alteration in all samples under direct sunlight, especially sample A (only the base formulation), where the odor was intensely modified, becoming typical of ammonia. When in the heated chamber (45°C), sample B was slightly altered. The antioxidant agent possibly did not withstand the stress. We confirmed that the antioxidant system should be reviewed for this formulation.
As for appearance, the samples presented a homogeneous aspect in their base formulation and after the extracts were added. After the pre-stability test, the samples remained homogeneous. Only the ones subjected to high temperatures (heated chamber at 45°C) became heterogeneous, possibly due to the emulsifier/wax used in the formulation (cetostearyl alcohol).
The pH is fundamental for formulation stability and should be compatible with the site of application. The ideal pH for a hair emulsion is 4.0-5.5. The pH value remained 5.5 in all samples. In the pre-stability test, there were small alterations, but the pH remained acidic throughout and within the ideal range for hair.
The spreadability test - a physical test - indicates how much the samples spread 13 when applied to strands. Among the samples, there was no considerable difference. After the pre-stability test, the samples showed some alterations, and their spreadability increased after exposure to heat and direct light. The formulation had physical problems when subjected to some kind of stress, and, therefore, its spreadability was also altered. The centrifugation test is used to assess the physical stability of emulsions. When subjected to centrifugation, the components are likely to separate if the emulsion does not have good stability. It is one of the mandatory tests for emulsions after preparation. 16 In centrifugation there was no phase separation in the manipulated samples. In the pre-stability test, there was no separation in the samples kept in the refrigerator, nor in the ones exposed to direct light or kept at room temperature. The phases separated (coalescence) in the samples kept in the heated chamber (45°C), which demonstrates problems in their physicochemical stability.
The stability of formulations is linked to product safety because toxic and irritating by-products can be generated. 17 In the results obtained from the combability test, as shown in Table 6, there was no significant difference in how much the comb slid among the strands containing Aloe vera extract, when compared to the strands to which only the leave-on base formulation was applied. When compared to the sample that was washed only with shampoo, there was considerable improvement; sample D showed 100% improvement. The incorporation of Aloe vera extract contributed very little to better hair combability.
After the strands were washed and the leave-on emulsion samples were applied, they were left for 24 hours in 23°C temperature and 33% relative humidity. After such period, the visual analysis of the strands was conducted, and photographs were taken (Figures 1 and 2).
The samples showed improvement in their visual aspect and softness to the touch, when compared to the strands washed only with shampoo. Nevertheless, there was no significant difference between the strands with the base formulation and those with the added extracts. Therefore, a new study on the extracts would be relevant, with a view to improving this aspect.
Conclusion
The results of the physicochemical tests performed just after manipulation met the standards which were set; however, in the pre-stability tests, all samples exposed to direct sunlight showed changes in color and smell, and those kept for 15 days at 45 ºC presented coalescence (heterogeneous aspect and phase separation in the centrifugation test).
According to the data from the functional tests (strands of hair and combability), it was concluded that the base emulsion formulation showed relevant improvement when applied to hair strands, compared to the control sample. The extracts added to the base emulsion formulation showed no influence in the tests on hair strands, suggesting interference of the solvent polarity on the hair moisturizing action of Aloe vera already reported in the literature. Therefore, the need for further studies is evident, as is the enhancement of the formulation used, whether to improve its performance in the combability test or in the stability test, as such tests are important requirements.
Neutrino Mixing and Leptogenesis in $\mu-\tau$ Symmetry
We study the consequences of the $Z_2$-symmetry behind the $\mu$--$\tau$ universality in neutrino mass matrix. We then implement this symmetry in the type-I seesaw mechanism and show how it can accommodate all sorts of lepton mass hierarchies and generate enough lepton asymmetry to interpret the observed baryon asymmetry in the universe. We also show how a specific form of a high-scale perturbation is kept when translated via the seesaw into the low scale domain, where it can accommodate the neutrino mixing data. We finally present a realization of the high scale perturbed texture through addition of matter and extra exact symmetries.
Introduction
Flavor symmetry is commonly used in model building seeking to determine the nine free parameters characterizing the effective neutrino mass matrix Mν, namely the three masses (m1, m2 and m3), the three mixing angles (θ23, θ12 and θ13), the two Majorana-type phases (ρ and σ) and the Dirac-type phase (δ). Incorporating family symmetry at the Lagrangian level leads generally to textures of specific forms, and one may then study whether or not these specific textures can accommodate the experimental data involving the above mentioned parameters ([1] and references therein). The recent observation of a nonzero value for θ13 from the T2K [2], MINOS [3], and Double Chooz [4] experiments puts constraints on models based on flavor symmetry (see Table 1, where the most recent updated mixing angles are taken from [5]). In this regard, recent, particularly simple, choices for discrete and continuous flavor symmetry addressing the non-vanishing θ13 question were respectively worked out ([6] and references therein). The µ-τ symmetry [7,8] is enjoyed by many popular mixing patterns such as tri-bimaximal mixing (TBM) [9], bimaximal mixing (BM) [10], hexagonal mixing (HM) [11] and scenarios of A5 mixing [12], and it was largely studied in the literature [13]. Any form of the neutrino mass matrix respects a (Z2)² symmetry [14], and we can define the µ-τ symmetry by fixing one of the two Z2's to express exchange between the second and third families, whereas the second Z2 factor is to be determined later by data or, equivalently, by Mν parameters. The whole (Z2)² symmetry might turn out to be a subgroup of a larger discrete group imposed on the whole leptonic sector. In realizing µ-τ symmetry we have two choices, namely (S−, S+, as explained later), and thus we have two textures corresponding to µ-τ symmetry.
It is known that both of these textures lead to a vanishing θ13 (with S− achieving this in a less natural way), and thus perturbations are needed to remedy this situation [15]. In [16] we studied the perturbed µ-τ neutrino symmetry and found the four patterns, obtained by disentangling the effects of the perturbations, to be phenomenologically viable.

Table 1: Results for the neutrino mixing angles taken from the global fit to neutrino oscillation data [5]. (NH, IH) denote respectively Normal and Inverted Hierarchies.

Parameter                    Best fit   1σ range
sin²θ12/10⁻¹ (NH or IH)      3.08       2.91 - 3.25
sin²θ13/10⁻² (NH)            2.34       2.15 - 2.54
sin²θ13/10⁻² (IH)            2.40       2.18 - 2.59
sin²θ23/10⁻¹ (NH)            4.37       4.14 - 4.70
sin²θ23/10⁻¹ (IH)            4.24       5.94 - 6.11

In this work, we re-examine the question of exact µ-τ symmetry and implement it in a complete setup of the leptonic sector. Then, within type-I seesaw scenarios, we show the ability of the exact symmetry to accommodate lepton mass hierarchies. Upon studying its effect on leptogenesis we find, in contrast to other symmetries studied in [6] and [17], that it can account for it. The reason behind this fact is that fixing just one Z2 in µ-τ symmetry leaves one mixing angle free, which can be adjusted differently in the Majorana and Dirac neutrino mass matrices (MR and MD), thus allowing for different diagonalizing matrices. For the mixing angles, and in order to accommodate data, we introduce perturbations at the seesaw high scale and study their propagation into the low-scale effective neutrino mass matrix. As in [16], we consider that the perturbed texture arising at the high scale keeps its form upon RG running which, in accordance with [18], does not affect the results in many setups. As to the origin of the perturbations, we shall not introduce explicit symmetry breaking terms into the Lagrangian [19], but rather follow [16], and enlarge the symmetry with extra matter and then spontaneously break the symmetry by giving vacuum expectation values (vev) to the involved Higgs fields.
The plan of the paper is as follows: In Section 2, we review the standard notation for the neutrino mass matrix and the definition of the µ-τ symmetry. In Section 3, we introduce the type-I seesaw scenario. In Subsection 3.1, we address the charged lepton sector, whereas in Subsection 3.2 we study the different neutrino mass hierarchies. In Subsection 3.3, we study the generation of lepton asymmetry, and in Subsection 3.4 we examine the mixing angles in a particular perturbed texture describing approximate µ-τ symmetry. In Subsection 3.5 we present a theoretical realization of the perturbed texture. We end by discussion and summary in Section 4.
Notations
In the Standard Model (SM) of particle interactions, there are 3 lepton families. The charged-lepton mass matrix linking left-handed (LH) fields to their right-handed (RH) counterparts is arbitrary, but can always be diagonalized by a bi-unitary transformation:

Ml = V_l^L diag(me, mµ, mτ) (V_l^R)†.   (1)

Likewise, we can diagonalize the symmetric Majorana neutrino mass matrix by just one unitary transformation:

Mν = Vν diag(m1, m2, m3) Vν^T,   (2)

with mi (for i = 1, 2, 3) real and positive. The observed neutrino mixing matrix comes from the mismatch between Vl and Vν in that

V_PMNS = (V_l^L)† Vν.   (3)

If the charged lepton mass eigenstates are the same as the current (gauge) eigenstates, then V_l^L = 1 (the unity matrix) and the measured mixing comes only from the neutrinos: V_PMNS = Vν. We shall assume this, saying that we are working in the "flavor" basis. As we shall see, corrections due to V_l^L ≠ 1 are expected to be of order of ratios of the hierarchical charged lepton masses, which are small enough to justify our assumption of working in the flavor basis. However, one can treat these corrections as small perturbations and embark on a phenomenological analysis involving them [19].
We shall adopt the parametrization of [20], related to other ones by simple relations [1], where V_PMNS is given in terms of three mixing angles (θ12, θ23, θ13) and three phases (δ, ρ, σ), as follows. For a non-degenerate mass spectrum, the form of the Z2-matrix S is given by [17]: where the two S's correspond to having, in diag(±1, ±1, ±1), two pluses and one minus, the position of which differs in the two S's (the third Z2-matrix, corresponding to the third position of the minus sign, is generated by multiplying the two S's and noting that the form invariance formula Eq. (6) is invariant under S → −S).
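The mixing-angle part of the parametrization can be illustrated numerically. The sketch below builds the standard PDG-style matrix U = R23(θ23)·U13(θ13, δ)·R12(θ12); the exact phase placement of Ref. [20] may differ, and the Majorana phases (ρ, σ), which would enter as an extra diagonal phase matrix, are omitted:

```python
import numpy as np

def pmns(th12, th23, th13, delta=0.0):
    """Build the 3x3 lepton mixing matrix in a standard PDG-like convention."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                        s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,  -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])
```

Unitarity and the recovery of sin²θ13 = |Ue3|² give quick consistency checks against the Table 1 values.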
In practice, however, we follow a reversed path, in that if we assume a 'real' orthogonal Z2-matrix (and hence symmetric with eigenvalues ±1) satisfying Eq. (6), then it commutes with Mν, and so both matrices can be simultaneously diagonalized. Quite often, the form of S is simpler than Mν, so one proceeds to solve the eigensystem problem for S, and find a 'real' orthogonal diagonalizing matrix Ũ. This matrix Ũ† can 'commonly' be identified with, or related simply to, the matrix V satisfying the 'Takagi' decomposition of Eq. (2)*. In this case, and in the flavor basis, the V_PMNS would be real and equal to U = R23(θ23)R13(θ13)R12(θ12). Determining the eigenvectors of the S matrices thus helps to determine the neutrino mixing angles. The µ-τ symmetry is defined when one of the two Z2-matrices corresponds to switching between the 2nd and the 3rd families. We have, up to a global irrelevant minus sign (see again Eq. 6), two choices, which would lead to two textures at the level of Mν. The Z2-symmetry matrix, expressing the exchange of the second and third families, is given by:

S− =
| 1  0  0 |
| 0  0  1 |
| 0  1  0 |

The invariance of Mν under S− (Eq. 6) forces the symmetric matrix Mν to have a texture of the form:

Mν =
| A  B  B |
| B  C  D |
| B  D  C |

Since S− and Mν commute, they have common eigenvectors. The normalized eigenvectors of S− are:

v1 = (1, 0, 0)^T,  v2 = (0, 1, 1)^T/√2,  v3 = (0, 1, −1)^T/√2,

corresponding respectively to the eigenvalues (1, 1, −1). Since the eigenvalue 1 is two-fold degenerate, there is still a freedom for a rotation by an angle ϕ in its eigenspace to get the eigenvectors:

v1(ϕ) = (cos ϕ, sin ϕ/√2, sin ϕ/√2)^T,  v2(ϕ) = (−sin ϕ, cos ϕ/√2, cos ϕ/√2)^T.

We have three choices as to how we order the eigenvectors forming the diagonalizing matrix U.
• Eigenvalues (1, −1, 1) The matrix U − which diagonalizes S − can be cast into the form: (12) * In fact, up to an irrelevant sign and dropping the trivial S = 1 (identity) case, one can restrict the study to an S with eigenvalues (−1, +1, +1). The eigenvector of S corresponding to the eigenvalue (−1) is an eigenvector of Mν , and the 2-dim eigenspace of S corresponding to the multiple eigenvalue (+1) is 'stable' under Mν . The restriction of the symmetric Mν to this eigenspace is symmetric (since Ũ is orthogonal and so Ũ −1 M Ũ is symmetric) and assumed to be diagonalizable by a 'real' rotation with no complex phases. Thus we end up with a real orthogonal matrix diagonalizing both S and Mν , having a free rotation angle defined by the Mν parameters.
In order to enforce U − to be a matrix which diagonalizes M ν as given in Eq.
(2), we need the free parameter ϕ to be expressed in terms of the mass parameters as follows, Comparing Eq. (12) with Eq.(4) we find that the µ-τ symmetry forces the following mixing angles: We can get, as phenomenology suggests, a small value for θ 13 assuming and then the mass spectrum turns out to be: Inverting these relations to express the mass parameters in terms of the mass eigenvalues, we get We can easily see using Eq.(16) that all mass spectra can be accommodated by properly adjusting the parameters A, B, C and D. To gain a better analytical understanding, we show that all kinds of possible mass hierarchies can be generated as follows: We see that this choice is not viable phenomenologically. Although the value of θ 23 is acceptable, corresponding to maximal atmospheric mixing, and although one can assume a hierarchy in the mass parameters to accommodate the small mixing angle θ 13 and account for the neutrino mass hierarchies, the vanishing value of the angle θ 12 is far from its experimental value ≃ 33°.
• Eigenvalues (−1, 1, 1): This choice leads again to a maximal mixing θ 23 = π/4, and an adjustable value for θ 13 = ϕ, but the value of θ 12 is predicted to be π/2, far from its experimental value ≃ 33°: • Eigenvalues (1, 1, −1): One can check that this choice will lead to a freely adjustable mixing angle θ 12 = π + ϕ and either to (θ 23 = −π/4, θ 13 = 0) or to (θ 23 = 3π/4, θ 13 = π), in that we have respectively: One might argue that this choice is viable phenomenologically; however, we shall not use the phase ambiguity to put all mixing angles in the first quadrant. Rather, we prefer to find a symmetry leading directly, dropping all the phases, to mixing angles in the first quadrant. This can be carried out in the second texture expressing the µ-τ symmetry, materialized through S + .
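Since the explicit matrices are elided in this extracted text, the following numerical sketch assumes the conventional forms of the two µ-τ Z 2 matrices (real, symmetric, orthogonal, with the eigenvalue sets quoted above); it checks that each squares to the identity and commutes with a mass matrix carrying the corresponding texture. The parameter values are arbitrary illustrations, not the paper's.

```python
import numpy as np

# Assumed conventional forms of the two mu-tau Z2 matrices:
# S_minus has eigenvalues (1, 1, -1), S_plus has (-1, -1, 1).
S_minus = np.array([[1., 0., 0.],
                    [0., 0., 1.],
                    [0., 1., 0.]])
S_plus = np.array([[-1., 0., 0.],
                   [0., 0., 1.],
                   [0., 1., 0.]])

# Both are real, symmetric, orthogonal Z2 elements: S^2 = 1.
assert np.allclose(S_minus @ S_minus, np.eye(3))
assert np.allclose(S_plus @ S_plus, np.eye(3))
assert np.allclose(np.sort(np.linalg.eigvalsh(S_minus)), [-1, 1, 1])
assert np.allclose(np.sort(np.linalg.eigvalsh(S_plus)), [-1, -1, 1])

# A symmetric M_nu obeying the S_plus texture (M12 = -M13, M22 = M33),
# with arbitrary illustrative parameters A, B, C, D:
A, B, C, D = 0.7, 0.3, 1.1, 0.4
M_nu = np.array([[A,  B, -B],
                 [B,  C,  D],
                 [-B, D,  C]])
# Form invariance S M S = M is equivalent to [S, M] = 0 when S^2 = 1.
assert np.allclose(S_plus @ M_nu @ S_plus, M_nu)
assert np.allclose(S_plus @ M_nu - M_nu @ S_plus, np.zeros((3, 3)))
```

The same check with S_minus and a texture obeying M12 = M13, M22 = M33 goes through identically.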
2.2
The µ-τ symmetry manifested through S + : (M ν 12 = −M ν 13 and M ν 22 = M ν 33 ) The Z 2 -symmetry matrix is given by: The invariance of M ν under S + (Eq. 6) forces the symmetric matrix M ν to have a texture of the form: As S + and M ν commute, they have common eigenvectors. The normalized eigenvectors of S + are: corresponding respectively to the eigenvalues {−1, −1, 1}. Since the eigenvalue −1 is two-fold degenerate, there is still a freedom for rotation by angle ϕ, and we define the eigenvectors v 1 (ϕ) and v 2 (ϕ) as in Eq. (11). We can, as in the last subsection, discuss different forms for the S + -diagonalizing matrix U + by shuffling through its columns and taking various values for ϕ, but to fix the ideas we fix the order of the eigenvalues as mentioned above, and cast U + in the form: Again, one can express the free parameter ϕ in terms of the neutrino mass matrix parameters by forcing U + to diagonalize M ν satisfying Eq. (2), so as to get: and the mass spectrum is given by: Comparing Eq.(25) with Eq.(4) we find that the µ − τ symmetry forces the following mixing angles: These predictions are phenomenologically viable, and furthermore do not need a special adjustment of the parameters A ν , B ν , C ν , D ν , which can be of the same order, in contrast to Eq. (15), and still accommodate the experimental value of θ 12 ≃ 33°. The various neutrino mass hierarchies can also be produced, as can easily be seen from Eq. (27) where the three masses are given in terms of four parameters. Again, for the sake of a better analytical understanding, we show that all kinds of possible hierarchies can be obtained as follows (with ϕ fixed around its phenomenologically acceptable value leading to s 2 (ii) Inverted hierarchy (m 2 > m 1 , m 3 ≪ m 2 , m 1 ). It is sufficient to have

3 The seesaw mechanism and the µ-τ symmetry

We impose now the µ − τ -symmetry, defined by the matrix S = S + , at the Lagrangian level within a model for the lepton sector.
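Before turning to the seesaw construction, the mixing predictions of the S + texture derived above can be checked in a small numerical sketch. It assumes the explicit S + -invariant form M ν 12 = −M ν 13 , M ν 22 = M ν 33 stated above; the parameter values are arbitrary.

```python
import numpy as np

# Illustrative texture parameters (an assumption; any values work).
A, B, C, D = 0.7, 0.3, 1.1, 0.4
M_nu = np.array([[A,  B, -B],
                 [B,  C,  D],
                 [-B, D,  C]])   # the S_plus-invariant texture

# The S_plus eigenvector (0, 1, 1)/sqrt(2) is automatically an
# eigenvector of M_nu, with eigenvalue C + D:
v3 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
assert np.allclose(M_nu @ v3, (C + D) * v3)

# Reading the mixing from this column of the diagonalizing matrix:
# U_e3 = sin(theta13) = 0 and tan(theta23) = |U_mu3 / U_tau3| = 1.
theta13 = np.arcsin(abs(v3[0]))
theta23 = np.arctan2(abs(v3[1]), abs(v3[2]))
assert np.isclose(theta13, 0.0) and np.isclose(theta23, np.pi / 4)

# theta12 remains free: it is set by the rotation angle phi in the
# degenerate eigenspace, i.e. by the parameters A, B and C - D.
```

This makes concrete why θ 23 = π/4 and θ 13 = 0 are forced while θ 12 stays adjustable: the third column of the mixing matrix is fixed by the symmetry alone.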
Then, we shall invoke the type-I see-saw mechanism to address the origin of the effective neutrino mass matrix, with consequences on leptogenesis. The procedure has already been done in [17] for other Z 2 -symmetries.
The charged lepton sector
We start with the part of the SM Lagrangian responsible for giving masses to the charged leptons: where the SM Higgs field φ and the right handed (RH) leptons ℓ c j are assumed to be singlet under S, whereas the left handed (LH) leptons transform in the fundamental representation of S: Invariance under S implies: and this forces the Yukawa couplings to have the form: which leads, when the Higgs field acquires a vev v, to a charged lepton squared mass matrix of the form: As the eigenvectors of M l M † l are 0, 1/ √ 2, 1/ √ 2 T with eigenvalue 2v 2 |a| 2 + |b| 2 + |c| 2 and 0, 1/ √ 2, −1/ √ 2 T and ( 1, 0, 0 ) T with a degenerate eigenvalue 0, the charged lepton mass hierarchy cannot be accommodated. Moreover, the nontrivial diagonalizing matrix, illustrated by non-canonical eigenvectors, means we are no longer in the flavor basis. To remedy this, we introduce SM-singlet scalar fields ∆ k coupled to the lepton LH doublets through the dimension-5 operator: This way of adding extra SM-singlets is preferred, in order to suppress flavor-changing neutral currents, over adding more Higgs fields. Also, we assume the ∆ k 's transform under S as: Invariance under S implies, and thus, when the fields ∆ k and the neutral component of the Higgs field φ • take vevs ( ∆ k = δ k , v = φ • ), we get a charged lepton mass matrix: In Ref. [17], a charged lepton matrix of exactly the same form was shown to represent the lepton mass matrix in the flavor basis with the right charged lepton mass hierarchies, assuming just the ratios of the magnitudes of the vectors comparable to the lepton mass ratios.
Neutrino mass hierarchies
The effective light LH neutrino mass matrix is generated through the seesaw mechanism formula where the Dirac neutrino mass matrix M D comes from the Yukawa term upon the Higgs field acquiring a vev, whereas the symmetric Majorana neutrino mass matrix M R comes from a term (C is the charge conjugation matrix) We assume the RH neutrino to transform under S as: and thus the S-invariance leads to This forces the following textures: where Λ R is a high energy scale characterizing the heavy RH Majorana neutrinos. The resulting effective matrix M ν will have the form of Eq. (24) with Constraints imposed on the mass parameters of (M D , M R ) can find their way through the seesaw formula to similar constraints on the mass parameters of M ν , which can be those stated in the preceding section. This helps to generate all neutrino mass hierarchies assuming their origin resides in the seesaw high energy domain as follows.
• Normal hierarchy: Assuming we find in accordance with Eq.(29).
• Inverted hierarchy: Assuming in accordance with Eq.(30), and one can arrange to have also A ν < 6B ν (while keeping A ν ≃ B ν ) in order to impose m 2 > m 1 .
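The statement that the S-invariance of M D and M R propagates through the seesaw formula to the S + texture of M ν can be checked numerically. The sketch below is an illustration only: the explicit forms of S, M D and M R are not reproduced in this text, so we assume the conventional S + matrix, assume the RH neutrinos transform with the same S (so both M D and M R obey S X S = X), and simply project random matrices onto that invariant subspace; the overall sign convention of the seesaw formula plays no role in the texture check.

```python
import numpy as np

S = np.array([[-1., 0., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])          # assumed explicit S_plus form

def proj(M):
    # Projection onto the S-invariant subspace: S M S = M.
    return 0.5 * (M + S @ M @ S)

rng = np.random.default_rng(0)
M_D = proj(rng.normal(size=(3, 3)))              # S-invariant Dirac matrix
M_R_raw = rng.normal(size=(3, 3))
M_R = proj(0.5 * (M_R_raw + M_R_raw.T))          # symmetric, S-invariant Majorana

# Type-I seesaw combination (sign convention immaterial here).
M_nu = M_D @ np.linalg.inv(M_R) @ M_D.T

# The effective matrix inherits the S_plus texture:
assert np.allclose(S @ M_nu @ S, M_nu)
assert np.isclose(M_nu[0, 1], -M_nu[0, 2])       # M12 = -M13
assert np.isclose(M_nu[1, 1], M_nu[2, 2])        # M22 = M33
```

The underlying reason is that S M D S = M D and S M R S = M R , with S² = 1, imply S M ν S = M ν for the seesaw combination, and that relation is exactly the S + texture.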
Leptogenesis
In this kind of model, the unitary matrix diagonalizing M R does not necessarily diagonalize M D . In fact, the Majorana and Dirac neutrino mass matrices have different forms dictated by the S-symmetry, and the angle ϕ in Eq.(26) depends on the corresponding mass parameters. This point is critical in generating lepton asymmetry, in contrast to other symmetries [17] where no freedom was left for the mixing angles, leading to the same form for M R and M D with identical diagonalizing matrices. This is important when computing the lepton asymmetry induced by the lightest RH neutrinos, since it involves explicitly the unitary matrix diagonalizing M R : where M̃ D is the Dirac neutrino mass matrix in the basis where the RH neutrinos are mass eigenstates.
where V R is the unitary matrix, defined up to a phase diagonal matrix, that diagonalizes the symmetric matrix M R , and F 0 is a phase diagonal matrix chosen such that the eigenvalues of M R are real and positive. In our case where the S-symmetry imposes a particular form on M R (Eq. 48), we can take V R as being the rotation matrix U + of Eq.(25) corresponding to , and θ 13 = 0.
As to the diagonal phase matrix, F 0 = diag e −iα1 , e −iα2 , e −iα3 , it can be chosen according to Eq.(27) to be (59) Note here that had the matrix V R diagonalized M D , which would have meant that N = V † R M D V R is diagonal, then we would have reached a diagonal M̃ † D M̃ D equaling a product of diagonal matrices, and no leptogenesis: In contrast, we get in our case: We deduce that by adjusting the phase difference (α 1 − α 2 ), one can generate enough lepton asymmetry, to be transformed later, via sphalerons, into the observed baryon/antibaryon asymmetry of the universe [17].
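The mechanism can be illustrated numerically: when M D and M R share the S-symmetry but are otherwise independent, the combination M̃ † D M̃ D in the RH-neutrino mass basis is generically non-diagonal, so the asymmetry need not vanish. The sketch below assumes real matrices and the explicit S + form used earlier; it is an illustration of this structural point, not the paper's asymmetry computation.

```python
import numpy as np

S = np.array([[-1., 0., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])          # assumed explicit S_plus form

def proj(M):
    # Projection onto matrices satisfying the invariance S M S = M.
    return 0.5 * (M + S @ M @ S)

rng = np.random.default_rng(1)
M_D = proj(rng.normal(size=(3, 3)))
M_R_raw = rng.normal(size=(3, 3))
M_R = proj(0.5 * (M_R_raw + M_R_raw.T))   # symmetric and S-invariant

# Real symmetric M_R is diagonalized by an orthogonal V_R ...
_, V_R = np.linalg.eigh(M_R)
Mt_D = M_D @ V_R        # Dirac matrix with RH neutrinos in their mass basis

# ... but V_R does not diagonalize M_D, so the combination entering the
# asymmetry formula keeps off-diagonal entries (within the degenerate
# S-eigenspace, where the two matrices pick different rotation angles).
H = Mt_D.T @ Mt_D
off_diag = H - np.diag(np.diag(H))
assert np.max(np.abs(off_diag)) > 1e-6
```

Had the same orthogonal matrix diagonalized both mass matrices, H would have been diagonal and the asymmetry would have vanished, as the text notes.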
Neutrino mixing
We saw that exact µ − τ -symmetry implied a vanishing value for the mixing angle θ 13 . Recent oscillation data pointing to a small but non-vanishing value for this angle suggest then a deviation from the exact symmetry texture in order to account for the observed mixing. We showed in [16] how "minimal" perturbed textures disentangling the effects of the perturbations can account for phenomenology. We shall consider now, within the scheme of type-I seesaw, a specific perturbed texture imposed on the Dirac neutrino mass matrix M D , parameterized by only one small parameter α, and show how it can resurface in the effective neutrino mass matrix M ν , which is known to be phenomenologically viable. We then compute the "perturbed" eigenmasses and mixing angles to first order in α, whereas we address the question of realizing the perturbed texture of M D in the next subsection. Thus, we assume a perturbed M D of the form The small parameter α affects only one condition defining the exact S-symmetry texture, and can be expressed as: Applying the seesaw formula Eq.(43) with M R given by Eq.(48) then we get: where M 0 ν is the 'unperturbed' effective neutrino mass matrix (corresponding to α = 0) and thus can be diagonalized by U + of Eq.(25) corresponding to the following angles, and θ 13 = 0.
We see that M ν has exactly the following form: where the perturbation parameter χ is given by: The two parameters χ and α are of the same order provided we do not have unnatural cancellations between the mass parameters of M D and M R . In order to compute the new eigenmasses and mixing angles of M ν , we write it in the following form, working only to first order in α: where the matrix M α is given as, and the non-vanishing entries of M α are found to be, Note here that M ν (1, 1) gets distorted by terms of order α and α 2 , but this will not "perturb" the relations defining µ-τ symmetry, which are expressed only through M ν (1, 2) , M ν (1, 3) , M ν (2, 2) and M ν (3, 3).
We seek now a unitary matrix Q diagonalizing M ν , and we write it in the form: where ε is an antihermitian matrix due to the unitarity of Q. Imposing the diagonalization condition on M ν , knowing that U + diagonalizes M 0 ν , we have: If we restrict to the real case for the matrix ε, then we get the condition: One can solve analytically for ε 1 , ε 2 , ε 3 to get: The new eigenmasses are given as: Computing now Q = U + (1 + ε) and comparing to Eq.(4), we find the new mixing angles These formulae show that the deviations from the mixing values predicted by the exact µ-τ symmetry depend on the perturbation parameter α. We assumed α real for these formulae, but one can extend the study to the complex case in order to investigate any effect on the phases. From this simple analysis, the parameter α (or equivalently χ), when it is real, is responsible for producing the correct mixing, while the phases of the RH Majorana neutrino fields are responsible for producing the lepton asymmetry. Introducing a complex-valued α (χ) can have an effect on the lepton asymmetry.
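The first-order (linear) dependence of the induced θ 13 on the perturbation can be seen numerically. The sketch below assumes, for illustration only, that the perturbation breaks just the relation M ν 12 = −M ν 13 (the paper's actual perturbed texture, Eq. (67), is not reproduced in this text), and checks that θ 13 vanishes in the symmetric limit and scales linearly with the small parameter.

```python
import numpy as np

A, B, C, D = 0.7, 0.3, 1.1, 0.4   # illustrative texture parameters (assumption)

def theta13(chi):
    # Break only the M12 = -M13 relation by a small amount chi, as a
    # stand-in for the one-parameter deformation discussed in the text.
    M = np.array([[A, B * (1 + chi), -B],
                  [B * (1 + chi), C, D],
                  [-B, D, C]])
    _, U = np.linalg.eigh(M)
    # Pick the eigenvector that continuously deforms (0, 1, 1)/sqrt(2),
    # i.e. the third column of the unperturbed mixing matrix.
    v3 = np.array([0., 1., 1.]) / np.sqrt(2)
    col = U[:, np.argmax(np.abs(v3 @ U))]
    return np.arcsin(abs(col[0]))

# theta13 vanishes in the symmetric limit and grows linearly in chi:
assert np.isclose(theta13(0.0), 0.0)
ratio = theta13(2e-3) / theta13(1e-3)
assert abs(ratio - 2.0) < 0.02
```

Doubling the perturbation doubles θ 13 to within the expected second-order corrections, consistent with the first-order formulae above.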
Realization of perturbed texture
As we saw, perturbed textures are needed in order to account for phenomenology. We have two ways to seek models for achieving these perturbations. The first method consists of introducing a term in the Lagrangian which breaks the symmetry explicitly [19], and then expressing the new perturbed texture in terms of this breaking term. The second method is to keep assuming the exact symmetry, but then break it spontaneously by introducing new matter and enlarging the symmetry. We follow here the second approach in order to find a realization of the forms given in Eq.(63) for M D and in Eq.(48) for M R , while ensuring that we work in the flavor basis. However, for the sake of minimal added matter, we shall not force the most general forms of M R and M D , but rather be content with special forms of them leading to an effective mass matrix M ν of the desired perturbed texture (Eq. 67). In [16] a realization was given for a perturbed texture corresponding to the S − -symmetry, whereas here we treat the more phenomenologically motivated S + -symmetry (we shall drop henceforth the + suffix). We present two ways, not meant in any way to be restrictive but rather to be looked at as proof-of-existence tools, to get the three required conditions of a "perturbed" M D , a non-perturbed M R and a diagonal M l M † l . Both ways add new matter, but whereas the first approach adds just a (Z 2 ) 2 factor to the S−symmetry while requiring some Yukawa couplings to vanish, the second approach enlarges the symmetry to S × Z 8 but without the need to equate Yukawa couplings to zero by hand. Some "form invariance" relations are in order: We denote L t = (L 1 , L 2 , L 3 ), where the L i 's (i = 1, 2, 3) are the components of the i th -family LH lepton doublets (we shall adopt this notation of 'vectors' in flavor space even for other fields, like l c the RH charged lepton singlets, ν R the RH neutrinos, . . .).
flavor symmetry • Matter content and symmetry transformations
We have three SM-like Higgs doublets (φ i , i = 1, 2, 3) which would give mass to the charged leptons and another three Higgs doublets (φ ′ i , i = 1, 2, 3) for the Dirac neutrino mass matrix. All the fields are invariant under Z ′ 2 except the fields φ ′ and ν R , which are multiplied by −1; this ensures that neither φ can contribute to M D nor φ ′ to M l . The field transformations are as follows.
• Charged lepton mass matrix-flavor basis The Lagrangian responsible for M l is given by: The transformations under S and Z 2 , with the "form invariance" relations Eqs. (78-81), lead to: where f j ik is the (i, k) th -entry of the matrix f (j) . Assuming (v 3 ≫ v 1 , v 2 ) we get: where B = 0, 0, −B 3 T , D = D 1 , D 2 , 0 T and C = C 1 , C 2 , 0 T , and where the dot product Under the reasonable assumption that the magnitudes of the Yukawa couplings come in ratios proportional to the lepton mass ratios as |B| : |C| : |D| ∼ m e : m µ : m τ , one can show, as was done in [16], that the diagonalization of the charged lepton mass matrix can be achieved by infinitesimally rotating the LH charged lepton fields, which justifies working in the flavor basis to a good approximation.
• Majorana neutrino mass matrix
The mass term is directly present in the Lagrangian The invariance under Z ′ 2 is trivially satisfied while the one under S × Z 2 is more involved. The symmetry S constrains M R to satisfy whereas the restrictions due to Z 2 are imprinted in the bilinear of ν Ri ν Rj determining their transformations under Z 2 as: which means: Thus the symmetry through Eqs. (78,90,91) entails that M R would assume the following form, which is of the general form (Eq. 48) with B R = 0.
• Dirac neutrino mass matrix The Lagrangian responsible for the neutrino mass matrix is Because of the field transformations under S and Z 2 we get: where g (k) is the matrix whose (i, j) th -entry is the Yukawa coupling g k ij . Then, the "form invariance" relations (Eqs.78-81) lead to: Upon acquiring vevs (v ′ i , i = 1, 2, 3) for the Higgs fields (φ ′ i ), we get for the Dirac neutrino mass matrix the form: which can be put into the form, If the vevs satisfy v ′ 3 ≪ v ′ 2 and the Yukawa couplings are of the same order, then we get perturbative parameters α, β ≪ 1.
The deformations appearing in the Dirac mass matrix as described in Eqs.(97-99) would resurface in the effective light neutrino mass matrix M ν through the seesaw formula (Eq.43) with M R given in Eq.(93). The resulting deformations in M ν can be described by two parameters: One can now repeat the analysis of the last subsection in order to compute χ, ξ in terms of α, β and other mass parameters to get: We note here that we do not, in general, get the desired pattern (Eq. 67) corresponding to disentanglement of the perturbations (ξ = 0). However, for specific choices of Yukawa couplings, e.g. E 3 = 0 leading to β = 0 and hence ξ = 0, we get this form, in which case M D is of the form (Eq.63) and χ of Eq.(101) would also be given by Eq. (68) with B R = 0.
Note here that we have the following transformation rule for φ̃ ′ ≡ iσ 2 φ ′ * : • Charged lepton mass matrix-flavor basis The symmetry restriction in constructing the charged lepton mass Lagrangian as given by Eq. (86) is similar to what is obtained in the case of (S × Z 2 × Z ′ 2 ). The similarity originates from the fact that the charges assigned to the fields (L, l c , φ) corresponding to the factor Z 2 (of S × Z 2 × Z ′ 2 ) and that of Z 8 (of S × Z 8 ) are the same. Thus we end up, assuming again a hierarchy in the vevs of the Higgs fields φ (v 3 ≫ v 2 , v 1 ), with a charged lepton mass matrix adjustable to be approximately in the flavor basis. Note also here that the symmetry forbids the term L i φ ′ k l c j since we have: • Dirac neutrino mass matrix The Lagrangian responsible for the Dirac neutrino mass matrix is given by Eq. (94). By means of the field transformations we have: where g (k) is the matrix whose (i, j) th -entry is the Yukawa coupling g k ij . Then, the "form invariance" relations impose the following forms: When the Higgs fields (φ ′ i ) get vevs (v ′ i , i = 1, 2, 3, 4), we obtain: which is of the form of Eq.(63) with E D = 0: where If the vevs satisfy v ′ 4 ≪ v ′ 2 and the Yukawa couplings are of the same order, then we get a perturbative parameter α ≪ 1.
• Majorana neutrino mass matrix
The mass term is generated from the Lagrangian Under Z 8 we have the bilinear: ν Ri ν Rj ∼ Z8 (ω 2 , ω 4 , ω 4 ; ω 4 , ω 6 , ω 6 ; ω 4 , ω 6 , ω 6 ), Eq.105 =⇒ (114) If we call h (k) the matrix whose (i, j) th -entry is the coupling h k ij , then we have (the cross sign denotes a non-vanishing entry): Then the "form invariance" relations lead to: S t h (k) S = h (k) , Eqs.78,115 =⇒ Thus when the Higgs singlets ∆ acquire vevs ∆ 0 1 , ∆ 0 2 , we get the following form for M R , which is of the form of Eq.(48) with B R = 0. The analysis of the last subsection then shows that the deformation α in M D resurfaces as a 'sole' perturbation χ in M ν , which gets the desired form of Eq.(67) with χ given by Eq.(68) after putting B R = E D = 0.
Before ending this section, we would like to mention that having multiple Higgs doublets in our constructions might display flavor-changing neutral currents. However, the effects are calculable and in principle one can adjust the Yukawa couplings so as to suppress processes like µ → eγ [21]. Moreover, the constructions are carried out at the seesaw high scale, but the RG running effects are expected to be small when multiple Higgs doublets are present, and so we expect the predictions of the symmetry to still be valid at low scales.
Discussion and summary
We studied the properties of the Z 2 symmetry behind the µ − τ neutrino universality. We singled out the texture (S + ) which naturally imposes a maximal atmospheric mixing θ 23 = π/4 and a vanishing θ 13 . The remaining mixing angle θ 12 remains free, and the other Z 2 necessary to characterize the neutrino mass matrix can be used to fix it at its experimentally measured value (≃ 33°). We showed how the S + -texture accommodates all the neutrino mass hierarchies. Later, we implemented the S + -symmetry in the whole lepton sector, and showed how it can accommodate the charged lepton mass hierarchies with small mixing angles of the order of the 'acute' charged lepton mass hierarchies. We computed, within type-I seesaw, the lepton asymmetry generated by the symmetry and found that the phases of the RH Majorana fields may be adjusted to produce enough lepton asymmetry. The fact that the µ-τ symmetry does not fully determine the mixing angles, but leaves θ 12 as a free parameter able to take different values in M R and M D , is crucial for obtaining leptogenesis within type-I seesaw scenarios. We also found that "real-valued" perturbations on the Dirac neutrino mass matrix can account for the correct neutrino mixing angles, while introducing "complex-valued" perturbations on M D can have an effect on the lepton asymmetry as well. Finally, we presented a theoretical realization of the perturbed Dirac mass matrix, where the symmetry is broken spontaneously and the perturbation parameter originates from ratios of the vevs of different Higgs fields.
|
2015-05-21T18:14:38.000Z
|
2014-08-21T00:00:00.000
|
{
"year": 2014,
"sha1": "0382b92109746686bdad9218f5e204afc064e491",
"oa_license": null,
"oa_url": "http://www.archipel.uqam.ca/8308/1/Lashin_et_al_PhysReviewD_2015_91_113014.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0382b92109746686bdad9218f5e204afc064e491",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
14299371
|
pes2o/s2orc
|
v3-fos-license
|
Nondialytic Therapy for Elderly Patients in a Critical Care Setting
It is frequently necessary to admit critically ill elderly patients to intensive care units (ICUs) due to their physiological impairments and co-morbidities. Several life-sustaining therapies such as mechanical ventilation are performed as necessary treatment in these ICUs. Sometimes renal replacement therapy (i.e. dialysis) is considered for elderly patients with complicating serious renal insufficiency. However, although the necessity for dialysis is recognized, some elderly patients may not benefit from this care because of their limited life expectancy. Until recently, life-sustaining support for critically ill elderly patients in Japan has been used routinely, regardless of the medical futility. The issue of providing better end-of-life care for elderly patients even in the ICU is now being raised frequently. We therefore wish to highlight the issue of end-of-life care and decision-making in the ICU, focusing on nondialytic therapy (NDT). The aim of this article was to assess whether NDT is an acceptable optional care for critically ill elderly patients with serious kidney diseases, even in the ICU. We hope our experiences may be helpful to physicians with an interest in decision-making and end-of-life care.
Introduction
Like many other countries, the elderly population in Japan is increasing [1]. As a result, physicians need to diagnose and manage more elderly patients. It is frequently necessary to admit elderly patients with critical illnesses to intensive care units (ICUs) due to their physiological impairments and co-morbidities [2]. In that setting, several life-sustaining therapies such as mechanical ventilation are performed as a necessary treatment. Sometimes renal replacement therapy (i.e. dialysis) is considered for elderly patients with complicating serious renal insufficiency. However, although we recognize the necessity for dialysis, there are some elderly patients who may not benefit from this care as a result of their limited life expectancy [3]. As nephrologists, we would like to provide optional comfort care for such patients. In that context, we focused in this article on nondialytic therapy (NDT) as a better end-of-life care for patients [4]. The aim was to assess whether NDT is an acceptable optional care for critically ill elderly patients with serious kidney diseases, even in the ICU. The following is a case report of a patient with these characteristics.
Case Presentation
An 86-year-old man with chronic heart failure and atrial fibrillation was followed by a cardiologist as an outpatient and subsequently admitted to the ICU at our hospital because of septic shock associated with a urinary tract infection. His Charlson comorbidity score was 7 [5]. After admission, he suffered from respiratory distress and required intubation for mechanical ventilation. Initially, his cardiologist, who was the attending doctor, preferred to use advanced care for the patient. However, the patient was unable to agree to this option because of his impaired level of consciousness, and therefore the doctor obtained consent from the patient's family to initiate life-sustaining therapy. As a consequence of this intensive care, which included antibiotic therapy and management of mechanical ventilation, the physical status of the patient recovered temporarily. However, on the 36th hospital day, the doctor confirmed a further reduction in his level of consciousness, and magnetic resonance imaging showed that a brain infarction had occurred. The patient's family was therefore informed of his poor prognosis. On the 48th hospital day, the patient developed acute panperitonitis caused by a gastrointestinal perforation. A surgeon proposed either an operation or conservative care for this condition to the family, who subsequently selected conservative management.
After 49 hospital days, the patient's physical condition worsened, with the development of oliguria and progression of renal insufficiency. The attending doctor consulted us to discuss renal replacement therapy. After a careful diagnosis of the patient, we concluded that hemodialysis (HD) should be started immediately. However, his systolic blood pressure was barely maintained at approximately 80 mm Hg by the administration of a continuous intravenous vasopressor. Moreover, sepsis became uncontrollable, associated with multiple organ failure, indicating that the patient was in the final stage of a life-limiting illness. A marked thrombocytopenia due to sepsis was also detected. As a consequence, it was considered that the insertion of the dialysis catheter for the initiation of HD may have resulted in uncontrollable bleeding. We therefore discussed with other cardiologists, nephrologists, and nurses in the ICU whether or not HD was a suitable care. We concluded that dialytic therapy would not be effective and that we could not practice HD safely because of the patient's poor condition. We also recognized that conservative or palliative care was more acceptable as end-of-life care for the patient.
Under such circumstances, we explained to the patient's family that HD would not contribute directly to the recovery from sepsis and that it may also be a further burden for him. In addition, we also proposed comfort or palliative care (i.e. NDT) as an optional end-of-life treatment. However, the family became disturbed and withheld their decision at the first meeting, although one day later they agreed to NDT. We found that the family's preference for end-of-life care changed considerably during this difficult period for the patient.
Two days after obtaining consensus for this treatment, the patient died peacefully surrounded by his family in the ICU. Fortunately, the family confirmed that they were satisfied with our care.
Discussion
Until recently, life-sustaining support for critically ill elderly patients in Japan has been used routinely, regardless of the medical futility [6]. The issues regarding end-of-life care for elderly patients even in the ICU are currently being raised with greater frequency [7,8]. We therefore wish to highlight the issue of end-of-life care and decision-making in the ICU focusing on NDT.
Several studies have referred to NDT as an optional conservative management for patients with end-stage renal disease. Nephrologists need to manage life-threatening symptoms associated with uremia such as respiratory distress, pain, nausea, and sleep disturbances caused by medication used in NDT including opioids. Therefore, NDT can also be classified as a type of palliative care [4,9].
The concept of NDT has been advanced as a result of respecting the patients' decisions regarding end-of-life care. Nevertheless, while the appropriateness of NDT for critically ill elderly patients in the ICU is debated, it is rarely practiced [10]. We suggest that the concept of NDT in the ICU should possibly be applied to the care of critically ill elderly patients with severe kidney diseases in order to provide better end-of-life care [11]. While NDT for elderly patients with stable end-stage renal disease is usually based on comprehensive discussions to obtain the informed consent of patients or their families regarding the will to live [3], it may be practiced under different situations for critically ill elderly patients with severe kidney diseases in the ICU setting.
First, we need to assess whether HD is merited in critically ill patients with a limited life expectancy. Second, under these circumstances, it can be difficult to confirm the patients' preference for care, especially end-of-life care, because of their impaired level of consciousness or the use of mechanical ventilation. Furthermore, it may be considerably more important that available guidelines are followed when practicing NDT.
If possible, we attempt to complete the process of shared decision-making on NDT by referring to the recommended guidelines [12]. As a consequence, we practice 'family-centered decision-making at the end of life', a concept that is preferred in Japanese society [13]. We would like to emphasize that even after being resuscitated, both patients and their families have a second chance to consider or refine their preference for further care. In our opinion, we also need to avoid over-treatment that may prolong the suffering of patients.
Recently, patient-centered medicine has been widely reported [14]. We propose that NDT in ICUs based on sufficient shared decision-making may also be consistent with this concept. However, according to a survey, both physicians and patients in Japan are still reluctant to discuss issues of end-of-life care, although there exist guidelines supporting this [15][16][17]. We expect that education on end-of-life care for both health care providers and the public will be promoted [18].
Although we would like to discuss the preferable aspects of patient-centered medicine, it is also necessary to refer to the disadvantage of this care [19]. In a situation where patients are unable to state their preference for advanced care, as described in our case report, it is not possible to exclude the possibility that the treatment strategy may be influenced by the subjective assessments of health care providers. As a result, in some cases the decision-making may be contrary to the patients' wishes. In contrast, disease-based medicine can provide standardized evidence-based management for patients, although it is important to be aware of its limitations. In clinical practice, we observed that treatment focusing on disease control does not necessarily contribute to a better management of life-threatening symptoms such as respiratory distress or continuous pain. In cases of severely ill elderly patients with such symptoms and a limited life expectancy, we consider it necessary to prepare a palliative care plan as optional treatment.
In our opinion, it is necessary to maintain a balance between disease-based and patient-centered care, especially in severely ill elderly patients. Therefore, initially it is important to take sufficient time to assess the disease from a medical perspective with other health care providers such as doctors or nurses before withdrawing intensive care. Following this assessment, a plan of palliative care needs to be prepared based on the informed consent of the patients or their families.
Currently there is only limited evidence and established knowledge to support treatment decisions in patient-centered medicine. We therefore recommend that studies investigating the objective assessment of the decision-making process in patient-centered care should be conducted in the near future.
In conclusion, although the concept of patient-centered medicine has limitations, our results show that NDT may be worth considering for critically ill elderly patients with serious kidney diseases as an optional end-of-life care treatment, even in the ICU. We hope that our experiences are helpful to physicians who are interested in the practice of decision-making and end-of-life care.
'Mental Health Study': The Rural Population Fares Better than the Medical Staffs & Students of the Area; A Cross-Sectional Analysis
Badri Narayan Mishra, Mudit Kumar Gupta
1 Professor, Community Medicine, Ruxmaniben Deepchand Gardi Medical College, Ujjain, Madhya Pradesh, India. 2 Ex-Intern, PIMS, Loni, Maharashtra, India.
DOI: https://doi.org/10.24321/2394.6539.202001
Mental health is a perennially neglected domain; even more so is its comparative evaluation across different accessible strata. Aim: The reported cross-sectional study aimed at enumerating and comparing different stress factors amongst teachers, students, clerical staff and local residents in a rural medical university and its field practice area. Result: A total of 400 participants were studied over a period of 3 months, belonging to four groups of equal size (50 per subgroup). Of these, 221 (55.25%) recorded mild stress and 10 (2.5%) recorded stress scores needing urgent intervention. Among students, new entrants (< 3 years of residential experience) had a higher stress prevalence at 36 (72%) compared to their seniors at 27 (54%) with > 3 years of residential experience. Among faculty, 38 (76%) recorded moderately high stress scores irrespective of their campus residency. For clerical and paramedical staff, duration of residence was a determinant of stress: moderate and severe stress was apparent in the less-than-3-years resident category at 6 (12%) and 1 (2%), compared to 1 (2%) and 0 (0%) in the over-3-years group. The natives of the rural area experienced the lowest stress levels, with an average stress score of 169.44 for males and 170.42 for females, below the cut-off value of 178. Conclusion: Rural residency with nativity status, and longer duration of stay for the working and student classes, were associated with lower mental stress levels, whereas new students and faculty of the health university experienced higher scores. The results point to the need for workplace intervention strategies for stress management.
Introduction
Out of the four major components of health, i.e. physical, social, mental and spiritual, the last two have faced isolation over the last couple of centuries. 1,2 The National Mental Health Survey of India 2015-2016 found that "every sixth Indian needs mental health help", whereas the WHO (World Health Organization) in a recent report puts this figure at 7.5%. 3,4 As per a recent study reported in Lancet Psychiatry (2020), in 2017 India had 197.3 million (95% UI 178.4-216.4) people with mental disorders, including 45.7 million (42.4-49.8) with depressive disorders and 44.9 million (41.2-48.9) with anxiety disorders. 5 Furthermore, mental disorders were the second leading cause of disease burden in terms of Years Lived with Disability (YLDs) and the sixth leading cause of Disability-Adjusted Life-Years (DALYs) in the world in 2017, posing a serious challenge to health systems, particularly in low-income and middle-income countries. 6 Stress is the condition that results when person-environment transactions lead the individual to perceive a discrepancy, whether real or not, between the demands of a situation and the resources of the person's biological, psychological or social systems. In medical terms, stress is the disruption of homeostasis through physical or psychological stimuli. Stressful stimuli can be mental, physiological, anatomical or physical. 7 In the present study we evaluated different stress factors and the levels of stress experienced among the faculty, staff and students of a rural medical university situated in Western Maharashtra, and the permanent residents of its practice area.
Aim
The study aimed at assessing the prevalence and comparing the causes of mental health stressors in different study groups.
Objectives
• Assessment of the prevalence of anxiety and depression among different study groups.
• To find out their different contributors and associate them with socio-economic, academic, and health factors.
• To suggest measures to improve the situation.
Methodology
A pilot-tested cross-sectional study was carried out under the stewardship of an ICMR-STS project bearing reference no. -21/509/08-BMS, over a period of 3 months in a medical university in Western Maharashtra and its adjacent villages. A total of 400 consenting participants were randomly selected from the study population. For balanced representation, 100 participants were selected from each of the major groups, with equal numbers (50 each) in their subgroups. The major groups were students, faculty, staff and rural residents; their subgroups were duration of residence on campus (< or > 3 years) for university residents, and males and females for local village residents.
A self-administered questionnaire consisting of 4 components, scored on a Likert scale of 1 to 4 (1 = never, 2 = rarely, 3 = sometimes, 4 = often), was used to assess general socio-demographic factors, behavioural components, physical signs and stress-prone characteristics. The mean of the total score, which followed a normal Gaussian distribution, was calculated. Bands of ±1, ±2 and ±3 standard deviations (SD) were used to grade the severity of mental stress: normal within mean ± 1 SD, borderline between ±1 and ±2 SD, moderate between ±2 and ±3 SD, and severe beyond ±3 SD. Inferential data analysis was done by Z test, through calculation of the standard error SE(p1 − p2) of the difference between two proportions.
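The SD-band grading and the two-proportion Z test described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are ours, and we assume the conventional pooled-variance formula for the standard error of a difference between proportions.

```python
import math

def grade_scores(scores):
    """Grade each stress score into severity bands around the sample mean.

    Bands follow the paper's scheme: normal within mean +/- 1 SD,
    borderline within +/- 2 SD, moderate within +/- 3 SD, severe beyond.
    """
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    grades = []
    for x in scores:
        d = abs(x - mean)
        if d <= sd:
            grades.append("normal")
        elif d <= 2 * sd:
            grades.append("borderline")
        elif d <= 3 * sd:
            grades.append("moderate")
        else:
            grades.append("severe")
    return grades

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for comparing two proportions, using the pooled SE."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

As a usage example, comparing the 36/50 moderately stressed junior students with the 27/50 seniors gives z ≈ 1.86; the larger Z values reported in the Discussion arise from comparisons between more widely separated groups.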
Result
The participants' scores ranged from 76 to 304, with a mean of 177.5. Out of 400 participants, 221 (55.25%) recorded mild stress and 10 (2.5%) recorded stress scores needing urgent intervention.
The breakdown of the different grades of stress scores for the studied participants is depicted in Table 1.
On subgroup analysis, 27 (54%) of students with more than 3 years of campus residency showed moderate stress levels, in comparison to 36 (72%) with moderate stress scores among students resident for < 3 years. Among faculty with campus residency over 3 years, 38 (76%) reported moderate stress scores and only 12 (24%) had normal score levels. The detailed breakdown of the subgroup analysis for teachers with over 3 years of residency is presented in Figure 1.
Discussion
A Z-test value of 4.2899, significant at the 1% level (p < 0.01), suggests that students were under greater stress and may be more prone to stress-related diseases than the natives of the rural area. Similar observations were made by other researchers. [8][9][10][11][12] A highly significant difference between the stress levels of teachers (doctors) and rural natives was apparent from the Z-value of 6.583 (p < 0.01). Similar observations were documented by other studies, where doctors emerged as one of the most stressed groups. [13][14][15][16][17] On comparison of mild (grade 2) stress scores of teachers and students, the Z-value was 2.0155 (p < 0.05), implying a significant difference between the stress levels of teachers and students, and hence putting them at increased risk of mental stress and its associated conditions.
High levels of stress among teachers of both study groups were reported; inter-group evaluation showed no significant variation. Studies indicating doctors and teaching professionals as highly stressed groups are aplenty. [18][19][20][21][22] High levels of severe and very severe stress in recently employed (< 3 years) paramedical and clerical staff of the institute may be due to change of place, a new working environment, and difficulty in cultural and work adjustments. Similar observations were recorded by other researchers, identifying recent employment and change of place as stress precipitators. [23][24][25] Though the average stress scores for both males and females among local natives were below the mean (cut-off value = 178), the females recorded a slightly higher average than the males, putting them at greater risk of developing stress and its related disorders. 20-22
Conclusion
There is an increased level of mental stress in individuals who have been away from family for less than 3 years, as they are still in a phase of adaptation regardless of age, gender and profession. Depressive symptoms and perceived stress are public health concerns, more so among faculty, doctors, students and paramedics. Stress at work is also associated with a very high educational level and high occupational position. Common stressors documented were change of place, work environment, difficult people, daily hassles, medical illness and discrimination.
Employers should take note of these findings and draft policies incorporating a better work culture, scope for recreation, and enhancement of interpersonal relationships. Employees and seniors should also take the initiative in creating a workplace conducive for new entrants. A mentally healthy workforce and student community can bring stability to the institute and help build its reputation.
Figure 1. Bar diagram showing stress scores of teachers residing over 3 years in campus
Unlike the case with students, there were no differences in stress category for faculty with respect to residency. Both groups showed equal percentages in the different sub-grades of mental stress, i.e. 2 (4%) in the normal grade, 10 (20%) at borderline and 38 (76%) with mild stress, with an average stress score of 182.28, which is above the cut-off value of 178.
‘Tiny Iceland’ preparing for Ebola in a globalized world
ABSTRACT Background: The Ebola epidemic in West Africa caused global fear and stirred up worldwide preparedness activities in countries sharing borders with those affected, and in geographically far-away countries such as Iceland. Objective: To describe and analyse Ebola preparedness activities within the Icelandic healthcare system, and to explore the perspectives and experiences of managers and frontline health workers. Methods: A qualitative case study, based on semi-structured interviews with 21 staff members in the national Ebola Treatment Team, Emergency Room at Landspitali University Hospital, and managers of the response team. Results: Contextual factors such as culture and demography influenced preparedness, and contributed to the positive state of mind of participants, and ingenuity in using available resources for preparedness. While participants believed they were ready to take on the task of Ebola, they also had doubts about the chances of Ebola ever reaching Iceland. Yet, factors such as fear of Ebola and the perceived stigma associated with caring for a potentially infected Ebola patient, influenced the preparation process and resulted in plans for specific precautions by staff to secure the safety of their families. There were also concerns about the teamwork and lack of commitment by some during training. Being a ‘tiny’ nation was seen as both an asset and a weakness in the preparation process. Honest information sharing and scenario-based training contributed to increased confidence amongst participants in the response plans. Conclusions: Communication and training were important for preparedness of health staff in Iceland, in order to receive, admit, and treat a patient suspected of having Ebola, while doubts prevailed on staff capacity to properly do so. 
For optimal preparedness, likely scenarios for future global security health threats need to be repeatedly enacted, and areas plagued by poverty and fragile healthcare systems require global support.
Global health; prevention and control; public policy; qualitative evaluation; emergency responders; communicable diseases; emerging; fear Background On 8 August 2014, the World Health Organization declared the Ebola epidemic in West Africa as a Public Health Emergency of International Concern (PHEIC) under the International Health Regulations (IHR) [1]. All three of the worst affected countries were to address the emerging epidemic challenge without staff, stuff, space and systems [2][3][4]. With the epidemic seemingly out of control, and a proportionately high number of doctors, nurses, and midwives succumbing to Ebola [5], there was a growing fear of transmission beyond the region. In breach of WHO recommendations and guidelines [6], flights were cancelled and cross-border movement curtailed [7]. The epidemic caused public concern outside West Africa [8], as fear and racism found fertile ground [9][10][11], and in an effort to stop the international spread of the disease, all states were advised to be prepared to detect, investigate, and manage Ebola cases [1].
Preparedness as part of disaster risk reduction is defined as 'the knowledge and capacities developed by governments, response and recovery organizations, communities and individuals to effectively anticipate, respond to, and recover from the impacts of likely, imminent or current disasters' [12]. Yet, preparedness is also enveloped in and influenced by the socio-cultural dimension at the individual, organizational, and national levels, and measures to manage outbreaks are not always accepted or accommodated by the communities to which they are applied [13]. An analysis of eight European countries' preparedness plans since 2009 for countering a future influenza A (H1N1) pandemic revealed that the way plans were framed varied considerably, and '[told] us something about how the different countries want pandemics and preparedness to be understood by the public' [14]. More research was encouraged into cultural and social structures in the respective countries.
In Iceland, information about the Ebola epidemic in West Africa came from several sources. The Directorate of Health (DH) first reported on the epidemic on 8 April 2014 [15]. In Icelandic media, the rapid progress of the Ebola epidemic in West Africa was increasingly highlighted, and exported Ebola cases to Spain, the USA, and elsewhere were widely covered. Fear of a global epidemic was rife, and in media and online discussions, doubts were raised about the Icelandic health system's capacity to take care of a patient with Ebola [16][17][18], despite its ranking as one of the best in the world [16].
On 11 August 2014, three days after WHO declared PHEIC because of Ebola, DH encouraged Icelandic citizens to avoid visits to the area, if possible, and reported that the national epidemic preparedness plan was being activated for Ebola [19]. It was elaborated by a team that involved the Chief Epidemiologist at the DH, Landspitali University Hospital (LSH), the Department of Civil Protection and Emergency Management (DCPEM), and the seven Primary Healthcare Regional Organizations in the country at the time. Key external partners were the European Centre for Disease Prevention and Control (ECDC) and WHO, in addition to Nordic collaborators in epidemic preparedness [20]. At the same time, it was regarded as highly unlikely that Ebola Virus Disease (EVD) would spread in the country [21]. Recognized scenarios included the possible appearance of an infected person in need of treatment, who could be either an Icelandic citizen who had visited or worked in one of the affected West African countries, or a person with signs of EVD on a trans-Atlantic flight in the navigation area controlled by Icelandic authorities [22][23][24][25]. On 3 November 2014, the plan was put to the test when a foreign airline made a non-scheduled landing at Keflavík International Airport due to fear of EVD in one passenger from South Africa. Parked in a closed-off area, a physician in full Personal Protective Equipment (PPE) entered the plane, but quickly ruled out Ebola [26].
Irrespective of good or bad overall performance, health systems are tested in times of crisis, such as epidemics. Here, the aim is to describe and analyse the process of establishing preparedness plans for Ebola in Iceland, with a specific focus on the perspectives and experiences of managers and frontline health workers involved in the process.
Methods
This study is part of a larger study on the impact that the global threat of the Ebola epidemic had in Iceland [16,27]. Qualitative case study methodology was applied, perceiving the preparedness planning and training process as the case with clear boundaries of the initiation, process, and wrap-up of preparedness planning and training. The study was conducted in April-May 2016, and the interviewed participants were administrators and frontline health professionals central to the case, so as to explore their perspectives and experiences concerning Ebola preparedness [28,29]. Staff in managerial positions were contacted by one of the authors (GG) for permission to interview them based on their role in the preparedness plan. To identify potential interviewees in the Ebola Treatment Team (ETT), the director of the team listed relevant email contacts. Those who responded positively were subsequently invited for an interview, conducted in Icelandic by one of the authors (ÍEH), a physiotherapist. In case interviewees suggested other potential participants, they were invited through email to participate. A similar methodology was applied to identify participants from the Emergency Room (ER). They were included in order to represent frontline health workers who worked in the only ER in Reykjavík, where persons exposed to EVD were most likely to first seek care in case of acute illness.
Three separate interview guides were developed, one each for managers, ETT, and ER respectively (see supplementary material). The interviews included open questions probing the role of their institution in preparedness, the experience of the training process, challenges encountered or expected, and any dilemmas that they may have experienced in relation to the preparedness plan. The recruitment of participants was concluded when saturation was reached. Each interview was recorded and took about 20 to 60 minutes; the interviews were then transcribed and analysed using thematic analysis. The data material was read through repeatedly, sorted, and categorized, based on the participants' priorities in the representation of their views. From this exercise, three broad themes were inductively identified that corresponded to critical perspectives introduced by the participants.
Permission to conduct the study was granted by Iceland's National Bioethics Committee (VSN-) and Landspitali University Hospital (LSH 13-16, 4 February 2016). Reporting on the results was guided by the COREQ guidelines [30]; however, to ensure anonymity of the respondents within the small community of staff who took part in the preparedness activities, participant information is not associated with quotations.
Theme 1: getting the job done
The Icelandic Ebola Preparedness Plan included the establishment of an ETT within LSH [31], and the preparatory activities engaged more than two hundred staff across all of its departments. The ETT consisted of about 50 healthcare professionals who had volunteered to participate, including 11 doctors and 28 nurses, a few laboratory technicians, radiologists, and auxiliary nurses. They attended special training sessions focused on protocols for admission and treatment of a patient with EVD, the donning/doffing of PPE, and personal protective measures during patient care. A new provisory unit was designed to be set up on the ground floor to minimize the risk of infection spreading to other units within the hospital, with two rooms specifically identified for the care of a patient with EVD [31].
Managers' accounts of this period elaborated the complexity of preparedness planning in terms of the involved institutions, actors, procedures and requirements of the plan. One manager concluded: You get no discount. You can never go the shorter way. There was always something that surprised you. We thought this was a lot like a three-headed monster, so when you chopped off one of its heads, three others emerged; every solution was followed by more problems.
The health professionals who volunteered to join ETT did so for different reasons. Ebola preparedness was 'a job that had to be done', and 'someone had to do it'. Some referred to ethical or professional obligations: This is just a part of being a nurse, to encounter situations that can be dangerous to you or someone else, but you have made this decision and you deal with it. Some connected their decision to their 'action gene' or 'addiction to taking risks', while others said they had already raised their kids and had years of experience, including work with other epidemics, such as HIV. Yet, the practice of volunteering in the preparation was questioned. One participant said: We learned that we could not rely on volunteers … when you work in an infectious disease department you cannot choose what infections you want to work with.
ER staff indicated that for them working in the ER was enough of a risk to take, no reason to expose oneself even more by joining the ETT, and appreciated that others had volunteered.
All participants noted that co-operation and communication had generally functioned well during the preparedness planning, with information flowing both ways. Short communication lines within the healthcare system were perceived as both a strength and a weakness; a strength, insofar as people knew each other, but a weakness because of the uneven burden of workload. Staff of the ETT and in the ER felt they had been well-informed, and that openness and honesty had characterized the planning and diminished their initial fear. Those in managerial positions had listened and taken their opinions into consideration. One said: They were honest, no one was hiding anything, everything was on the table, no one tried to make things more appealing and say that everything would be OK, they just told us about things as they were.
Theme 2: trust, doubt and fear
Both management and participants from the ETT and ER expressed their ambiguity in terms of trust, doubt, and fear. Participants conveyed trust in the health system and their own role as health professionals, while at the same time admitting to facing formidable challenges during the elaboration of the preparedness plan. Facilities for isolation and treatment of patients with Ebola were less than perfect: We assessed how we could use the department … and change it in just a few hours into some kind of an isolation unit that we could possibly use.
Some compared this short-term isolation facility to a 'camping site', as the facilities were too provisional and not comparable to those found elsewhere. There was also doubt about how many Ebola patients LSH would be able to care for: 'Maybe one or two patients, barely more'.
Respondents believed that the training and education of the members of the ETT and ER had been satisfactory. They felt that it had been proportionate to the risk, while some were concerned about the lack of staff. Nonetheless, there were contradictions on the division of labour among the professionals, exemplified by different ideas on how to proceed if a patient suspected of having an EVD came in an ambulance to the LSH for treatment. Almost all participants stated that they were ready to do their part in the Ebola response, or 'as ready as [we] could be'.
There were diverse opinions on what it meant to be ready: to treat one confirmed case of Ebola, one suspected case, or more EVD patients? When asked if Ebola was a real threat to the country, participants usually referred to how easy it was to travel the globe: 'Yeah, why not, the world is getting smaller'. Although Ebola was thought of as a real danger by many, some participants expressed difficulty in taking their training seriously, doubting that Ebola would ever reach Iceland. One respondent said: People were dedicated in the beginning, but when the news appeared that Ebola was receding, that diminished, and I never felt like this formally ended.
Participants described their relief that nothing really happened, while emphasizing the need to experience a real situation to evaluate the preparedness efforts. One participant said that 'a little bit more seriousness [would have been] needed in the PPE practices'.
It was taken as a manifestation of fear that some of the staff in the communicable disease department of the LSH refused to take part in the ETT. When describing their fears, ETT members frequently connected it to their working conditions. Many of them were afraid that they would not get the best PPE, others that they would not do the donning/doffing correctly and, lastly, they were worried about work performance while in the PPE. One participant said: What bothered most of us was how uncomfortable the PPE was and I think that made people nervous: "How will I manage working in this for hours?" Another described the donning/doffing process like a 'complicated ballroom dance'. Moreover, participants were afraid of 'unknown territories', that is, they did not know the hospital ward, they were supposed to work in, and some team members had no recent experience of clinical work. One participant said: I didn't think these [non-clinical] people belonged in the team, because this is a very clinical environment in addition to having to be in this costume [PPE] with the risk of becoming infected by mistake.
Those with non-clinical background were, however, aware of their limitations: I realized that I would not be the one in the front, I would not be managing patients directly.
The importance ascribed to teamwork was evident in relation to fear. Participants described fear of working with people they had not worked with before: The weakest link in the preparation was that even though I knew their faces, I had never worked with them.
Another issue was no-show by some team members in training sessions or in lectures: This is team-work, one does this and the other one does this, [we] help each other. Then you don't want to be working with someone who didn't show up.
Another one said:
There were a lot of doctors who just dropped in, dropped out, and then dropped in again. I asked myself: Are these individuals … ready to take this on?
Participants in the ETT mentioned the precautions they took or intended to take to cope with their feelings of fear, should Ebola emerge in Iceland. A major precaution was planning to avoid contact with the family while working with Ebola patients. One participant said: 'You thought … about your children at school … parents in the neighbourhood …' if they knew (s)he was working with an Ebola patient. For them, it was important they would have access to special accommodation in case of clinical EVD work 'so I wouldn't be exposing anyone or creating hysteria'. ETT members mentioned the extra insurance offered as a prerequisite for taking part in the team. 'The normal insurance for LHS staff would not cover everything if we were to become sick or even lose our lives.' Amongst ER staff, the matter of insurance did seem to be less of an issue compared to the ETT. One respondent said: 'You are used to being at risk by many disease threats'. Furthermore, the issue of higher salaries and risk commission came up in the interviews, but overall did not matter as much to the participants as the insurance, or assurance of accommodation in case of need.
Theme 3: the Icelandic way
Characteristics associated with Iceland and the Icelandic people were referred to repeatedly by participants. The concept 'Tiny Iceland' was often mentioned and emerged with both positive and negative connotations. 'Tiny Iceland' referred to the size of the country and its population and its perceived capability to still 'get the job done', even though compromises had to be made. Comparing how Iceland handled its responsibilities differently from countries of a larger size was often brought up, both with pride in Iceland as a strong independent nation, and with insecurities about its capacity in comparison to other countries. It was pointed out that since the preparedness process was in the hands of a few people, everyone knew their role. As one administrator said: This little hospital system, as complicated as it might seem every day, gives you the chance to just pick up the phone and call the one in charge.
Being a small population presents challenges regarding resources, infrastructure, and specialized medical training to comply with standards of international actors. Notions of Icelanders as resilient in spite of shortcomings were common; referring to the experience of preparedness planning and training, one health staff said: It was very much the Icelandic way, we'll manage, we'll work it out, and there was so much ingenuity. This notion of a particular Icelandic approach to coping, in spite of shortcomings, was also detected more generally, as in the statement: Would it have worked? Yes, it would have worked. Would it have been optimal? We cannot say, it would have been optimal; we can say, it would have been sufficient.
In contrast to this, there were concerns about whether Icelandic aid workers falling ill in Ebola-affected countries should be transferred to Iceland or to hospitals in other Nordic countries with better isolation units. Some of the participants trusted that patients with EVD would not be transferred to Iceland. One participant stated: 'You heard that Norwegians were criticized for transferring their aid worker from Africa to Norway. We don't know what would have happened if they would have transferred an Icelander into the country.'
Another participant said:
We don't have good enough isolation units; you are not supposed to send patients to a hospital that is less than 100%. I thought there was assurance in that.
Discussion
During the devastating Ebola epidemic in West Africa that spread to neighbouring sub-Saharan countries, North America, and Europe [32], preparedness plans were widely elaborated and later evaluated. Evaluations have, for example, been conducted in 11 African countries close to the epidemic [33], in the EU region [34,35], and the US [36]. Here we present data from a qualitative case study on the process, and experiences with establishing a preparedness plan for Ebola in Iceland in 2014. Interviews with staff who were engaged, either as administrators or frontline healthcare workers, alert us to the manner in which geographic, demographic, cultural, and organizational characteristics shaped the response. The results show that the process of establishing and training for preparedness was permeated by ambiguities of pride and pragmatism, trust, doubts, and fear.
'Getting the job done' (theme 1) refers to the multitude of tasks and considerations that surround and feed into the preparedness plan itself and are necessary for successful planning and implementation. Using the metaphors of 'hard core' and 'soft periphery', Langley and Denis [37] emphasize the importance of relatively 'peripheral' concerns and processes for planning and implementation of new interventions. The hard core represents the actual intervention or goal, e.g. implementation of a preparedness plan. The soft periphery refers to all the contextually important networking, negotiations, and agreements necessary to deliver the hard core. If the soft periphery is neglected, it will cause multiple challenges in the implementation process, and the benefit of the hard core, the intervention itself, may not transpire as anticipated. Due attention to the soft periphery may, however, considerably promote the delivery of an innovation, and secure support from important stakeholders. In our data, one manager speaks of the preparedness process as dealing with a three-headed monster, where every solution was followed by new problems. The data indicate that the process of dealing with 'the three-headed monster' was given due attention as a means to successfully develop Iceland's preparedness plan. Comprehensive consultations and the involvement of many associated institutions were mentioned. Still, ambiguity remained among some staff in terms of the division of responsibilities and tasks, e.g. when transporting a patient potentially infected with Ebola from the airport to the hospital, and other such activities.
During epidemics, rumours, gossip, and unreliable information on the news and social media spread rapidly, resulting in so-called 'infodemics' [38]. The West African Ebola epidemic was covered widely by media [39], and the fear of Ebola reached every corner of the world, exemplified by travel bans from affected countries, and trade barriers [40], in contrast to the ongoing epidemic in the Democratic Republic of Congo [41,42]. In our second theme, trust, doubt, and fear of health workers were represented. Although all intentions were good, concerns remained about the suitability and safety of the isolation ward, the PPE, and other tools, as well as adequate engagement of colleagues who might potentially work alongside them, in case an Ebola patient came to Iceland. The foreignness of putting on, removing, and working from within a PPE and the trustworthiness of available PPE were mentioned. In preparedness efforts in other countries, scarcity of resources in relation to manpower demand and problems with training and protocols involving PPE were common challenges [35]. Similar problems were encountered in Iceland. A provisional treatment facility had to be designed, called a 'camping site' by some, in contrast to facilities found elsewhere [43]. Further, the ETT was established based on voluntary recruitment rather than on the staff's assigned roles within the healthcare system, a procedure that was deemed less than optimal. The members of the ETT pointed out that they had never worked together as a team under circumstances that demanded strict adherence to infectious control procedures. This eroded trust, compounded by the laissez-faire attitude of some of its members during the preparation exercises, possibly due to other competing tasks in a busy hospital and insufficient resources that hampered full participation [44]. Further, it was a constraint that simulation exercises were not an option, as these have been found to be an important element in preparation for epidemics [35].
This might have resulted in less than optimal staff protection for those who would have been in direct contact with an infected patient, as reported during the SARS epidemic in Canada [45,46].
Anthropological work on emergency preparedness emphasizes the connectedness between health professionals, technological devices, and knowledge as a prerequisite for successful preparedness. Wolf and Hall present preparedness efforts as a form of governance that involves human bodies (those of health professionals), clinical architectures (e.g. isolation wards), and technical artefacts (gloves, protective suits, disinfectants, etc.) [47]. During preparedness training and implementation, 'nursing bodies are transformed into instruments of preparedness', and become part of infrastructural arrangements. Health professionals are, here, both vulnerable and powerful tools in the management of contamination. The authors argue that successful planning, training, and implementation of a preparedness plan require such intrinsic connectedness. In the case of Ebola preparedness in Iceland, health professionals draw our attention to dilemmas of connectedness, and their assessment of the fact that these shortcomings might hamper the mobilization of 'preparedness within the human body', that is, the embodied experience, routine, and tacit knowledge which Wolf and Hall state are key to successful implementation. Repeated enactment of receiving and treating a patient with Ebola within experienced and trustful teams would probably enhance such embodiment, provided that there is justified trust in the involved technology. In addition, repetition would also strengthen the 'soft periphery' of preparedness, and divisions of responsibilities would be more clearly manifested.
In the third theme, we observe how notions of the 'Icelandic way' help participants make sense of ambiguities about Ebola preparedness. Loftsdóttir explored how people negotiated the imagination of the local and the global during the 2008 economic crisis in Iceland [48]. Notions of the intrinsic character of Iceland, and of being Icelandic, serve to underscore certain points and explain positive and negative experiences with the preparedness plan. Iceland is far away from the continents, but still connected through global needs for policy, risk of contamination, and dependency in terms of collaboration, in emergencies emerging from elsewhere. In our study, participants highlighted the importance of believing in oneself and the 'Icelandic way of doing things,' summed up in the paraphrase 'þetta reddast' (things always have a way of working out in the end). The preparedness plan had to be completed, and adapted to Iceland's particular global situation.
In the 21st century, the world has faced new epidemic threats, such as SARS, and old scourges such as the plague have resurfaced [38]. One of the main findings on Ebola preparedness measures in the EU was that measures taken were based on past preparedness and experience of other epidemics, such as SARS and H1N1 [35]. Further, key stakeholders within each country found their measures to have been adequate for dealing with a single case of Ebola, as was the case in Iceland. A preparedness plan for pandemic influenza in Iceland was elaborated in 2006, activated in response to the H1N1 epidemic in 2009, and revised in 2016 [49]. During the elaboration of these plans, communication among the different levels of the healthcare system and supporting agencies, such as the DCPEM, had been clearly defined, and proved to be useful in the preparedness for Ebola. Further, as found important in preparedness activities for pandemic influenza elsewhere [44], honesty, transparency in communication, and sharing of information from managers to front-line health professionals was found to be critical. It gave a feeling of being involved, and mitigated the fear that is so frequently encountered during epidemics [38].
Conclusions
Iceland was far away from the epicentre of the Ebola epidemic in West Africa. Yet this case study shows that health professionals felt the strain of possibly having to treat one or more patients with EVD. Their situation stands in sharp contrast to the situation in the three worst affected West African countries that lacked staff, stuff, space, and systems to effectively address the challenge of EVD. Although Icelandic health professionals had trust in the national healthcare system, and in their own capacity, doubt and fear influenced the reflections on preparedness planning of both administrators and healthcare staff. References to national identity and the characteristic of an 'Icelandic approach' to handling challenges assisted participants in coming to terms with the experienced shortcomings of the preparedness plan, and underscored the pride in the ingenuity applied in the process. These references negotiate the role and character of the nation of Iceland, and its role in a globalized world, as both a small and isolated nation on one hand, and a central and capable one, on the other.
The experienced ambiguity needs attention in a health system and among healthcare staff that have to act resolutely and unfailingly, should they be placed in charge of containing contamination. This study points to the necessity of repeatedly re-enacting, as realistically as possible, the likely scenarios of receiving and treating one or more patients infected with Ebola (or other contagious global health threats) as a routine matter. This would assist in the identification of overlooked 'soft periphery' concerns, and promote embodied preparedness among teams of healthcare staff on the frontline.

Author contributions

Geir Gunnlaugsson conceptualized the study, and took part in all necessary steps towards its completion, such as analysis and interpretation of data, and writing the manuscript for submission. Íris Eva Hauksdóttir collected and analysed the data as part of a master thesis work conducted under the supervision of all three co-authors, revised the manuscript, and approved the final version. Ib Bygbjerg took part in the interpretation of data, revision of the manuscript, and approved the final version. Britt Pinkowski Tersbøl took part in designing interview tools and in the thematic analysis of interview data, interpretation, revision of the manuscript, and approved the final version.
Disclosure statement
Dr. Gunnlaugsson reports he was the Chief Medical Officer (CMO) for Iceland, Directorate of Health, in the period 2010-2014. Other authors report no conflict of interest.
Ethics and consent
The study was reported to the Data Protection Authority and approved by the National Bioethics Committee in Iceland (number VSI- ). Subsequently, the study was approved by the University Hospital Ethical Committee on 4 February 2016 (number LSH [13][14][15][16]. Participants signed an informed consent form before taking part in the study.
Funding information
Not applicable.
Paper context
The manuscript builds on the work of Íris Eva Hauksdóttir towards a MSc in Global Health, Section of Global Health, Department of Public Health, Copenhagen University, Denmark.
Evaluation of IVOCT imaging of coronary artery metallic stents with neointimal coverage
Accuracy of IVOCT for measurement of neointimal thickness and the effect of neointima on the appearance of metallic struts in IVOCT images was investigated. Phantom vessels were constructed and coronary stents were deployed and covered with thick (250–400 μm) and thin (30–70 μm) phantom neointima. High resolution Micro-CT images of the stent struts were recorded as a gold standard. IVOCT images of the phantom vessels were acquired with various luminal blood scattering strengths, and measured neointimal thicknesses from IVOCT and Micro-CT images were compared. In a transparent lumen, comparison of IVOCT and Micro-CT neointima thickness measurements found no significant difference (p > 0.05) in the thick neointima phantom but a significant difference (p < 0.05) in the thin neointima phantom. For both thick and thin neointima, IVOCT neointimal thickness measurements varied from Micro-CT values by as much as ±35 %. Increased luminal scattering due to the presence of blood at concentrations <5 % did not interfere with measurement of thin neointimas and was validated by ANOVA analysis (p = 0.95). IVOCT measurement of strut feature size with an overlying thin neointima matched true values determined with Micro-CT (p = 0.82). Presence of a thick neointima resulted in lateral elongation or merry-go-rounding of stent strut features in IVOCT images. Phantom IVOCT images suggest that thick neointimal layers can result in more than 40 % lateral elongation of stent strut features. Simulated IVOCT images of metallic stent struts with varying neointimal thickness suggest that neointimal light scattering can introduce the merry-go-round effect.
Introduction
Formation of neointima after stent deployment is an important indicator of the vascular healing process. While the presence of neointima is desired in the healing response, if the neointima thickens excessively, restenosis results, a frequent complication with bare metal stents. To prevent restenosis, drug-eluting stents limit neointimal formation by releasing immunosuppressant pharmaceuticals which inhibit smooth muscle cell proliferation. Cypher (Cordis Corp., Miami Lakes, Florida) and Taxus (Boston Scientific Corp., Natick, Massachusetts) stents resulted in delayed neointimal formation when compared with bare metal stents of similar implant duration [1]. These findings have been extended to second-generation drug-eluting stents [2].
Human autopsy studies suggest that the lack of neointimal strut coverage due to delayed vascular healing is associated with acute stent thrombosis [3]. Therefore, accurate in vivo assessment of neointimal formation after stenting during long term follow-up may aid in the identification of patients at risk for late stent thrombosis. Intravascular Optical Coherence Tomography (IVOCT), with high axial resolution (15 μm) and tissue penetration depth of 1.5-2.0 mm, offers the best imaging technology to assess neointimal thickness compared to e.g. Intravascular Ultrasound [4][5][6]. However, clinical interpretation of a neointimal thickness that is at or near the axial resolution of the IVOCT system is not well developed. Moreover, the impact of luminal scattering due to residual blood on neointimal thickness measurement has not been investigated. In cases where struts may have partial tissue coverage, some experts consider the strut as covered, whereas others suggest that these struts be classified as having incomplete coverage [7,8]. A study on the accuracy of IVOCT in analyzing the neointimal response to several drug-eluting stents showed significant variation in the estimation of strut coverage between IVOCT and histology when the neointimal thickness was between 20 to 80 μm [9], which is the range of thicknesses corresponding to thin neointimas. Due to the known artifact of tissue shrinkage in histology, phantoms will provide a more accurate standard for neointimal thickness measurement than human autopsy specimens. One objective of the present study was to evaluate the accuracy of IVOCT thickness measurements of thick and thin neointimas in a phantom vessel.
Luminal scattering caused by residual blood that was incompletely flushed and/or formation of micro-bubbles when contrast is injected into the lumen can impact clinical IVOCT images. As increased luminal scattering reduces intensity of detected light and results in loss of fine image details, the effect could impact measurement of thin neointimal layers. For thicker neointimal layers, increased luminal scattering may influence apparent strut feature size. A second objective of this study is to evaluate the accuracy of neointimal thickness measurements by IVOCT in presence of varying luminal scattering strengths due to residual blood contamination. The impact of thick neointima on the longitudinal size of strut features is also characterized as a new etiology for the merry-go-round artifact [10].
Phantom vessel fabrication
A mold consisting of a 3 mm inner-diameter brass cylinder positioned inside an outer cylindrical aluminum housing (6 mm diameter × 25 mm length) was constructed to make phantom vessels. To register IVOCT and Micro-CT images (see below), a 125 μm diameter glass optical fiber oriented parallel to the long axis was embedded in the phantom vessel and used as an azimuthal reference marker. Two phantom vessels were made by pouring polydimethylsiloxane (PDMS) mixed with titanium dioxide into the mold. After curing, the phantom vessel was removed from the mold and 3 × 8 mm TAXUS® Liberté® stents were deployed at a balloon pressure of 16 atm for 30 s (Fig. 1).
Addition of phantom neointimal coverage
Neointima was added to phantom vessels, giving a thin and a thick layer. A two-piece aluminum cylinder, coated with Teflon, was inserted into each phantom vessel and a mix of PDMS and titanium dioxide was injected into the space between the aluminum cylinder and vessel wall to form a neointima. For the thin neointima, where thickness was <50 μm, a scattering coefficient of 12.7 cm⁻¹ was taken to represent early after stenting (<1 day), which is similar to the cellular epidermal layer in skin [11]. For the thick neointima, where thickness was <450 μm, a scattering coefficient of 8.1 cm⁻¹ was taken to represent long-term stenting (more than 2 weeks), similar to the fibrous dermal layer in skin [11]. After curing, the two halves of the aluminum cylinder were removed, leaving a phantom neointima covering the deployed metallic stent.
Micro-CT imaging
Micro-CT images of TAXUS® Liberté® stents deployed in the phantom vessels with neointima were acquired and utilized as a 'gold standard' to determine stent strut size and neointimal thickness. For each slice, 1,000 views were recorded and the field size of image reconstruction was 6 × 6 mm². Each image slice was rendered at 1,024 × 1,024 pixel resolution, resulting in an in-plane resolution of 5.86 μm per pixel. When imaging thin neointima, higher power was used to achieve a resolution of 3.5 μm (both in-plane and inter-slice).
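As a quick check on the quoted figure, the in-plane resolution follows directly from the reconstruction field size and pixel count; a minimal sketch (the function name is ours):

```python
def inplane_resolution_um(field_mm: float, pixels: int) -> float:
    """In-plane resolution of a reconstructed Micro-CT slice:
    field width divided by pixel count, converted to micrometers."""
    return field_mm * 1000.0 / pixels

# 6 mm field rendered at 1,024 pixels -> roughly 5.86 um per pixel, as quoted
print(inplane_resolution_um(6, 1024))
```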
IVOCT imaging
Images were acquired using a frequency domain IVOCT system (CorVue, Volcano Corporation) while the phantom vessel was flushed with saline. The CorVue system operated at a 1,310 nm center wavelength with a 12 μm axial resolution at an A-scan rate of 20 kHz. The catheter was pulled back through the phantom during image acquisition.

IVOCT/Micro-CT image registration

Image registration between IVOCT and Micro-CT data sets was performed in MATLAB (Mathworks, R2013a). Since IVOCT frames were recorded at 50 μm intervals, a single IVOCT frame corresponded to 9 sequential Micro-CT images of thick neointima and 15 sequential images of thin neointima. Longitudinal registration was completed by comparing the subset of Micro-CT images corresponding to a given IVOCT image. The 125 μm diameter glass optical fiber oriented parallel to the long axis was visible in both IVOCT and Micro-CT images and used as a marker for azimuthal registration. Following image registration, neointimal thickness and strut size feature measurements derived from IVOCT and Micro-CT images were performed manually in MATLAB (Mathworks, R2013a). For both thin and thick neointima, thickness taken from IVOCT images was measured from the middle of the strut feature to the luminal wall. Neointimal thickness determined from Micro-CT images was measured from the proximal edge of the strut feature to the luminal wall.
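The longitudinal frame correspondence described above can be sketched as follows; the Micro-CT inter-slice spacing default is an illustrative assumption taken from the in-plane resolution quoted in the text:

```python
def microct_slices_for_frame(frame_idx: int,
                             ivoct_interval_um: float = 50.0,
                             ct_slice_um: float = 5.86) -> list:
    """Return the Micro-CT slice indices covered by one IVOCT frame,
    assuming both stacks start at the same longitudinal position
    (the embedded optical fiber handles azimuthal registration)."""
    start = round(frame_idx * ivoct_interval_um / ct_slice_um)
    stop = round((frame_idx + 1) * ivoct_interval_um / ct_slice_um)
    return list(range(start, stop))
```

With a ~5.86 μm slice spacing, each 50 μm IVOCT frame maps to roughly 8-9 Micro-CT slices, consistent with the 9-slice correspondence stated above for thick neointima.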
Statistical analysis
Statistical analysis was performed using a paired t-test to compare the neointimal thickness measurements of IVOCT and Micro-CT images as well as the strut feature size measurements of IVOCT and Micro-CT images for both thick and thin neointima phantoms. For both paired t-tests, the null hypothesis is that the IVOCT and Micro-CT measurement means are equivalent. Analysis of variance (ANOVA) was performed to compare neointimal thickness for four luminal blood scattering strengths (0.5, 1.0, 2.0 and 5.0 %) in the thin neointima phantom. Twenty independent measurements of neointimal thickness in IVOCT images were taken for each luminal blood scattering strength. For the ANOVA test, the null hypothesis is that the mean neointimal thicknesses for each luminal blood scattering strength are equivalent.
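For reference, the paired t statistic underlying these tests is a short computation; this sketch uses only the standard library and hypothetical numbers (in practice scipy.stats.ttest_rel and scipy.stats.f_oneway supply the p-values as well):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic for a paired t-test of H0: mean(a - b) == 0."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical paired thickness readings (um), IVOCT vs. Micro-CT
ivoct = [45, 38, 52, 41, 47]
micro_ct = [33, 30, 40, 31, 36]
t = paired_t_statistic(ivoct, micro_ct)
```

A large |t| (compared against the t-distribution with n-1 degrees of freedom) leads to rejecting the null hypothesis of equivalent means.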
Simulation of neointima coverage
Elahi et al. [12] have described a computational model of an IVOCT catheter and a metallic stent strut using optical design software (ZEMAX, Radiant, Seattle, WA). In the present study, the same IVOCT catheter model was employed and the strut was covered with a 50 μm scattering layer simulating thin neointimal coverage, where a refractive index (n) of 1.42 and scattering coefficient (μs) of 12.7 cm⁻¹ were assumed for early neointima (epidermis). For the thick neointima (400 μm), n = 1.37 and μs = 8.1 cm⁻¹, corresponding to late neointima (dermis) [11,13].
Accuracy of neointimal thickness measurement
Measurements of neointimal thickness from IVOCT images were compared to values obtained from Micro-CT images, which show the true neointimal thickness. Examples of thickness measurement for thick and thin neointimas are given in Fig. 2, where we were able to accurately measure neointima as thin as 30 μm in IVOCT images.
Thickness measurement was completed for 141 IVOCT frames of thick neointima and 25 IVOCT frames of thin neointima (in some cases phantom neointima did not cover the entire stent) and the corresponding Micro-CT images, given in Table 1. To obtain IVOCT neointimal thickness, measured IVOCT optical pathlengths were divided by refractive index of PDMS (n = 1.405) [14].
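The pathlength correction applied here is a single division by the refractive index; a one-line sketch of the conversion (n = 1.405 for PDMS, as in the text):

```python
def optical_to_physical_um(optical_path_um: float, n: float = 1.405) -> float:
    """Convert an IVOCT optical pathlength to physical thickness by
    dividing by the refractive index of the medium (PDMS here)."""
    return optical_path_um / n

# e.g. a 70.25 um optical pathlength corresponds to 50 um of PDMS
```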
A Bland-Altman plot was constructed to display differences between thickness measurements of thin and thick neointima using IVOCT and Micro-CT (Fig. 3). The mean difference between neointimal thickness measured with IVOCT and Micro-CT for both thick (15 μm) and thin (11 μm) neointima exhibits a non-negative offset.
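A Bland-Altman comparison reduces to the mean paired difference (bias) and its 95 % limits of agreement; a minimal sketch with made-up numbers:

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Bias and 95 % limits of agreement (bias +/- 1.96 * SD of the
    paired differences) between two measurement series."""
    d = [x - y for x, y in zip(a, b)]
    bias = mean(d)
    half_width = 1.96 * stdev(d)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired readings; returns (bias, lower limit, upper limit)
bias, lo, hi = bland_altman_limits([10, 12, 14], [9, 10, 13])
```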
The paired t-test rejected the null hypothesis for thin neointima (p = 0.003) and failed to reject it for thick neointima (p = 0.08).
Effect of blood luminal scattering on measurement of thin neointimal layers

IVOCT images of the phantom vessel with a stent deployed with thin neointimal coverage were acquired where luminal scattering strength was varied by different blood-saline mixes (0.5, 1.0, 2.0 and 5.0 %). As suggested in Fig. 4, increased blood concentrations increased luminal scattering; however, the presence of up to 5 % blood does not appear to affect thickness measurement of thin neointimas. ANOVA analysis failed to reject the null hypothesis (p = 0.96), which suggests that mean neointimal thicknesses measured with IVOCT are independent of luminal scattering strength over the investigated blood concentrations.
Merry-go-round effect due to neointima
Scattering by thick neointimas resulted in elongation ('merry-go-round') of strut features in IVOCT images. Figure 5 illustrates examples of strut feature size (length of red double-sided arrows) in the presence of thick and thin neointimas (Table 2). The relatively high standard deviation in measurement results from variation in strut size. No correlation was observed between the strut size and neointimal thickness in the range of 200-400 μm, consistent with a marked 'merry-go-round' artifact with thick neointima.
A Bland-Altman plot was constructed to investigate IVOCT measured strut feature size with overlying thin and thick neointima (Fig. 6). The mean difference (4 μm) between strut size measured with IVOCT and Micro-CT for thin neointima is nearly zero and less than the IVOCT lateral resolution. For thick neointima, the mean difference (60 μm) between strut size measured with IVOCT and Micro-CT suggests a significant broadening of strut feature size in the presence of an overlying thick neointima.
The paired t-test for thin neointima measurements failed to reject the null hypothesis (p = 0.82), which suggests that IVOCT measurements of strut feature size in the presence of thin neointima on average match Micro-CT values. The paired t-test for thick neointima measurements rejected the null hypothesis (p < 0.001), which suggests that IVOCT measurements of strut feature size in the presence of thick neointima do not match Micro-CT values.
Simulated neointima
Optical simulations were performed to investigate the effect of light scattering by neointima using user-defined, more realistic optical properties; for example, the PDMS used to construct phantoms is isotropic (g ≈ 0) whereas tissue is forward scattering (g ≈ 0.8). Figure 7 illustrates the effect of thin and thick neointima on the appearance of the strut feature. An increase in neointimal thickness results in a merry-go-round artifact, with an artifactual increase in the strut feature size of nearly 50 μm due to an overlying thick (400 μm) neointima. In this case, incoming and reflected IVOCT light is scattered inside the neointima and then collected by neighboring A-scans adjacent to the actual strut, thereby artifactually increasing strut feature size by 66.7 % in the IVOCT image.
Discussion
IVOCT is the leading imaging modality for neointima detection and thickness measurement in vivo owing to the high resolution images it provides. The CorVue system utilized in this study is similar to existing commercial OCT systems (i.e., same center wavelength and similar imaging resolution), only operated at a slower A-scan rate of 20 kHz.

The impact of luminal scattering on thickness measurement of thin neointimas was examined by using different concentrations of blood mixed with saline. Experimental data suggest that increased luminal scattering caused by residual blood concentrations <5 % in the lumen does not affect thickness measurement of thin neointimas. ANOVA statistical analysis confirms no significant variation (p = 0.95) exists between IVOCT neointimal thickness measurements at the four tested luminal blood scattering concentrations. The results suggest that in the clinic, when residual blood (<5 %) remains in the lumen due to an incomplete flush, IVOCT measured neointimal thickness is not dependent on blood concentration.
The merry-go-round artifact in the presence of neointima was studied by phantom vessel experiments as well as optical simulations. In the case of thinner neointimas, metallic strut feature size measurements were not affected and IVOCT values match true values determined with Micro-CT. A paired t-test p-value (p = 0.82) supports the null hypothesis that on average IVOCT strut feature size measurements match Micro-CT values. Thicker neointimas, however, clearly introduce a relatively large (43 %) merry-go-round effect, and a paired t-test p-value (p < 0.05) confirms IVOCT measured strut feature sizes in this case are significantly different than true values. Optical simulations support that neointimal scattering can introduce the merry-go-round effect; as the neointimal layer becomes thicker, light reflecting from the strut surface undergoes multiple forward scattering events and is collected in adjacent neighboring A-scans (Fig. 7: simulated B-scans in the presence of 50 μm and 400 μm neointima). In this circumstance, metallic struts in the IVOCT image can appear elongated and the arterial wall may be observed behind the artifactually formed edges while shadowing is confined to the mid portion of the strut feature. Although the results presented here suggest the size of the strut feature in IVOCT images may be difficult to interpret, the ratio of the size of the strut shadow to strut feature size in the absence of luminal scattering may provide a coarse measure of the composite scattering strength of the overlying neointima. Further studies will be required to investigate the potential diagnostic utility of analyzing this effect.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Composition and structure of fish assemblage from Passa Cinco stream , Corumbataí river sub-basin , SP , Brazil
The aim of this work was to determine the composition of the fish assemblage of the Passa Cinco stream and verify changes in its structure along the altitudinal gradient. Six sampling campaigns were carried out at five different sites in the Passa Cinco stream (from the headwaters, at order two, to its mouth, at order six), using electric fishing equipment and gill nets, in May, July, September and November of 2005 and January and March of 2006. The indices of Shannon's diversity, Pielou's evenness and Margalef's richness were quantified separately for the different fishing gear (nets versus electric fishing equipment). An ANOVA was used to compare the samples collected in relation to values of abundance, diversity, evenness and richness. The representativeness of the species was summarised by their average values of abundance and weight. We captured 5,082 individuals distributed among 61 species. We observed a trend of increasing diversity, richness and evenness of species from sites 1 to 3, with a subsequent decrease at sites 4 and 5. The values found for habitat diversity also followed this pattern. Significant differences were found for all three indices considering the electric fishing samples. For individuals caught with nets, only the richness index showed a significant difference. Characidium aff. zebra was an important species in the headwater and transition sites, and Hypostomus strigaticeps in the middle-lower course sites. Despite the small extent of the Passa Cinco stream, structurally well-defined environments were evidenced by the species distribution and assemblage composition along the gradient.
Introduction
Fish distribution in an environment is rarely caused by a single factor. Changes in the fish species composition from the headwaters to the lower parts are a common phenomenon, and conceptual models based on temperate rivers seek to explain the mechanisms responsible for these processes (Matthews, 1998). Geomorphology is an important factor affecting the structure of fish communities in lotic environments (Allan, 1997), because from the headwaters to the mouth, the river goes through different terrain features, leading to changes in limnological characteristics and structural environment.
According to the river continuum concept (Vannote et al., 1980), we expect a gradual increase in species richness along the gradient, with the middle section as the most diverse area. These changes are usually associated with habitat changes along the gradient (Gorman and Karr, 1978). Matthews and Styron (1981) also suggested that the physical and chemical conditions in the headwaters are more stressful than in the lower portions, so that few fish species can colonize these areas.
Among the main patterns of longitudinal variation in stream fishes are species additions and replacements (Gilliam et al., 1993; Petry and Schultz, 2006). Species additions are generally correlated with less severe environmental gradients, leading to smooth changes in abiotic and/or structural factors, while species replacements occur as a result of abrupt changes in stream geomorphology or are related to abiotic conditions (Balon and Stewart, 1983; Winemiller and Leslie, 1992; Edds, 1993; Jackson et al., 2001; Wilkison and Edds, 2001; Ferreira and Petrere, 2009).
The Passa Cinco stream is one of the main rivers of the Corumbataí river sub-basin, which belongs to the Piracicaba river basin, one of the last remaining sources of good water quality for several municipalities. Situated next to major urban, agricultural, technological and scientific centres of southeastern Brazil, it has been degraded for over a century through land use and occupancy and by excessive withdrawal of water for human consumption and agriculture. Given the regional importance of these water bodies, the aim of this study was to determine the composition of the fish assemblages in the Passa Cinco stream and measure the changes in their structure along the longitudinal gradient.
Material and Methods
This study was carried out in the Passa Cinco stream, whose headwaters are located in the Serra da Cachoeira, a component of the Serra de Itaqueri complex, in the municipal district of Itirapina. The drainage area of the Passa Cinco stream is 525 km², covering about 60 km from its headwaters (at about 1000 m of altitude) to its confluence with the Corumbataí river (at 480 m) (Garcia et al., 2004). Currently, 51.72% of its area is occupied by pastures, 14.13% by sugar cane, 15.67% by native forest and 0.74% by savanna (Valente and Vettorazzi, 2002).
At each sample site, the predominant substrate type was recorded, as well as the presence/absence of riparian vegetation, degree of shading, type of current and mean depth, as shown by Rondineli and Braga (2009). From these data, the habitat heterogeneity at each site was estimated using the Shannon diversity index (Gorman and Karr, 1978).
The fishing gear used was electric fishing equipment and gill nets. The electric fishing equipment (a generator providing 110 V to a current rectifier capable of raising the voltage up to 1500 V and reducing the amperage to 2 A) was used in the first three sites (1, 2 and 3). In these sites we performed a downstream-upstream pass along 50 m stretches without using contention nets. Gill nets (with mesh sizes varying from 3 to 9 cm between adjacent knots) were used in sites 3, 4 and 5. The sequence of gill nets was determined at random; they were placed in the afternoon (between 15:00 and 18:00) and remained until the following morning. After each collection, fish were put into plastic bags, fixed in 10% formalin for 2 days and then transferred to 70% ethanol until the analysis was carried out. In the laboratory, the fishes were identified to species level, measured for total length (cm) and weighed (g). Voucher specimens were deposited in the Ichthyology Laboratory, Department of Zoology of the Universidade Estadual Paulista, in Rio Claro.
The indices of Shannon's diversity, Pielou's evenness and Margalef's richness were calculated for each sample site, separating the individuals captured by electric fishing from those captured by gill nets. To detect mean differences, ANOVAs were performed for each index to compare the sites sampled with the same fishing gear (P1, P2 and P3, electric fishing; P3, P4 and P5, gill nets).
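As a concrete illustration of the index calculations and the ANOVA comparison described above, the sketch below computes Shannon's H', Pielou's J' and Margalef's richness from species counts and runs a one-way ANOVA; the numbers are hypothetical, not the study's data, and `scipy` is assumed to be available.

```python
import math
from scipy.stats import f_oneway  # one-way ANOVA

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def pielou(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the species richness."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s) if s > 1 else 0.0

def margalef(counts):
    """Margalef's richness D = (S - 1) / ln(N), N = total individuals."""
    n = sum(counts)
    s = sum(1 for c in counts if c > 0)
    return (s - 1) / math.log(n)

# Hypothetical per-survey Shannon diversity values for three sites
# (six surveys each), compared with a one-way ANOVA as in the study:
site1 = [1.20, 1.30, 1.10, 1.25, 1.15, 1.30]
site2 = [1.60, 1.70, 1.65, 1.55, 1.70, 1.60]
site3 = [2.00, 2.10, 1.95, 2.05, 2.00, 2.10]
F_stat, p_value = f_oneway(site1, site2, site3)
```

A perfectly even two-species sample gives H' = ln 2 and J' = 1, a quick sanity check for the implementation.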
The representativeness of the species was summarised by their mean values of abundance (N̄ = N/F₀) and weight (P̄ = P/F₀), where N is the number of individuals, P the weight and F₀ the frequency of occurrence (Ferreira, 2007). Thus, N̄ and P̄ measure the local importance of each species sampled in each order. The most and least important species were highlighted. This analysis was performed for each sample site, considering the different fishing gears.

Results

The substrate type changed gradually from site 1 to site 5; marginal vegetation (grasses along the stream bank) occurred at all sampling sites except site 1. The degree of shading decreased from the headwater to the mouth. The predominant current types were: riffle in site 1; riffle and run in site 2; riffle, run and pool in site 3; and run in sites 4 and 5. The mean depth increased from upstream to downstream (Table 1 and Figure 2). Site 3 showed the highest habitat diversity, followed by sites 5 and 4 (Figure 3).

We captured 5082 individuals belonging to 61 species. Characidium aff. zebra was the most abundant species (Table 2). The number of individuals, species richness and the indices of Shannon's diversity, Margalef's richness and Pielou's evenness are shown in Table 3 for each site and fishing gear. The largest catches occurred in sites 3, 2 and 1, sampled with electric fishing. The greatest richness and diversity were obtained in site 3, regardless of the fishing gear used. The evenness values for all sites were around 0.8.

For sites 1 to 3 (electric fishing), the ANOVA showed significant differences (p < 0.001, Table 4) in the indices of richness, diversity and evenness, so we can accept that the Shannon diversity changed due to variation in both richness and evenness. For sites 3 to 5 (gill nets), only the richness index differed significantly (p = 0.047, Table 4).

Figure 4 presents the mean values, standard errors and 95% confidence intervals for the indices of diversity, richness and evenness, as well as for the number of individuals, at the five sampling sites, considering separately the sites sampled with electric fishing (sites 1, 2 and 3) and gill nets (sites 3, 4 and 5). An increase in diversity, evenness and richness can be seen from site 1 to site 3. Moreover, a decrease in richness was found from site 3 to site 5, with the lowest value in site 4. Regarding the number of individuals, there is a greater abundance where electric fishing was used. However, the fishing gear had more subtle effects on the indices of diversity, evenness and richness, as shown by site 3, the only one in which it was possible to apply both techniques.

When species composition was analysed in terms of mean weight and mean abundance per sample, it was possible to see which species best represent the sampled sites. Considering only the individuals captured with electric fishing, Characidium aff. zebra, Trichomycterus sp. 1, Imparfinis mirini and Cetopsorhamdia iheringi were among the most important species (Figures 5 to 7).
Discussion
The Passa Cinco stream has a steep gradient, given its location in the cuestas of São Pedro and Analândia. In the headwaters the water flow is faster, and the stream is shallower and narrower. These characteristics directly influence the substrate composition: in such places the rocks are somewhat larger, and little suspended matter or bottom particulate is found. All these features are inverted along the channel. The large amount of water received in a short time, resulting from heavy summer rains, makes the environment very susceptible to runoff, which quickly raises the water level. The substrate, depth and current are some of the most important physical aspects in determining the distribution of stream fishes (Gorman and Karr, 1978; Argermeier and Karr, 1983). The combination of these environmental characteristics produces a mosaic of microhabitats that changes along the gradient according to the physical conditions, which requires adjustments from the biological communities living there (Vannote et al., 1980).
In São Paulo state, the Alto Paraná system includes the major rivers and contains 38 families and 310 described fish species (Langeani et al., 2007). In association with these major rivers, there is a large number of headwater streams inhabited mainly by small-sized fish species with restricted geographic distributions, such as Bryconamericus turiuba, Astyanax bockmanni and Corumbataia cuestae. The Passa Cinco stream is one of these streams, with large numbers of small-sized species, some with restricted distributions.
Considering species diversity, richness and evenness, there was an increase from site 1 to 3, with a further decrease in sites 4 and 5, agreeing with the habitat diversity values. These patterns are not due exclusively to the fishing gear employed, since site 3 was sampled with both types of equipment. In this site, all values had comparable magnitudes except the number of individuals, which was remarkably higher in the electric fishing samples.
The ANOVA confirmed the increasing pattern of diversity, evenness and richness from site 1 to 3.
For individuals caught with nets, significant differences were observed only for the richness index, because sites 4 and 5 had the lowest numbers of species. The upstream reaches had lower habitat diversity than the downstream ones (Uieda and Barreto, 1999), which may explain the common pattern of higher species richness in lower portions (Gorman and Karr, 1978). Harrel et al. (1967), cited in Peres-Neto (1995), suggest that an increase in species diversity along the river course can occur not only due to an increase in suitable habitats, but also due to a decrease in environmental fluctuations. The more severe and variable conditions in the headwaters require specific adaptations from the organisms and more energy to move against the current and to lessen the likelihood of being swept downstream (Allan, 1997). In more stable sites downstream, where the current is slower or even at a standstill, individuals need less energy to hold their positions (Allan, 1997).
Although there are fewer species in headwater sites, their populations are larger, whereas in the downstream sections the number of species increases and their population densities decrease. The species abundance distribution becomes more homogeneous, leading to higher evenness. This pattern can be seen for sites 1 to 3 (electric fishing) and sites 3 to 4 (gill nets). The same patterns would be expected following the increase in water volume (Garutti, 1988), but in this case the environment would need a variety of physical structures to provide suitable habitats (Gorman and Karr, 1978; Argermeier and Karr, 1983). On the other hand, when the physical structures inside the stream channel are simplified, the increase in water volume can lead to environmental homogenisation, as was observed for sites 4 to 5 in the Passa Cinco stream.
Besides occurrence and number of individuals, species importance can also be defined by weight. These three measures provide different information about the fish assemblage (Ferreira, 2007). Considering only the species captured with electric fishing, Characidium aff. zebra was the most important species in the first three sample sites, occurring in large numbers and with high weight. In addition to C. aff. zebra, other species were important at these sites (Trichomycterus sp. 1, Imparfinis mirini, Cetopsorhamdia iheringi, Bryconamericus stramineus, B. turiuba, Apereiodon ibitiensis, Parodon nasus and Hypostomus strigaticeps). Headwater stream fishes from different families have morphological traits that allow them to better explore these environments (Braga, 2004). Considering the individuals caught with nets (middle-lower portions), H. strigaticeps was the most important species in site 3, and remained important downstream, joined by Odontostilbe aff. microcephala, Astyanax sp. 1 and A. altiparanae. We suggest that the species mean abundance and weight are related to population densities at each site. If so, we agree with Mazzoni (1998) that the structural components of the stream channel promote differential resource availability along the longitudinal gradient that, in turn, is correlated with these densities.
Conclusions
Considering the number of individuals and biomass, as well as the changes in diversity, evenness and richness, it was found that, despite the small extent of the Passa Cinco stream, the environments are well defined and structured into headwater, transition (middle-lower) and mouth portions. This was evidenced by the species distribution and assemblage composition along the gradient.
Figure 3 .
Figure 3. Values of Shannon diversity index for the habitat at each sample site.
Figure 4 .
Figure 4. Mean values (), standard error () and 95% confidence interval (bars) for the Shannon diversity index, Margalef's richness, Pielou's evenness and the number of individuals, considering separately samples from electric fishing and nets.
Figure 5 .
Figure 5. Relationship between weight and number of individuals caught in site 1, considering the individuals caught using electric fishing (for codes see Table 2).
Figure 6 .
Figure 6. Relationship between weight and number of individuals caught in site 2, considering the individuals caught using electric fishing (for codes see Table 2).
Figure 7 .
Figure 7. Relationship between weight and number of individuals caught in site 3, considering the individuals caught using electric fishing (for codes see Table 2).
Figure 8 .
Figure 8. Relationship between weight and number of individuals caught in site 3, considering the individuals caught using nets (for codes see Table 2).
Figure 9 .
Figure 9. Relationship between weight and number of individuals caught in site 4, considering the individuals caught using nets (for codes see Table 2).
Figure 10 .
Figure 10. Relationship between weight and number of individuals caught in site 5, considering the individuals caught using nets (for codes see Table 2).
Table 1 .
Bottom type, marginal vegetation, degree of shading, current type and mean width found in each of the sampling sites of Passa Cinco stream.
Table 2 .
Fish species caught in the Passa Cinco stream for each sample site in decreasing order of abundance.
FV) Source of variation; df) Degrees of freedom; SQ) Sum of squares; QM) Mean square; F) F statistic; p) Probability value.
Gravity-capillary flows over obstacles for the fifth-order forced Korteweg-de Vries equation
The aim of this work is to investigate gravity-capillary waves resonantly excited by two topographic obstacles in a shallow water channel. By considering the weakly nonlinear regime the forced fifth-order Korteweg-de Vries equation arises as a model for the free surface displacement. The water surface is initially taken at rest and the initial value problem for this equation is computed numerically using a pseudospectral method. We study near-resonant flows with intermediate capillary effects. Details of the wave interactions are analysed for obstacles with different sizes. Our numerical results indicate that the flow is not necessarily governed by the larger obstacle.
Introduction
Waves excited by an external force are of great current interest due to the large number of physical applications: for instance, nonlinear electrical lines, superconductive electronics, elementary-particle physics (Peyrard [1994]; Joseph [2016]) and hydrodynamics. Regarding the last, we mention the flow of water over rocks, ship waves (Baines [1995]), and waves generated by storms (Johnson [2012]). In water waves, the external force usually models a pressure distribution or a topographic obstacle.
In the absence of surface tension, the fundamental parameter used for describing the pattern of waves generated by a current-topography interaction is the Froude number F = U₀/√(gh₀). Here, U₀ is the velocity of the uniform stream, g is the acceleration of gravity and h₀ is the undisturbed depth of a shallow water channel. The Froude number is critical when F = 1, i.e., when the linear long-wave phase speed equals the mean flow speed. In the weakly nonlinear regime the forced Korteweg-de Vries (fKdV) model is valid for studying near-resonant flows (F ≈ 1) over obstacles with small amplitudes. A detailed study based on the fKdV equation was first done by Wu & Wu [1982], later by Akylas [1984]; Grimshaw & Smyth [1986]; Wu [1987]; Milewski [2004]; Ermakov & Stepanyants [2019], and more recently by Flamarion et al. [2019] for a vertically sheared current. All these authors considered only one obstacle. Regarding flow over multiple obstacles, Chardard et al. [2011] investigated numerically the stability of solitary waves. Lee & Whang [2015] studied waves trapped between two obstacles: they considered a bottom topography with two bumps and found numerical solutions of the fKdV which remained bouncing back and forth between the obstacles for a certain period of time. Grimshaw & Malewoong [2016] considered a near-resonant flow over two obstacles and described the development of the flow in three stages. The first stage is characterized by the formation of an undular bore over each obstacle; the second is the interaction of the generated waves between the obstacles; and the third is the evolution at large times, when the larger obstacle controls the flow. More recently, these authors studied the interaction of waves generated over two obstacles (bumps and holes) in the near-resonant regime, describing the dynamics of the wave interactions in great detail (Grimshaw & Malewoong [2019]).
When surface tension is included in the problem, an additional parameter becomes fundamental in the study of generated waves, namely the Bond number B = σ/(ρgh₀²), where σ is the coefficient of surface tension and ρ is the constant density of the fluid. Gravity-capillary waves can also be described by the Korteweg-de Vries (KdV) equation. However, the dispersive term in the equation vanishes when B is critical, i.e., B = 1/3. Thus, solitary wave solutions (sech²-like) are no longer appropriate, since the length scale of these waves goes to zero (Falcon et al. [2002]). Studying flows over obstacles under gravity-capillary effects, Milewski et al. [1999] derived a fifth-order fKdV for F ≈ 1 and B ≈ 1/3. They showed that this equation has unsteady solitary wave solutions with small oscillating tails. A numerical investigation of solitary waves and collisions for the fifth-order KdV was done by Malomed & Vanden-Broeck [1996]. They found solitary waves with oscillatory tails and showed that, when two of these waves interact, some regain their shape while others split into several solitary waves of different types.
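Both dimensionless numbers are simple ratios and can be evaluated directly. The sketch below classifies a flow the way the text does; the tolerance used to call a value "near" critical is an illustrative choice, not one taken from the paper.

```python
import math

def froude(U0, g, h0):
    # F = U0 / sqrt(g * h0): stream speed over the linear long-wave speed
    return U0 / math.sqrt(g * h0)

def bond(sigma, rho, g, h0):
    # B = sigma / (rho * g * h0**2): relative strength of surface tension
    return sigma / (rho * g * h0**2)

def regime(F, B, tol=0.05):
    """Classify the flow and the capillary effect (tol is an assumption)."""
    flow = ("near-resonant" if abs(F - 1) < tol
            else "supercritical" if F > 1 else "subcritical")
    capillary = ("intermediate" if abs(B - 1/3) < tol
                 else "strong" if B > 1/3 else "weak")
    return flow, capillary

# For water (sigma ~ 0.072 N/m, rho ~ 1000 kg/m^3), B ~ 1/3 requires a very
# shallow channel, h0 of about 5 mm:
B_water = bond(0.072, 1000.0, 9.81, 0.0047)
```

This also shows why experiments near B = 1/3 are delicate: for water the critical depth is only a few millimetres.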
Recently, Hanazaki et al. [2017] used body-fitted curvilinear coordinates to solve Euler's equations in the presence of an obstacle with a uniform flow and compared the results with the fKdV and the fifth-order fKdV for the resonant flow (F = 1) and intermediate capillary effects (B ≈ 1/3). They observed short-wave radiation when the effects of surface tension are weaker (B < 1/3). Besides, a train of solitary waves propagating upstream radiates short linear waves whose phase speed equals the upstream-advancing speed of the solitary waves. The fifth-order fKdV captured the wave train propagating upstream; however, it predicted waves of longer wavelength, which is natural since KdV-type models are based on a long-wave approximation.
In this paper we numerically investigate the interaction of excited gravity-capillary waves in near-resonant flow over two obstacles for the fifth-order fKdV. More precisely, we focus on the case in which the Froude number is near critical and the capillary effects are intermediate. The problem is studied for obstacles of different sizes. To the best of our knowledge, there are no articles regarding the fifth-order fKdV in the presence of two obstacles. From our experiments, we identify regimes in which the flow is not necessarily driven by the larger obstacle, which differs from the case in which surface tension is neglected (Grimshaw & Malewoong [2016]). Besides, we present in detail the main features of the near-resonant flow.
This article is organized as follows. In section 2 we present the mathematical formulation of the non-dimensional fifth-order fKdV equation. The numerical results are presented in section 3 and the conclusions in section 4.
The fifth-order forced Korteweg-de Vries equation
We consider a two-dimensional, incompressible and irrotational flow of an inviscid fluid with constant density (ρ) in a shallow water channel of undisturbed depth (h₀) and in the presence of a uniform flow (U₀). In addition, the fluid is subject to the gravity force (g) and to surface tension (σ).
In the weakly nonlinear regime, the dimensionless forced fifth-order Korteweg-de Vries (5th-order fKdV) equation is used to describe the flow over small obstacles (Zhu [1995]; Milewski et al. [1999]; Hanazaki et al. [2017]). Here, ζ(x, t) is the free-surface displacement over the undisturbed surface and h(x) is the submerged obstacle. The parameter f represents a perturbation of the Froude number, i.e., F = 1 + εf, and b is a perturbation of the Bond number, B = 1/3 + (1/2)εb, where ε > 0 is a small parameter. The flow is supercritical, subcritical or near-resonant depending on whether f > 0, f < 0 or f ≈ 0. Analogously, the capillary effect is strong, weak or intermediate depending on whether b > 0, b < 0 or b ≈ 0. The 5th-order fKdV equation (1) is solved numerically using a Fourier pseudospectral method with an integrating factor, which avoids numerical instabilities due to the higher-order dispersive term. We consider a computational domain with a uniform grid. All derivatives in x are computed spectrally (Trefethen [2001]). In addition, the time evolution is computed through the fourth-order Runge-Kutta method (RK4). The initial wave profile is always taken at rest (ζ(x, 0) = 0). Since the fKdV fails when the Bond number is critical, we focus exactly on this case (b = 0).
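The scheme just described (spectral x-derivatives, an integrating factor that handles the stiff linear dispersion exactly, and RK4 in time starting from a surface at rest) can be sketched as follows. The model coefficients, the Gaussian obstacle profiles and the placement of the forcing term are illustrative assumptions for the sketch, not the paper's exact equation (1).

```python
import numpy as np

# Assumed generic model (coefficients are placeholders, not the paper's):
#   zeta_t + f*zeta_x + a*(zeta**2/2)_x + b3*zeta_xxx + b5*zeta_xxxxx = h_x
f, a, b3, b5 = 0.0, -1.5, 0.0, 1.0 / 90.0  # b3 = 0 mimics the critical Bond case

Nx, Lx = 512, 400.0
x = np.linspace(-Lx / 2, Lx / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=Lx / Nx)

# Two localized obstacles (Gaussian bumps stand in for the paper's profile)
h = 0.01 * np.exp(-((x + 100) / 25) ** 2) + 0.03 * np.exp(-((x - 100) / 25) ** 2)
h_x_hat = 1j * k * np.fft.fft(h)

# Symbol of the linear part, handled exactly by the integrating factor
Lsym = -1j * f * k + 1j * b3 * k**3 - 1j * b5 * k**5

def rhs(zhat):
    """Nonlinear term plus topographic forcing, evaluated pseudospectrally."""
    z = np.real(np.fft.ifft(zhat))
    return -1j * k * a * np.fft.fft(0.5 * z * z) + h_x_hat

def step(v, dt):
    """Integrating-factor RK4: the stiff dispersive part is integrated exactly."""
    E = np.exp(dt * Lsym / 2); E2 = E * E
    k1 = dt * rhs(v)
    k2 = dt * rhs(E * (v + k1 / 2))
    k3 = dt * rhs(E * v + k2 / 2)
    k4 = dt * rhs(E2 * v + E * k3)
    return E2 * v + (E2 * k1 + 2 * E * (k2 + k3) + k4) / 6

zhat = np.fft.fft(np.zeros(Nx))  # surface initially at rest
for _ in range(100):
    zhat = step(zhat, 0.01)
zeta = np.real(np.fft.ifft(zhat))
```

The integrating factor keeps |exp(dt·Lsym)| = 1, so the fifth-derivative term imposes no step-size restriction; only the weak nonlinearity and the forcing limit the time step.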
Numerical results
In the same fashion as presented in Grimshaw & Malewoong [2019], we consider a bottom topography modelled by two localised obstacles, where ε₁ and ε₂ are the amplitudes of the obstacles, w is their width, and x_a and x_b are their locations. We focus on the cases in which ε₁ and ε₂ are positive and let the parameter f vary. There is a long list of parameters to be considered, so we fix x_a = −100, x_b = 100, and w = 50. This choice of parameters lets us observe waves being generated over the obstacles independently at early times, and later analyse their interactions. A sketch of the physical problem at t = 0 is depicted in Figure 1. Since the current is turned on at t = 0⁺, waves are immediately generated.
In the following subsections we present our results in the non-resonant and near-resonant regimes.
Non-resonant regime
In this regime we let |f | be sufficiently large and investigate how changes of the amplitude of the obstacles affect the flow.
Supercritical regime
In the presence of a single obstacle, the supercritical non-resonant flow is characterized by the formation of an elevation wave over the obstacle and depression solitary waves propagating downstream. Moreover, as t → ∞, the steady state is reached after a train of downstream waves moves away from the obstacle. Radiation of short waves is not observed (Milewski et al. [1999]).
We first consider two obstacles with the same amplitude, ε₁ = ε₂ = 0.01. In this case, we observe depression solitary waves propagating downstream and the formation of two elevation waves over the obstacles. These elevation waves no longer reach a steady state because radiation of short waves is emitted from the right obstacle towards the left one. Part of the radiation is reflected back, remaining between the bumps, and the rest moves away from both obstacles. Since capillary effects are under consideration, short waves travel faster; therefore, the downstream solitary waves are constantly disturbed by the radiation. Figure 2 illustrates this flow motion.

Now we increase the amplitude of the second obstacle by considering ε₂ = 0.03. In this regime the flow is governed by the larger obstacle, as can be seen in Figure 3. At early times an elevation wave forms above both obstacles; as time goes on, a wave train emitted from the larger obstacle swallows the wave over the smaller one, destroying the expected steady elevation wave. Differently from the previous case, depression solitary waves propagate downstream without radiation being observed. Waves are trapped between the obstacles, with part of them being radiated upstream.

Lastly, we choose ε₁ = 0.03 and ε₂ = 0.01. At early times, the dynamics is simply a reflection of the previous case. An elevation wave is generated over the two obstacles, with depression solitary waves emanating from both, the larger ones coming from the larger obstacle. These waves pass the smaller obstacle, which radiates short waves upstream. Once the radiation reaches the larger obstacle, part of it is reflected and the elevation wave is no longer steady. This situation is displayed in detail in Figure 4. We see that radiation moves back and forth between the bumps, with some portion moving away from the obstacles. Although the elevation wave over the smaller obstacle is not steady, it is not destroyed as in the previous case.
Subcritical regime
In the one-obstacle problem, the subcritical non-resonant flow is mainly described by the formation of a depression solitary wave over the obstacle and a wave train propagating upstream. Moreover, as t → ∞, the solution of (1) reaches a steady state, which in this regime is a free-surface depression wave above the obstacle (Milewski et al. [1999]).
To study the two-obstacle problem, we initially fix ε₁ = ε₂ = 0.01. In this scenario the free surface behaves as follows: at early times, a depression wave is generated above both obstacles and wave trains are emanated upstream. Then the wave train generated by the right obstacle passes over the left one, gains kinetic energy and interacts with the other wave train. The depression waves over the obstacles remain intact. The steady state is attained once the upstream wave trains are shed. Figure 5 illustrates this dynamics; we observe that the short wave train generated by the right obstacle overtakes the one from the left.

For ε₁ = 0.01 and ε₂ = 0.03 the flow is quite different, as depicted in Figure 6. Depression solitary waves are now emitted periodically downstream from the larger obstacle. Steady waves no longer occur. An undular bore forms between the bumps, with a wave train ahead of it. This wave train interacts with the elevation wave formed above the smaller obstacle, destroying its shape. As time elapses, an elevation wave rises again, but it is no longer steady because an undular bore remains between the obstacles.

Now we reverse the roles of the obstacles by taking ε₁ = 0.03 and ε₂ = 0.01. As in the previous case, the larger obstacle periodically generates depression solitary waves which propagate downstream. This generation radiates small short waves upstream. The depression solitary waves have enough energy to overcome the small obstacle: although their shape changes during the passage over the second obstacle, they regain their form after the interaction. This dynamics is displayed in Figure 7. The initial steady elevation waves above the obstacles no longer persist.
Near-resonant regime
In this regime we let f be close to 0 and consider the same types of obstacles as before. As we will show, the wave interaction is somewhat more nonlinear, so, differently from the non-resonant regime, it is difficult to identify a pattern in the generated waves.
Supercritical regime
We start by choosing ε₁ = 0.01 and ε₂ = 0.01. In this case, we observe the formation of an undular bore above the left obstacle. It propagates very slowly upstream, led by a series of wave trains. In the region between the obstacles, at early times, wave trains propagate upstream. Part of the wave trains is reflected by the first obstacle and part of it radiates away, so radiation is observed between the bumps at large times. Above the right obstacle, we notice an unstable elevation wave. Figure 8 pictures this scenario. Although the dynamics between the obstacles is very unpredictable, the formation of the undular bore is very clear.
When the second obstacle is larger, ε₁ = 0.01 and ε₂ = 0.03, the flow structure is very distinct. The formation of an undular bore is observed above both obstacles; the one over the small obstacle propagates both upstream and downstream, while the other propagates upstream. As a consequence, wave trains accumulate between the bumps. Later, the accumulated wave trains are compressed and the undular bore collapses. After a while, waves start moving over the bore. Figure 9 depicts this situation. Around t = 5800 the undular bore starts deteriorating and waves run over it. It is important to notice that it is not clear which obstacle controls the flow at large times: looking at Figure 9 for t > 8200, we cannot tell which obstacle is larger, in contrast with the supercritical non-resonant regime (see Figure 3). Now let us consider the case where the first obstacle is larger, namely ε₁ = 0.03 and ε₂ = 0.01. This case is much clearer than the previous one: we see the formation of an undular bore above the larger obstacle and radiation between the obstacles, which is expelled downstream. Figure 10 shows this behaviour.
Subcritical regime
Our study starts by considering obstacles with the same amplitude (ε₁ = ε₂ = 0.01). Figure 11 shows the free-surface evolution for f = −0.1. At early times, the right obstacle emits depression solitary waves which travel downstream, and an undular bore forms above it; the left obstacle behaves likewise. This differs from the non-resonant subcritical case, in which a steady state is reached (see Figure 5). We point out that the collision between the depression solitary waves generated by the left obstacle and the bore over the right bump radiates short waves upstream. Once the depression solitary waves reach the right obstacle, a series of wave collisions happens and eventually they escape.
For bumps with amplitudes ε₁ = 0.01 and ε₂ = 0.03 an interesting scenario arises, as can be seen in Figure 12. At early times, the left obstacle emits depression solitary waves which travel downstream; then this mechanism ceases. Meanwhile, depression solitary waves are emitted frenetically from the second obstacle, producing a series of collisions. It is worth noting that, as time elapses, the depression solitary waves generated by the left obstacle remain trapped, bouncing between the bumps. When ε₁ = 0.03 and ε₂ = 0.01, both obstacles generate depression solitary waves periodically, the larger one more rapidly. Since the flux of wave generation from the large obstacle is high, wave collisions dominate the dynamics. More details of this case are given in Figure 13.
Conclusion
In this paper we have investigated capillary-gravity flows over two obstacles. Using a pseudospectral numerical method, we showed that the flow is not necessarily governed by the larger obstacle, as it is in the absence of surface tension. In the supercritical near-resonant case, the flow is mainly described by the formation of an undular bore; in certain regimes, the undular bores generated by the two obstacles collide and their structure collapses. In the subcritical near-resonant case, the flow is mainly characterized by the periodic generation of depression solitary waves that propagate downstream. Besides, we also found cases in which depression solitary waves remain trapped between the bumps, bouncing back and forth.
The Relationship Between Student Academic Achievements and Their Thinking Style
This research was carried out in Pekanbaru city, Riau Province, Indonesia. Its aim is to see whether there is a correlation between thinking styles and student achievement in Arabic classes. A questionnaire was used to gather answers from 150 students in the Department of Islamic Education at Riau Islamic University's Faculty of Islam. The observed decline in achievement is a crucial indication of the need to develop teaching and learning methods in the classroom covering academic, social, and behavioral elements. Student accomplishments, as well as steps to improve student engagement, motivation, and success, must be addressed without delay. At present, personality differences and student abilities are treated as less essential: teachers apply their own thinking styles without considering students' cognitive styles, whereas teaching and learning outcomes result from the interplay of teachers' and students' thinking styles. The key to thinking style and success in the classroom is each individual teacher's and student's thinking style. The Sternberg-Wagner (1997) Thinking Styles Inventory was used to assess the thinking styles of the students in this sample. Descriptive statistical analysis (mean and standard deviation) was used to describe the respondents' profile and to address the research questions. The Monarchic, Oligarchic, Global, Local, External, and Conservative thinking styles were found to be the most prevalent, meaning that these are the dominant styles observed most often in class. The findings suggest that Sternberg's thinking styles are not being used optimally in the classroom because students and teachers do not fully comprehend each individual's thinking style, so cultivation is needed in order to harmonize teaching and learning thinking styles.
Although the local thinking style shows only a low correlation value, it is quite prevalent, indicating that the local thinking style is the one most commonly used in class among the dominant thinking styles.
Introduction
Changes in the school system that emphasize analytical and innovative reasoning abilities have, in recent years, left many students behind. Excellent academic accomplishments are not by themselves indicative of a programme that relies on the development of critical thinking skills. So, what is the current state of the country's school system?
In schools, analytical and creative thinking skills, cognitive capabilities, scientific skills, and logical principles are also emphasized. The majority of these features relate to Sternberg's (1997) thinking styles, which were designed to help students better grasp their teaching and learning processes. For example, proficiency in the Arabic language, with its systematic rules and procedures, necessitates systematic analysis. Aligning the thinking habits of instructors with those of pupils, or vice versa, will help improve these skills.
The current teaching and learning process focuses more on preparing students for exams, which has a negative effect, especially on student expectations of teaching and learning (Halim et al., 2002). According to Almulla (2017), students receive mostly note-taking and lecture-style instruction from their Arabic language instructors. Students' and teachers' thinking styles are thus stunted, leaving them unable to adapt to specific circumstances.
Despite the diversity of teacher education, research findings reveal that most teachers continue to rely on conventional teacher-centered teaching methods, especially when it comes to managing the teaching and learning process in the classroom (Phang, 2014; Meerah, 2009), even though Arabic teachers' ingenuity with different teaching techniques and methods is their strength in aiding students' comprehension of Arabic (AlKhamisi, 2019). Teachers are encouraged to motivate students to understand. The conventional approach prevents students from applying their imagination and ideas to a topic and often reduces mastery of the Arabic learning process. Strictly speaking, teachers must attempt something new or innovative in order to change circumstances in which students are less involved in Arabic classes and to raise student motivation to learn Arabic.
Concept of Thinking Style
"Thinking for a moment is better than seventy years of voluntary (sunnah) prayer." (Hadith of the Prophet). "... look at the moon, look at the sky, look at the stars and think ..." (the Word of Allah s.w.t.). The verses above from the Word of Allah Subhanahu Wata'ala and the Hadith of the Prophet Muhammad Sallallahu Alaihi Wasallam demonstrate that all mankind requires thinking activities. Thinking is required in Islam, and it is something humans should do in every activity and action they take. The privilege of humans (DeBono, 2018) stems from the fact that humans can solve problems by looking at a situation from various perspectives while following their logic. The human mind is a source of intellectual knowledge that generates knowledge through the process of thought (Omar, 2006).
Some psychologists define thinking by relating it to the problem-solving process (Meyer, 1977; Chaffe, 1988; Philips, 1997). Thinking is a unique and complex process that involves mental operations (Bourne et al., 1971): using the mind to understand a problem, expressing ideas or creations (Fraenkel, 1980), and making reasonable judgments to reach decisions or solve the problem. Several psychologists, including Piaget (1896-1980), Vygotsky (1896-1934), Meyer (1977), Beyer (1988), and Perkins (1988), discuss the concept of thinking, according to the literature review.
According to Slavin (2006), Piaget established four stages in his theory of human development, namely the sensorimotor stage (0-2 years), the preoperational stage (2-7 years), the concrete operational stage (7-12 years), and the formal operational stage (12 years and up). Piaget claims that learning proceeds through these stages on the basis of maturity, discovery, and social communication, occurring through the processes of assimilation and accommodation. In other words, through the interaction of assimilation, adaptation, accommodation, and equilibration, as well as schema development, each individual constructs his own meaning. Human thinking, on this view, develops in tandem with human development through the processes mentioned above.
Maree & De Boer (2003) hold that social and cultural influences shape one's cognition and that learning is formed by these influences. This means that, since children learn through their own social and cultural interactions, socio-cultural influence is the most important factor in the development of their intelligence. The Socio-cultural Theory of Cognitive Development likewise emphasizes the role of adults in providing assistance and guidance (scaffolding) to children in order to help them progress through their thinking stages. This process of assisting and guiding students follows a hierarchy referred to as the zone of proximal development: the gap between what a student can do alone and the stages he or she can complete with help.
Figures like Piaget, Vygotsky, and Meyer have discussed the stages of human development in conjunction with the development of thinking in their explanations. In this study, the term "thinking" refers to the cognitive processes that occur in the minds of third-semester Arabic students. The goal of this study is to determine which thinking style is most prevalent among students and how it relates to student achievement in Arabic classes.
Thinking styles, according to Spearman (1927), are distinguished by the tendency of mental processes to apply consistently during long activities. These styles, he claims, can be observed at any time and in any situation. A style can then be nurtured and developed through the activities carried out, as evidenced by its frequent application to solve problems.
A thinking style, according to Albrecht (1983), is a particular way of processing information, gaining knowledge, forming ideas, suggesting values, solving problems, and expressing oneself. The thinking style he recommends is based on mental processes from the concrete left brain, abstract left brain, concrete right brain, and abstract right brain, which are the four dimensions of cognitive tendencies.
Meyer, Berliner, and Calfee describe this view of Spearman (1927) and Albrecht (1983) as a consistent disposition that is eventually expressed as behavior.
Sternberg (1997) proposes a thinking style theory in which humans are viewed as creatures who have the ability to choose and organize their lives. Human thought exerts control, in the same way that the government regulates the way of life of individuals in an organization. The five dimensions of function, form, stage, scope, and tendency are used to classify 13 different types of thinking styles. Each person will act in accordance with his or her preferred mode of thought.
When learning, making and receiving things, responding, completing tasks, and making decisions, this behavior is displayed.
Individual thinking styles can be formed and strengthened through socialization, according to Sternberg (1997), and environmental factors influence their development. Five variables influence the development of thinking styles.
a. Culture; a culture that promotes and appreciates a style will encourage it to be relatively developed in the eyes of the community's experts. Traditional values are valued in Japan, for example. Thus, executive and conservative styles have emerged.
b. Gender; it was customary for men to be the regulators and women to follow them. This usually encourages men to think legislatively and women to think executively. At times, however, this pattern may shift.
c. Age; as people get older, their thinking styles change. Young children have a legislative style. When they are in middle school, however, they tend toward an executive thinking style, because their environment is more structured and they must follow the instructor's rules and directions. At the university level, however, there is a proclivity for legislative, judicial, and liberal thinking styles. Individuals' interactions with their environment can then influence their personal style.
d. Parents and teachers; children are more likely to imitate what they see rather than what they hear. Intentionally or unintentionally, parents and teachers use their styles to influence their children. As a result, children's thinking styles are heavily influenced by parenting and teaching styles.
e. Religion; a person's thinking style is also influenced by their religion or beliefs. Judaism, for example, promotes question and answer. The beliefs one holds have an impact on how a person's style develops.
Sternberg's statement is similar to Piaget's Theory of Cognitive Development and Vygotsky's Socio-cultural Theory of Cognitive Development, which state that biological and environmental factors influence the development of human thought. People can cultivate or discourage certain styles based on the activities or socialization processes they go through. However, each style has its own set of strengths and weaknesses that determine whether it is appropriate for a given situation.
According to the preceding statement, thinking styles are conceptualized in various ways. There are some who believe it is linked to mental processes and information processing in order to promote the formation of ideas or actions. This study looks at the gender differences in thinking styles among third-semester Arabic students.
Method
Gay and Diehl (1992) hold that the more samples taken, the more representative the results become and the more they can be generalized. According to the sample size table of Krejcie & Morgan (1970), the adjusted sample size for a population of 240 people is 148. The sample was enlarged to 150 people to allow an even distribution across the classes.
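The Krejcie & Morgan (1970) table follows from a closed-form formula; a short check (a sketch using the constants χ² = 3.841, P = 0.5 and d = 0.05 from that paper, which are not stated here) reproduces the sample size quoted above:

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Required sample size s for a population of size N:
    s = chi2*N*P*(1-P) / (d^2*(N-1) + chi2*P*(1-P))."""
    return math.ceil((chi2 * N * P * (1 - P)) /
                     (d ** 2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(240))  # 148, the adjusted sample size for N = 240
```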
Descriptive analysis is a technique for analyzing and explaining quantitative data (Gaur, 2009). The frequency, mean, standard deviation, and percentage of each item carried out, as well as the respondents' backgrounds, were determined using descriptive analysis.
Correlations are used to look at how two variables are related in a linear way. A high "r" value indicates a strong relationship between the two variables being studied, and vice versa.
Dominant thinking style and thought style profiles of students
To determine the profile of the students' thinking styles and their dominant thinking style, descriptive analysis is used, expressed as means and standard deviations. The five dimensions of thinking styles (function, form, stage, scope, and tendency) are used to describe the students' thinking style profile (Table 4.2). The mean of students' thinking styles is derived from their responses to items in the Thinking Styles Inventory, and the dominance threshold is derived from the inventory manual (Sternberg, 1997). If a student's thinking style has a mean value higher than the defined dominance mean, that style is considered dominant. The Monarchic, Oligarchic, Global, Local, External, and Conservative thinking styles are the most prevalent, while the Executive, Legislative, Judicial, Hierarchical, Anarchic, Internal, and Liberal thinking styles are not dominant. Decisions on dominance are made using the Sternberg (1997) scale.
Are there gender differences in students' thinking styles?
Based on gender, there are significant differences (p < 0.05) in the Global (F = 8.661, sig = 0.016), External (F = 5.220, sig = 0.029), and Conservative (F = 4.189, sig = 0.044) dominant thinking styles. In each of these dominant thinking styles, female students have a higher mean than male students: Global (mean = 3.57, sd = 0.531), External (mean = 3.84, sd = 0.634), and Conservative (mean = 3.76, sd = 0.620). As a result, the null hypothesis that there is no significant gender difference in students' Global, External, and Conservative thinking styles is rejected: there is a significant difference between male and female students in the means of these styles.
There was no significant gender difference in the mean of the remaining dominant thinking styles: Monarchic (F = 0.109, sig = 0.722), Oligarchic (F = 0.422, sig = 0.616), and Local (F = 0.942, sig = 0.342). In the Oligarchic (mean = 3.42, sd = 0.610) and Local (mean = 3.56, sd = 0.638) styles, female students have a higher mean than male students. The null hypothesis that there is no significant gender difference in students' Oligarchic and Local thinking styles is therefore accepted: there is no significant difference between male and female students in the means of these styles. The relationship between the dominant thinking styles and students' Arabic achievement was investigated using correlation analysis. According to Table 4.4, the Monarchic (r = 0.170, sig = 0.002), Oligarchic (r = 0.227, sig = 0.000), Global (r = 0.179, sig = 0.001), Local (r = 0.228, sig = 0.000), External (r = 0.142, sig = 0.016), and Conservative (r = 0.129, sig = 0.016) thinking styles all correlate significantly with achievement; the null hypothesis that there is no link between dominant thinking styles and student achievement is therefore rejected.
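The correlation analysis above uses the Pearson product-moment coefficient r; a minimal sketch (with hypothetical scores, not the study's data) shows how it is computed:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: inventory means vs. Arabic course scores
style = [3.1, 3.4, 3.6, 2.9, 3.8, 3.2]
grade = [70, 74, 78, 68, 80, 72]
print(round(pearson_r(style, grade), 3))  # 0.995 for this illustrative sample
```

The significance values reported in Table 4.4 would additionally require converting r to a t statistic with n − 2 degrees of freedom.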
Conclusion
The mean values obtained show that each student has a different tendency to use each style. According to the study, students performed all styles at various stages, with the Monarchic, Oligarchic, Global, Local, External, and Conservative thinking styles in use. Dominance of the monarchic thinking style identifies individuals with a high focus on, and commitment to, one task; these students are more likely to finish a task in a set amount of time in order to meet the teacher's expectations.
The learning environment affects how the various thinking styles develop. The practices used in the teaching and learning process will guide and affect students' thinking styles; cooperative learning activities, for example, create a learning atmosphere that offers room for students to socialize. This not only encourages students to communicate with one another, but also allows them to produce thoughts, share their views, and develop something new.
The findings of this study show a significant but weak positive correlation between thinking styles and student success, indicating that Sternberg's thinking styles have not yet been applied in the classroom. Work is needed to implement these thinking styles so that cohesion exists between students and teachers in the teaching and learning process. Accordingly, lecturers involved in teaching and learning, and in the evaluation of student success, should use a variety of teaching methods and follow a convergent approach in order to help students develop more memorable ways of learning.
Purification and Partial Characterization of α-Amylase Produced by a Thermo-Halophilic Bacterium Isolate PLS 75
Bio-based industries require enzymes that are stable in a broad range of environmental conditions. Extremophiles have attracted increasing interest as a source of such enzymes, one of which is α-amylase. This study aimed to purify and characterize the α-amylase produced by a thermo-halophilic bacterium, PLS 75, isolated from underwater fumaroles. Ammonium sulfate precipitation showed that the highest specific α-amylase activity (21.7 U/mg) was obtained at the 40-60% saturation level, with a purity 7.7-fold that of the crude extract and a 16.2% yield. Further purification using DEAE Sepharose column chromatography increased the enzyme purity to 11.1-fold that of the crude extract with a 7.1% yield. The specific activity after column chromatography was 31.3 U/mg. The pure enzyme had a low molecular weight of 14 kDa. The enzyme showed the highest activity at 80 °C and pH 5. The activity increased to 126% in methanol, while it decreased in ethyl acetate and chloroform. With its low molecular weight, activity in acidic conditions, and stability in polar and non-polar solvents, this α-amylase may be suited to specific industrial needs.
INTRODUCTION
Enzymatic bioprocesses in industries have continued to develop in the last two decades because enzyme-catalyzed reactions are specific, selective and easily controlled. The products are also easily purified and the cost of waste processing is lower (Schmid et al., 2002). One of the important enzymes used in industries is α-amylase, which contributes 30% of the global enzyme market (Sivaramakrishnan et al., 2006). α-amylases catalyze the hydrolysis of α-1,4 glycosidic bonds in starch molecules and produce dextrins, oligosaccharides and glucose with various chain lengths (Kuriki and Imanaka, 1999). α-amylase has a very broad spectrum of applications, including the starch, bioethanol, food, detergent, textile and paper processing industries (Rana et al., 2013). The enzymes can be produced from various sources such as plants, animals and microorganisms, but bacteria remain the main source (Sivaramakrishnan et al., 2006).
Industrial processes sometimes require extreme temperature and pH; thus the use of extremozymes that are active in extreme conditions is preferred (Elleuche et al., 2014). While the exploration of new microorganisms from extreme environments to produce more stable enzymes has been done for many years, studies to find new α-amylases continue to be carried out (Krüger et al., 2018). Research to obtain new α-amylases with better stability is often done by isolating extremophiles from various sources, such as hot springs (Sudan et al., 2018; Wu et al., 2018), the deep sea (Jiang et al., 2015), and even honey (Du et al., 2018). Various studies have also been done to improve the stability of the enzyme through genetic engineering, immobilization, chemical modification and protein engineering (Dey et al., 2016).
Owing to the importance of finding new variants of the enzyme, here we purified and characterized the α-amylase produced by an extremophilic bacterial isolate from underwater fumaroles in the Pria Laot Sabang area, Weh Island, Indonesia. The isolate was thought to have unique metabolisms and produce stable metabolites, as it was isolated from an environment with high temperature and salt concentration. Purification was carried out by ammonium sulfate precipitation, followed by anion exchange chromatography, and the molecular weight was determined. The activity of the pure enzyme was characterized at various temperatures, pH values and organic solvents. The results of this study may be used as the basis of further research to make the enzyme applicable on an industrial scale.
Microorganism
The microorganism used in this study was a bacterial isolate, previously isolated from a shallow underwater fumarole in the Pria Laot Sabang area, Weh Island, Indonesia, collection number 75 (hereinafter referred to as PLS 75). The isolate was a stock culture of the Biochemistry Laboratory of the Faculty of Mathematics and Natural Sciences, Syiah Kuala University. PLS 75 was initially screened on Thermus medium and stored in glycerol. Phenotypic identification showed that PLS 75 belongs to the genus Bacillus.
Production of α-amylase
α-amylase from PLS 75 was produced in a minimal medium with a composition of 0.2% NaCl, 0.4% yeast extract, 0.8% peptone and 0.25% glucose, dissolved in sterile sea water. In addition, 1% starch was added as the inducer. Incubation was carried out at 60 °C and 150 rpm for 30 hours. The crude enzyme was obtained from the supernatant by centrifugation of the fermentation broth at 10000×g for 20 minutes.
Determination of α-amylase activity and protein concentration
The activity of α-amylase was determined using the dinitrosalicylic acid method, following the standard procedure (Miller, 1959). The method measures the amount of reducing sugars released from 1% starch by the enzyme activity. Calculation was based on a standard curve of glucose (1-5 mM). One unit (U) of α-amylase activity is defined as the amount of enzyme that produces 1 µmol of glucose per minute under the reaction conditions. Protein concentrations were determined by the Bradford method (Bradford, 1976), using a standard curve of BSA (100-1000 mg/mL). The protein concentrations were used to determine the α-amylase specific activity, which is the ratio of α-amylase activity to protein concentration.
Purification of α-amylase
The crude enzyme in the supernatant was precipitated using ammonium sulfate at various concentrations (each precipitate hereinafter referred to as a Fraction), followed by DEAE Sepharose purification. Ammonium sulfate was added to the supernatant at cold temperature to reach saturation levels of 0-20%, 20-40%, 40-60%, 60-80% and 80-100% (Scope, 1982). The mixture was left overnight at cold temperature and subsequently centrifuged at 10000×g for 15 minutes. The precipitate was then dialyzed using 20 mM Tris-HCl buffer pH 8, which was replaced periodically until ammonium sulfate was no longer detected in the buffer solution upon addition of 1 mL of 0.5 M BaCl2 and 1 mL of 0.1 N HCl. Activities and protein concentrations of α-amylase in each fraction were determined as described above. The fraction with the highest specific activity was purified further. All fractions were subjected to SDS-PAGE.
Further purification was carried out using DEAE Sepharose fast-flow anion exchange chromatography on the fraction with the highest α-amylase specific activity. Before separation, the column was equilibrated with 0.01 M Tris-HCl buffer pH 7. Approximately 5 mL of enzyme sample was loaded onto the column and eluted using 0.01 M Tris-HCl buffer pH 7.0 containing a salt gradient of 0.15 M, 0.3 M, 0.6 M and 1.0 M NaCl. Elution was conducted at a flow rate of 0.9 mL/minute and every 2 mL of eluent was collected as a fraction. The activity and protein concentration of the α-amylase in each fraction were then determined.
SDS-PAGE and zymography
Crude extract, ammonium sulfate precipitate and DEAE Sepharose purification fractions with the highest specific activity were subjected to SDS-PAGE. Concentrations of the separating gel and stacking gel were 12% and 5% (w/v), respectively. Electrophoresis was carried out in 0.05 M phosphate buffer pH 7, for 2 hours at 120 volts and 20 mA. After separation, the gel was stained with Coomassie Brilliant Blue R250 or a Pierce® Silver Staining kit.
The enzyme with the highest specific activity after DEAE Sepharose purification was also subjected to zymography by renaturing it (without staining) using 2.5% Triton-X in 0.05 M phosphate buffer pH 7.0 for 1 hour. The gel was then soaked in 1% starch solution in 20 mM phosphate buffer pH 7.0 for 30 minutes at the optimum temperature of α-amylase activity. The gel was then immersed in Lugol's solution for 15 minutes. The molecular weight of the α-amylase was determined by comparing the clear band on the gel to the markers.
Characterisation of α-amylase activity
Characterization of α-amylase activity was carried out at various temperatures, pH values and organic solvents. The effect of temperature on α-amylase activity was studied at 60, 70 and 80 °C in phosphate buffer pH 7. The effect of pH on α-amylase activity was examined at pH 5 (0.2 M sodium acetate), pH 7 (0.2 M phosphate) and pH 9 (0.2 M glycine-NaOH) at the optimum temperature. Meanwhile, the effect of organic solvents on α-amylase activity was examined by mixing methanol, ethyl acetate, chloroform or n-hexane with the substrate to give a 50% (v/v) solution, incubated at the optimum temperature and pH.
Ammonium sulfate precipitation
Purification of α-amylase is often done by a combination of several methods. The initial step is mostly done by salt precipitation, followed by chromatography methods (Wu et al., 2018; Du et al., 2018; Sudan et al., 2018). Of the five fractions after ammonium sulfate precipitation, Fraction 2 produced the highest activity (42.4 U/mL) and protein concentration (2.09 mg/mL). However, the highest specific activity (21.7 U/mg) was observed in Fraction 3 (Table 1).
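Specific activity is the ratio defined in the Methods (activity per mg of protein); applying it to the Fraction 2 values quoted above shows why that fraction trails Fraction 3 despite its higher raw activity (the function name is ours, for illustration):

```python
def specific_activity(activity_u_per_ml, protein_mg_per_ml):
    """Specific activity (U/mg) = enzyme activity (U/mL) / protein (mg/mL)."""
    return activity_u_per_ml / protein_mg_per_ml

# Fraction 2 (Table 1): highest activity and protein concentration
print(round(specific_activity(42.4, 2.09), 1))  # 20.3 U/mg, below Fraction 3's 21.7 U/mg
```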
The concentration of ammonium sulfate needed to produce the optimum α-amylase specific activity varies. Precipitation of α-amylase is commonly carried out at concentrations of 60-80% (Al-Quadan et al., 2011; Sudan et al., 2018), although one α-amylase precipitated at a salt concentration of only 30% (Wu et al., 2018). The surface of proteins from thermophilic bacteria contains plenty of charged residues, so salts exert a screening effect on the solubility of the proteins. The solubility of some proteins will increase at low salt concentrations (salting in), but they precipitate as the concentration increases (salting out). The salt concentration thresholds for salting in or salting out differ among proteins, depending on the type and amount of amino acids on the protein surface (Hiteshi and Gupta, 2014). This explains why, despite having the highest activity, Fraction 2 has a lower specific activity than Fraction 3.
Although the differences in α-amylase specific activity among Fractions 2-5 were insignificant, Fraction 3 was chosen for further purification using ion exchange chromatography. This selection was supported by the SDS-PAGE result: the crude extract, Fraction 1, and Fraction 5 showed several bands at very low concentrations; Fraction 2 and Fraction 4 also produced several bands, but at greater concentrations; meanwhile, Fraction 3 showed one distinct band at 10-15 kDa with fewer proteins of other molecular weights (Figure 1, lane 5).
Purification with DEAE Sepharose
The fractions after DEAE Sepharose purification did not show significant differences in α-amylase activity. Except for the flow-through, activity ranged from 20.1 to 35.7 U/mL. Adjacent fractions with similar activity values in the same salt gradient were combined and the activity was determined. Combined fractions were labelled A-I, and Fraction H (single fraction 41) showed the highest activity of 35.7 U/mL (Table 2). Fraction H was then used to characterize the α-amylase.
Evaluation of the purification steps
The comparison of the α-amylase across all purification steps showed that precipitation using ammonium sulfate (40-60%) gave a purity of 7.7-fold over the crude extract, with a yield of 16.2%. Further purification using DEAE Sepharose (Fraction H) produced an enzyme with a purity of 11.1-fold over the crude extract, but with only 7.1% yield (Table 3).
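The fold-purification and yield figures in Table 3 follow from simple ratios; a sketch (with the crude specific activity back-calculated from the 7.7-fold figure, since it is not quoted directly here) reproduces the DEAE Sepharose value:

```python
def purification_fold(specific_activity, crude_specific_activity):
    """Purity relative to the crude extract."""
    return specific_activity / crude_specific_activity

def yield_percent(total_activity, crude_total_activity):
    """Recovered activity as a percentage of the crude extract's."""
    return 100 * total_activity / crude_total_activity

crude = 21.7 / 7.7                               # ≈ 2.8 U/mg, back-calculated
print(round(purification_fold(31.3, crude), 1))  # 11.1-fold after DEAE Sepharose
```

That the two independent fold figures imply the same crude specific activity is a useful internal consistency check on the table.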
The purity and yield of α-amylases from various microorganisms vary despite similar purification steps. Purification of α-amylase from G. thermoleovorans using Q-Sepharose FF anion exchange chromatography gives a purity of 8.3-fold over the crude extract with 9.6% yield (Finore et al., 2011). Meanwhile, α-amylase from Nesterenkonia sp. strain F purified with similar methods has a purity of 5.77-fold with 8% yield (Shafiei et al., 2010). Occasionally, a combination of chromatography methods is needed to increase the enzyme purity. For example, a combination of exclusion gel (Sephadex G100) and ion exchange (DEAE Sepharose) chromatography increases the purity by about 30% (Al-Quadan et al., 2011), while the combination of Q Sepharose and Superdex 75PG increases the purity eightfold (Sudan et al., 2018). This, again, may occur due to differences in the interactions of the purified protein with the substance used for purification, in this case ammonium sulfate and the column matrix. The structure of proteins is largely determined by their constituent amino acids, and differences in the type and amount of amino acids on the protein surface will certainly influence the interaction with the purifying materials.
Molecular weight of α-amylase from PLS 75
SDS-PAGE results indicated that Fraction H had a protein band of about 14 kDa. Zymography produced a single clear zone around the same molecular weight, indicating the presence of α-amylase activity and that the enzyme is a monomeric protein (Figure 3). A majority of α-amylases are reported to have molecular weights greater than 20 kDa (Mehta and Satyanarayana, 2016), and enzymes with low molecular weight are rare. The relatively low molecular weight of the α-amylase from PLS 75 is attractive, as it could simplify immobilization and protein modification for better enzyme stability and performance.
Effect of temperature, pH and organic solvent
The effect of temperature on the activity showed that the α-amylase from PLS 75 is thermostable, with notable activity in the range of 60-80 °C (activity values of 24.5-31.7 U/mL; Figure 4A). Maximum activity was observed at 80 °C (30.7 U/mL). Activity above 80 °C was not measured, so the optimum temperature could not be confirmed. Activity at 60 °C was still around 80% of that at 80 °C. The optimum temperature for enzyme activity is closely related to the environmental conditions from which the producing microorganism was isolated. For example, the activity of α-amylase from a Geobacillus sp. isolated from sub-seafloor sediments is optimum at 60-65 °C (Jiang et al., 2015), while α-amylase from B. mojavensis isolated from a hot spring has optimum activity at 80-90 °C (Sudan et al., 2018; Wu et al., 2018).
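The "around 80%" figure follows directly from the activities quoted; expressing activities relative to the observed maximum (a trivial sketch, function name ours):

```python
def relative_activity(activity, max_activity):
    """Activity as a percentage of the maximum observed activity."""
    return 100 * activity / max_activity

# 24.5 U/mL at 60 °C vs. the 30.7 U/mL maximum at 80 °C
print(round(relative_activity(24.5, 30.7)))  # 80 (%), as stated in the text
```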
The effect of pH on the α-amylase activity was tested at three pH values representing acidic (pH 5), neutral (pH 7) and alkaline (pH 9) conditions. The highest α-amylase activity was observed at pH 5 (33.6 U/mL). Activity at pH 7 was about 9% lower than that at pH 5, while activity at pH 9 (20.0 U/mL) was still around 63% of the maximum activity (Figure 4B). The low activity of α-amylase from PLS 75 at high pH indicates that alkaline conditions cause changes in the ionic interactions that affect the protein structure, so the catalytic reaction cannot proceed properly.
Acidic α-amylase is not uncommon, and several studies have reported α-amylases that are active under acidic conditions. For example, the optimum activity of α-amylase from Bacillus strain HUTBS26 is observed at pH 4.4 (Al-Quadan et al., 2011), from Alicyclobacillus sp. A4 at pH 4.2 (Bai et al., 2011), from G. thermoleovorans at pH 5.6 (Finore et al., 2011), and from B. licheniformis B4-423 at pH 5.0 (Wu et al., 2018). Acidic α-amylases are widely used in starch processing industries because of their thermoacidophilic properties and high conversion rates (Homaei et al., 2015). Starch processing is carried out in several steps, including liquefaction and saccharification. Liquefaction is done to reduce starch viscosity and usually uses amylases that are active at high temperatures and neutral pH. Meanwhile, the saccharification step releases reducing sugars and is carried out enzymatically at pH 4.2-4.5. Therefore, the pH of the liquefied starch must be reduced before the saccharification process starts. From a production perspective, this step is very time consuming and increases costs (Sharma and Satyanarayana, 2013). Therefore, α-amylase from PLS 75, which has optimum activity at low pH, may be beneficial for use in the starch industry.
As PLS 75 was a microorganism isolated from the sea, its metabolites are expected to be more adaptive to high salinity than those from mesophiles. Adaptation to high salt concentration correlates with the ability to catalyze reactions in organic solvents (Shafiei et al., 2010). In this study, the polarity of organic solvents did not correlate with the increase or decrease of enzyme activity. The polar solvent (methanol) increased the α-amylase activity, while the non-polar solvent (n-hexane) maintained it: the addition of methanol increased activity to 126% compared to the control, while n-hexane did not cause a change in the activity. In contrast, ethyl acetate (semi-polar) and chloroform (non-polar) reduced the activity to 46% and 59%, respectively (Figure 4C).
Comparison with the activity of α-amylases from other sources also did not show a linear correlation. For example, the activity of the enzyme from G. thermoleovorans in non-polar solvents is greater than in polar solvents (Finore et al., 2011). Furthermore, the activities of α-amylase from B. tequilensis are comparable when tested in methanol, n-hexane and benzene, with relative activities of 102%, 133% and 99%, respectively (Tiwari et al., 2014). In contrast, α-amylase from G. thermoleovorans K1C is tolerant of acetone and benzene, but its activity decreases in ethanol and methanol (Sudan et al., 2018). Metabolites from extremophiles require salts to maintain their structure at high temperatures, extreme pH, and in organic solvents (Sinha and Khare, 2014). The stability of the structure is contributed by the negative surface of the proteins due to the presence of acidic amino acids, especially in halophilic enzymes (Elcock and McCammon, 1998). Nevertheless, there are enzymes from halophiles that still show high activity in the absence of salts, and some are stable without excessive negative amino acids on the surface of the structure (Tan et al., 2008). To sum up, the α-amylase from PLS 75 is distinct from other reported enzymes. Having a low molecular weight of about 14 kDa, which is rarely reported, it could be more easily immobilized to improve its stability for longer use. Its halophile origin is reflected in the enzyme, as it showed stability in polar and non-polar solvents and still had considerable activity in a semi-polar solvent. As it was reasonably stable across a moderate range of temperatures and at acidic pH, it would be a good candidate for use in the starch liquefaction industry.
CONCLUSIONS
The SDS-PAGE result showed that α-amylase from PLS 75 is a small protein of about 14 kDa. This enzyme has good stability because it is active at high temperatures (60-80 °C), at acidic to neutral pH, and in the presence of methanol. We suggest that a complete characterization of the α-amylase should be done, for example by testing the substrate preference, as well as adding metal ions and inhibitor molecules. Enzyme kinetics also needs to be investigated for the determination of K_M and V_max.
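The suggested determination of K_M and V_max is commonly done by fitting initial rates to the Michaelis-Menten model, for example via the Lineweaver-Burk (double-reciprocal) linearization 1/v = (K_M/V_max)(1/[S]) + 1/V_max. A sketch on synthetic data; the kinetic constants below are invented for illustration, not measured for this enzyme:

```python
def michaelis_menten_fit(s, v):
    """Estimate V_max and K_M by a linear least-squares fit of the
    Lineweaver-Burk form: 1/v = (K_M/V_max)*(1/s) + 1/V_max."""
    xs = [1.0 / si for si in s]
    ys = [1.0 / vi for vi in v]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept       # y-intercept is 1/V_max
    km = slope * vmax            # slope is K_M/V_max
    return vmax, km

# Synthetic rates generated from assumed V_max = 40 U/mL, K_M = 2 mM:
substrate = [0.5, 1, 2, 4, 8, 16]               # mM
rates = [40 * si / (2 + si) for si in substrate]  # U/mL
vmax, km = michaelis_menten_fit(substrate, rates)
```

On noise-free data the fit recovers the assumed constants exactly; with real assay data a nonlinear fit of the Michaelis-Menten equation is usually preferred, since the double-reciprocal transform amplifies error at low substrate concentrations.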
Figure 2 .
Figure 2. Elution profile of α-amylase from Fraction 3 purified by DEAE Sepharose fast flow. Elution was carried out at a flow rate of 0.9 mL/min using 0.01 M Tris-HCl buffer pH 7.0 containing NaCl at 0.15 M (I), 0.3 M (II), 0.6 M (III) and 1.0 M (IV).
Table 1 .
Activity of α-amylase and protein concentration after ammonium sulfate precipitation
Table 2 .
The α-amylase activity and protein concentrations after purification with DEAE Sepharose fast flow
Table 3 .
Comparison of α-amylase and protein concentration of different purification steps
Response of Physalis peruviana L. genotypes to Fusarium oxysporum f. sp. physali under greenhouse
The goldenberry (Physalis peruviana) is an exotic fruit that in recent years has acquired great importance in both local and international markets; one of the limiting phytosanitary problems for this crop is vascular wilt caused by Fusarium oxysporum f. sp. physali, which causes losses of 80-90%. The management of this pathogen is difficult and, so far, is based on preventive measures; however, there are alternatives such as genetic resistance, which is one of the most effective and profitable measures for its management. Taking that into account, the objective of this study was to evaluate the reaction of 40 genotypes of goldenberry against F. oxysporum under greenhouse conditions by means of pathogenicity tests. The experiment was conducted in a selected place in the city of Pasto (Nariño department, south of Colombia). It was carried out with 40 genetic materials corresponding to different genotypes, one commercial control and four replicates per experimental unit; the statistical design was completely randomized. The traits evaluated were plant height (cm), disease severity (%), area under the disease progress curve, AUDPC (units), disease incidence (%) and degree of vascular discoloration. The genotypes 09U138 and 12U399 had greater plant height (50.19 and 47.36 cm), lower AUDPC (zero units), lower incidence (0%) and a lower degree of vascular discoloration (zero), with statistical differences from the rest of the genotypes, including the control. Field evaluations should be conducted with the same isolate and other commercial controls, as this research is only a step forward in the search for the resistance of uchuva to F. oxysporum.
INTRODUCTION
The goldenberry, also known as uchuva (Physalis peruviana L.), is an Andean fruit species that has become an alternative for the economy of many countries, as it stands out as an export product; its importance derives from its nutritional characteristics and medicinal properties (Chávez et al., 2019). In Colombia, cape gooseberry is the second most exported product after bananas; its cultivation offers great advantages because, Colombia being a tropical country, permanent production of the fruit can be guaranteed for international markets (Ruiz et al., 2018; García et al., 2021). In Colombia, the producing departments are Boyacá, Cundinamarca, Antioquia, Nariño, Norte de Santander, Santander, Huila, Tolima, Cauca and Valle del Cauca, with Boyacá being the largest producer with 30.6% of the planted area, followed by Cundinamarca with 28.6% and Nariño with 13.4% (EVA, 2021).
Vascular wilt caused by Fusarium oxysporum f. sp. physali (FoPh) is one of the most limiting diseases for goldenberry, as it has generated losses between 80 and 90% (García et al., 2021; Simbaqueba et al., 2021). In the last decade, the department of Cundinamarca suffered great losses of the crop and it was necessary to relocate it to the Boyacá department, which is why the production of this fruit is now concentrated there (Valderrama, 2018; Simbaqueba et al., 2018; Chávez et al., 2019; Simbaqueba et al., 2021). This fungus is difficult to manage due to soil contamination with the pathogen, something that occurs when harvest residues and affected plant tissues are not properly discarded, or when contaminated soil is moved (Valderrama, 2018). Plants can become infected by fungal reproductive structures such as mycelium, conidia, and chlamydospores (resistance structures that can remain in soils for up to thirty years), which germinate upon contact with host plant root exudates (Vásquez & Castaño, 2017; Joshi, 2018; Giraldo et al., 2020). Plants affected by FoPh are initially characterized by leaf chlorosis followed by generalized yellowing, symptoms that are mixed with loss of turgor in branches and stems. In general, these symptoms tend to be unilateral, affecting one or two of the main stems; it is also common to see them bent, leaving the fruits attached to them.
UNIVERSIDAD DE NARIÑO e-ISSN 2256-2273 Rev. Cienc. Agr. July-December 2022 Volume 39(2): 89-107
The progression of the disease ends up affecting the entire plant, which eventually dies (Gordon, 2017; Gonzáles, 2019; Chávez et al., 2019; Giraldo et al., 2020). When longitudinal cuts are made in the stems and branches, a light brown coloration of the parenchyma is observed, which in advanced stages turns brown; in comparison, the bark tissues appear healthy (Valderrama, 2018; Agudelo, 2020).
Disease management is based on preventive measures that include not planting in plots with a history of incidence by F. oxysporum, the use of pathogen-free propagation material, avoiding unnecessary wounds during cultivation, weed control to reduce excess moisture, avoiding soil waterlogging, eradication of diseased plants, crop rotation, and solarization (Moreno et al., 2019;Berruezo, 2018;Chávez et al., 2020). Non-preventive alternatives include biological control and the search for resistant cultivars; the latter is seen as one of the most effective and economically profitable measures for disease management in the field (Vásquez & Castaño, 2017;Rodríguez & Pedraza, 2019).
Regarding the resistance response to FoPh to avoid losses in the producing areas, numerous efforts have been made, such as that of Pulido et al. (2011), who carried out bioassays with 70 goldenberry accessions and, through cluster analysis, concluded that three of the evaluated materials presented resistance to the pathogen. Rodríguez (2013), on the other hand, through pathogenicity tests in the field, identified two introductions of Physalis with significant levels of resistance to vascular wilt. Osorio et al. (2016) identified promising accessions with different degrees of resistance, as well as 16 markers associated with the resistance response. Mayorga et al. (2019) found genetic materials with desirable agronomic traits and an excellent response to FoPh attack and considered them important for breeding schemes.
Until 2018, the department of Nariño did not present a historical record of incidence of vascular wilt (Agencia UNAL, 2018). However, foci of the disease have been identified, so the efforts of producers and technicians should be oriented to avoid its appearance and dissemination, due to the considerable losses in yields that this fungus causes.
The objective of this work was to evaluate the reaction of Goldenberry genotypes against Fusarium oxysporum f. sp. physali (FoPh) under greenhouse conditions, with the purpose of identifying sources of resistance that could be used in breeding programs for the production of improved cultivars with resistance to the disease.
MATERIALS AND METHODS
Location. The evaluation of the 40 goldenberry genotypes and their reaction to Fusarium oxysporum attack was carried out in the greenhouse located at the Agrosavia C.I. Obonuco facilities, at 2760 masl and with an average temperature of 13 °C.
Genetic materials. Forty genetic materials were used: 9 double haploid lines and 11 genotypes of the Fusarium group belonging to the Colombian Agricultural Research Corporation AGROSAVIA Tibaitatá, 19 genotypes from the University of Nariño, and a commercial control, which is a selection made by cape gooseberry farmers in the department of Nariño (Table 1). The seeds were initially sown in germination trays with peat substrate. When the seedlings presented 3 to 4 true leaves, they were transferred to 1 kg bags containing sterile soil and irrigated four days a week for 4 hours by mist irrigation. Fertilization was edaphic.
During transplantation and growth, a mixture of DAP + Agriminis® (0.5 g/plant) was applied; during the flowering stage, a mixture of calcium nitrate + 10-30-10 (2 g/plant) was used, and during production, potassium nitrate (3 g/plant). Monthly applications of 15-15-15 (5 g/plant) were then made, which were increased over time. A V-shaped trellis system was used, with pruning every 20 days; weed control was manual and harvests were scheduled twice a month. To control the stem borer, Exalt® was applied at the dose suggested by the manufacturer (2 mL/L).
Isolation and purification of the pathogen.
The pathogen strain used in this study was obtained from one of the experimental plots located in Puerres. For its isolation, samples were taken from plant stems showing vascular necrosis; diseased tissue cuts of approximately 3 mm were made, and each cut underwent the disinfection protocol (1% sodium hypochlorite, 70% alcohol, and sterile distilled water washes). The disinfected tissues were seeded in PDA culture medium (39 g/L of water) and incubated for eight days at 22 °C. Colonies that presented a whitish cottony appearance and pink color on the underside were replicated in the same medium for isolation and subsequent identification. A small amount of mycelium was then extracted with the aid of a dissecting needle and observed under the microscope at 40X, identifying microconidia and macroconidia with 0 to 3 septa, characteristic of Fusarium oxysporum (Carmona et al., 2020). The isolate used was molecularly confirmed by DNA sequencing of the ITS and EF1α genes, carried out in the molecular biology laboratories of Agrosavia at C.I. Tibaitatá.
Table 1. Genetic materials evaluated for their reaction against Fusarium oxysporum f. sp. physali.

Inoculation by root immersion. The inoculum was obtained in PDA culture medium; the Petri dishes seeded with the pathogen were incubated for eight days. The conidia were removed with the help of a bacteriological rake by adding 200 mL of sterile distilled water with 0.1% Tween 80, then filtered through gauze. The suspension was adjusted to 1x10^6 conidia/mL, calibrated using a hemocytometer, following the methodology described by Arellano (2018) and Agudelo (2020). When the goldenberry plants were 20 cm high, they were removed from their plastic bags; the roots were washed with sterile distilled water and, with the help of scissors disinfested with 1% quaternary ammonium, the apexes of the main root were cut. The plants were then submerged from the stalk base to the root for 30 minutes in 250 mL plastic cups containing the conidial suspension. Finally, the plants were planted in bags with sterilized soil. Arellano (2018), Ángel et al. (2018) and Agudelo (2020) have used this method.
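Adjusting a suspension to 1x10^6 conidia/mL with a hemocytometer follows the standard Neubauer-chamber arithmetic: mean count per large (1 mm x 1 mm x 0.1 mm) square x dilution factor x 10^4 gives conidia per mL, after which a simple C1V1 = C2V2 dilution reaches the target. A sketch with hypothetical counts (not the counts from this study):

```python
def conidia_per_ml(square_counts, dilution_factor=1):
    """Concentration from a Neubauer hemocytometer: the mean count per
    large square times the dilution factor times 1e4 (each square holds 0.1 uL)."""
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * dilution_factor * 1e4

def dilution_to_target(stock_conc, target_conc, final_volume_ml):
    """Volume of stock (mL) to take, and diluent to add, for the target
    concentration at the requested final volume (C1*V1 = C2*V2)."""
    v_stock = target_conc * final_volume_ml / stock_conc
    return v_stock, final_volume_ml - v_stock

# Hypothetical counts in four large squares of an undiluted sample:
stock = conidia_per_ml([180, 210, 195, 215])
v_stock, v_diluent = dilution_to_target(stock, target_conc=1e6,
                                        final_volume_ml=200)
```

Here a stock of 2x10^6 conidia/mL would be diluted 1:2 to yield 200 mL of inoculum at the working concentration.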
Traits evaluated
Plant height (cm). The height of each genotype and its replicates was recorded weekly using a tape measure.
Area under the disease progress curve (AUDPC).
To compare the differences between genetic materials based on disease severity values, the AUDPC was calculated; this parameter incorporates the speed of disease progression and severity into a single value, that is, an accumulation of daily values of the percentage of infection, interpreted directly without performing any transformation (Chañag et al., 2017; Sánchez et al., 2017; Bocianowski et al., 2020).
AUDPC = Σ (from i = 1 to n−1) [(X_i + X_(i+1)) / 2] × (T_(i+1) − T_i)

Where: X_i = proportion of affected tissue at observation i, T_(i+1) − T_i = time in days between two readings, n = total number of observations.
Incidence (%). The number of diseased plants was evaluated weekly. The incidence was obtained by applying the formula: Incidence (%) = (number of diseased plants / total number of plants evaluated) × 100.
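The two disease metrics above are direct computations: the AUDPC is a trapezoidal accumulation of severity over time, and incidence is a simple proportion. A minimal sketch with illustrative weekly readings (not data from the trial):

```python
def audpc(severity, days):
    """Trapezoidal area under the disease progress curve:
    sum over intervals of ((X_i + X_{i+1})/2) * (T_{i+1} - T_i)."""
    return sum((severity[i] + severity[i + 1]) / 2 * (days[i + 1] - days[i])
               for i in range(len(severity) - 1))

def incidence_pct(diseased, total):
    """Incidence (%) = diseased plants / total plants x 100."""
    return 100.0 * diseased / total

# Illustrative weekly severity readings (%) for one genotype:
sev = [0, 10, 35, 70, 100]
t = [0, 7, 14, 21, 28]        # days after inoculation
a = audpc(sev, t)             # AUDPC in %-days
inc = incidence_pct(3, 4)     # 3 of 4 replicate plants diseased
```

A fully resistant genotype with zero severity at every reading would score an AUDPC of zero, which is how 09U138 and 12U399 separate from the rest in the results below.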
Vascular discoloration.
Once the observations were finished, the plants were removed from the plastic bags and a transversal cut was made; the upper, middle, and lower parts of each genotype were evaluated using the scale (Table 3) proposed by Garcés et al. (2017).
Susceptibility index.
The mean and standard deviation statistics were used; then a relative value was assigned to the evaluated traits to obtain a score that allowed a classification of the tested genotypes.
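The text does not fully specify how the relative values were combined, so the sketch below assumes one common construction: standardize each trait across genotypes by its mean and standard deviation (z-scores), flip the sign of traits where a low value indicates susceptibility (such as plant height), and average the scores. This is an assumption for illustration, not necessarily the authors' exact procedure:

```python
import statistics

def susceptibility_index(trait_matrix, higher_is_worse):
    """Average of per-trait z-scores per genotype. Each column (trait) is
    standardized by its mean and standard deviation across genotypes;
    traits where a LOW value indicates susceptibility are sign-flipped."""
    cols = list(zip(*trait_matrix))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.stdev(c) for c in cols]
    index = []
    for row in trait_matrix:
        zs = []
        for j, value in enumerate(row):
            z = (value - means[j]) / sds[j]
            zs.append(z if higher_is_worse[j] else -z)
        index.append(sum(zs) / len(zs))
    return index

# Hypothetical rows = genotypes; columns = height (cm), AUDPC, incidence (%):
idx = susceptibility_index(
    [[50, 0, 0], [35, 1150, 75], [10, 2000, 100]],
    higher_is_worse=[False, True, True],
)
```

Under this scoring, lower index values correspond to more resistant material, matching the ordering reported for the SSI in the results.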
Experimental design and statistical analysis. A completely randomized design was used with 39 genotypes, a commercial control and five plants as replicates per experimental unit, one of which was a control inoculated with water. During the entire experiment, 28 evaluations were made; it should be noted that after reading 14, the surviving genotypes were inoculated again to verify that the resistance observed was not associated with escape. Analysis of variance (ANOVA) and Tukey's mean comparison tests at 95% probability were carried out using the statistical program Statgraphics Centurion XVII.
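For a completely randomized design, the one-way ANOVA reduces to comparing the between-group and within-group mean squares, F = MS_between / MS_within. A self-contained sketch (the heights are illustrative, not the study's data; the study itself used Statgraphics):

```python
def one_way_anova_f(groups):
    """F statistic and degrees of freedom for a one-way completely
    randomized design: F = MS_between / MS_within."""
    k = len(groups)                       # number of treatments
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, (df_b, df_w)

# Illustrative plant heights (cm) for three genotypes, four replicates each:
f, df = one_way_anova_f([[49, 51, 50, 50],
                         [35, 36, 34, 35],
                         [10, 9, 11, 10]])
```

The F statistic is then compared against the F distribution with (df_b, df_w) degrees of freedom at the chosen significance level; a post-hoc test such as Tukey's HSD then separates the group means.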
RESULTS AND DISCUSSION
Plant height. The ANOVA for plant height indicated significant statistical differences among the genetic materials evaluated (P<0.05). The Tukey test for comparison of means showed that the genotype with the greatest growth was 12U399 with 49.94 cm, followed by 09U138 with 46.73 cm, results that did not differ statistically from the commercial control (35.09 cm) but did from UN35, which showed the lowest average (9.61 cm); 13U407 and UN34 showed heights of 44.73, 44.21, 43.92 and 41.35 cm respectively, with no statistical difference between them; the other plants showed averages fluctuating between 40.91 and 22.64 cm.
Alvarado (2005), when evaluating pathogenic strains of Fusarium sp. in colored calla lilies (Zantedeschia spp.), indicates that one of the most important aspects affected by this fungus is size; at the end of the experiment, the calla lilies, besides being dwarfed, had small and thin leaves and showed poor root development. F. oxysporum is a fungus that invades the cortical cells of the root intercellularly during the infection process; it enters the vascular system through the xylem and begins to produce microconidia, which rapidly colonize the plant, generating obstructions by hyphae that prevent communication between the cells of the conducting vessels, blocking the flow of water and nutrients and directly interfering with plant growth (Marín et al., 2018; Chávez et al., 2019; Srinivas et al., 2019; Giraldo et al., 2020).
Regarding the results obtained for this trait, it is worth mentioning that at the end of evaluation 14, a second inoculation of the surviving genotypes was performed by the root immersion method, as explained in the methodology, to verify that the absence of symptoms was not associated with escape. The results validate the report by Osorio et al. (2017), who through cluster analysis identified the resistance of different goldenberry accessions to FoPh attack; the genotype 09U138 was characterized by low severity and lower AUDPC. Regarding resistance, Pulido (2010) and Gayosso et al. (2021) explain that once pathogens overcome mechanical barriers to infection, plant receptors initiate signaling pathways that drive the expression of defense response genes, which depend on their ability to recognize harmful molecules, carry out signal transduction, and respond defensively to a potential invader. Burbano (2020) and Kaur et al. (2021) additionally state that when a resistance gene identifies an avirulence gene in the pathogen, a process of cell death is activated in the infected cell, completely stopping colonization and thus granting resistance to the plant, which is possibly what occurred with genotypes 09U138 and 12U399. However, it is recommended that these genetic materials be evaluated against other strains of the fungus. Pouralibaba et al. (2017), Bani (2018), and Joshi (2018) explain that in resistant cultivars, Fusarium oxysporum colonization is restricted to the region of initial entry of the pathogen, due to occlusion of the vessels by deposition of gels, callose, and tyloses.
The high AUDPC values observed in the rest of the genotypes apparently indicate low amounts of callose and tyloses, which were degraded by the pectolytic enzymes of the pathogen; this interfered with the activation of the defense system against Fusarium oxysporum, showing high susceptibility and a compatible interaction in which the pathogen initiates the infection, multiplies, and progresses systemically, invading other tissues and triggering the disease (Chañag et al., 2017; Castro et al., 2020; Islas, 2021; Kaur et al., 2021).
For infection to be achieved successfully, the pathogen-plant interaction responds to a process in which different sets of genes must be mobilized that allow early host signaling, adhesion to the host surface, enzymatic breakdown of physical barriers, defense against plant antifungal compounds, and inactivation and subsequent death of host cells by secreted mycotoxins (Agrios, 2005).
The expression of symptoms in susceptible genotypes was observed after the second week of inoculation with the fungus, results that coincide with those reported by Pulido (2010) and Osorio et al. (2017), who affirm that in the case of goldenberry the age of the plant does not influence the speed of infection, and symptoms begin to be evident within the first two weeks after inoculation. Figure 1 shows the initial and final severity of each of the genotypes evaluated; it can be clearly seen that almost all reached a final affectation of 100%, except for 13U407, UN26, Silvania, the commercial control, UN14, UN34, and 12U352, whose severities ranged between 74±24.7% and 78±24.7%. On the other hand, Peru, UN52, and 09U089 showed damage of 50, 53, and 54±24.7%, respectively, and genotypes 09U138 and 12U399 showed the lowest result for this trait (0%), which is lower than that of the commercial control (76±24.7%) and undoubtedly indicates resistance to pathogen attack. It was also possible to appreciate that the materials UN36, Andina, UN45, 09U116, 12U347, and 12U368 initially did not show symptoms associated with FoPh, but at the end of the experiment they showed 100% damage, which was the highest value.
In this regard, Cubedo (2008) and Marín et al. (2018) mention that the diagnosis of F. oxysporum cannot be made immediately, since the fungus colonizes the vascular system before the expression of symptoms in the plant; that is, when the disease is detected, the infection is already at an advanced stage.
Incidence. The analysis of variance revealed significant differences between the evaluated genotypes (P<0.05), and the Tukey test for comparison of means showed that 09U138 and 12U399 had the lowest average for this trait, which was 0, results that differed statistically from the commercial control (75%), followed by Peru and 09U089 with incidences of 25 and 50%, respectively. On the other hand, the genotypes UN34, 12U352, UN14, 12U360, 09U099, Silvania, 13U407, the commercial control, and UN52 had an average of 75% diseased plants, with no statistical differences among each other; the rest of the genotypes, with the exception of UN26 (81.25%), showed the highest percentage, which was 100% (Table 4). When these plants were visually analyzed, wilting was observed on the leaves, mainly in the lower part and extending to the upper part, causing defoliation and necrosis (Figure 2); therefore, they are considered susceptible to attack by F. oxysporum f. sp. physali, and their use is not recommended in soils with high inoculum pressure. The results show that these genetic materials lack effective defense mechanisms against the strain used, although their response to other strains is unknown. It should be noted that there are no treatments to cure plants infected by FoPh; besides, this fungus has a wide range of hosts, so measures such as crop rotation are not effective. The use of biological agents does not achieve the desired success because they can be affected by biotic and abiotic factors that make biological control in the field inconsistent.
In this research the fungus was able to cause the disease, its associated symptoms, and even plant death; in commercial plots, the use of these varieties could cause an increase in inoculum and spread of the pathogen. In relation to the results obtained for genotypes 09U138 and 12U399, it can be inferred that they showed resistance to the attack of FoPh from the first to the last day of the evaluations, since no symptoms associated with the disease, such as wilting of basal leaves, loss of turgor, epinasty, chlorosis, stem prostration, or necrosis of the stem or roots, were observed during either the first or the second inoculation (Figure 2D); thus, these two genotypes become an option for the management of the fungus in our region. Controlling pathogens that cause vascular wilt is not an easy task; the chemical fungicides that must be applied in the soil around the plant are ineffective, especially against fungi such as F. oxysporum, which presents resistance structures that survive for long periods in the soil even in the absence of a host plant (Bani et al., 2018; Chávez et al., 2020; Srinivas et al., 2019).
Regarding resistance, Pulido (2010), Manon et al. (2018), Carmona et al. (2020), and Leitão et al. (2020) state that this measure is the most effective and economically profitable for managing vascular fusariosis in the field. Muñoz et al. (2019), Lamo & Takken (2020), and Leitão et al. (2020) add that the search for genotypes with different degrees of resistance allows a considerable decrease in the frequency of pesticide application, which minimizes the effects on human and environmental health, as well as directly reducing production costs. When planning and applying new disease management methods, the objective should be rational, effective, and safe control at a minimum cost, as is achieved with resistant cultivars; many severe fungal diseases, as well as vascular wilt in economically important crops, are managed in this way.
Vascular discoloration.
Taking into account the scale (Table 3) proposed by Garcés et al. (2017), it was clearly evident that the vascular bundles of genotype 12U374 presented the most severe discoloration in the upper, middle, and lower parts, with a reddish-brown tone towards the interior of the three portions evaluated, corresponding to different heights from the stem to the root (high, middle and low), allowing it to be considered one of the most liable to attack by FoPh (Figure 3), even when compared to the commercial control, which showed grade 2 (intermediate) discoloration (Figure 3 and Figure 4A: SH=H, SM=M, SL=L); furthermore, the plants did not exhibit external symptoms such as generalized wilting, leaflet loss, growth reduction, or progressive drying, so they could be considered resistant to attack by this strain of Fusarium oxysporum.
In advanced stages, the roots of the plants show vascular discoloration, necrosis at the stalk base and stem, and discoloration of vascular bundles, in addition to necrosis advancing in a pattern towards the pith (Maurya et al., 2019; Mendoza et al., 2019). Cardona & Castaño (2019), in Solanum lycopersicum, describe cross sections with necrotic processes and vascular tissue of dark brown color, more noticeable at the junction of the petiole with the stem, a symptomatology similar to that presented by genotypes with intermediate and severe degrees of discoloration. In materials such as 12U374, externally, loss of the primary root and grayish-brown lesions at the point of emergence of lateral roots could be observed, as well as adventitious roots above the stem lesion (Figure 4B).
Susceptibility index (SSI).
To determine the effect of Fusarium oxysporum f. sp. physali (FoPh) on the forty genotypes, the susceptibility index was calculated, which made it possible to identify the outstanding materials considering the traits evaluated (plant height, AUDPC, incidence, and vascular discoloration), resulting in a relative value by which the genotypes could be compared overall. In the analysis carried out, the response of the genotypes was separated into four groups: very resistant (VR), resistant (R), susceptible (S), and very susceptible (VS). Figure 5 shows that many of the genotypes evaluated were in the very susceptible (VS) category, far from the materials 12U399 and 09U138, which were classified as very resistant (VR) since their susceptibility index was 0; on the other hand, the commercial control reached an SSI of 1, being one of the most susceptible (VS). The above coincides with that reported by Osorio et al. (2017), who by cluster analysis classified 09U138 as VR; this source of resistance to FoPh is apparently related to wild-type germplasm. Mayorga et al. (2019), in field tests and soils with high inoculum pressure, observed an opposite reaction in which the material 09U138 was susceptible (S) to FoPh.
The results obtained for 12U399 and 09U138 show good potential for incorporation in breeding programs to develop genotypes resistant to FoPh populations. Peru showed a certain degree of tolerance to the fungus, with an index of 0.5, a genotype that together with 12U399 and 09U138 should be evaluated in other regions and against other strains of the pathogen. UN34, 13U407, 09U089, Silvania, UN26, and UN52 showed an SSI that ranged between 0.83 and 0.96 and are therefore considered susceptible (S); that is, F. oxysporum in these plants is capable of penetrating, infecting, and causing symptoms characteristic of the disease, and even causing death, which in the field could generate total losses for the farmer. The rest of the genetic materials, as well as the commercial control, are considered very susceptible (VS), although the highest SSI was for UN35 with 1.57. Finally, it should be remembered that the use of resistant genotypes is one of the most appropriate strategies to reduce losses caused by the disease, since production costs are not increased and the annual application of chemical products is reduced.
Resistant genotypes drastically reduce environmental contamination and health and food-safety risks, and they also help guarantee the success of other management tactics.
CONCLUSIONS
Genotypes 09U138 and 12U399 from the first to the last evaluation did not show symptoms such as basal leaf wilt, loss of turgor, epinasty, chlorosis, stem prostration, or necrosis of the stem or roots, so they can be considered resistant to the attack of this strain of F. oxysporum f. sp. physali, however, the results obtained are an advance in the search for resistance of this pathogen and the genotypes should be evaluated in the field, with the same isolation and with other pathogenic strains of the fungus.
Genotypes 09U138 and 12U399 could be used in breeding programs given their resistance to such an important disease as vascular wilt. Although the resistance to Fusarium shown by the other materials did not meet the expectations of this work, it is likely that these genotypes have characteristics of importance for the agro-industrial sector, where their potential should be considered.
Is Limb Salvage With Microwave-induced Hyperthermia Better Than Amputation for Osteosarcoma of the Distal Tibia?
Background: Amputation has been the standard surgical treatment for distal tibia osteosarcoma owing to its unique anatomic features. Preliminary research suggested that microwave-induced hyperthermia may have a role in treating osteosarcoma in some locations of the body (such as the pelvis), but to our knowledge, no comparative study has evaluated its efficacy in a difficult-to-treat location like the distal tibia.
Questions/Purposes: Does microwave-induced hyperthermia result in (1) improved survival, (2) decreased local recurrence, (3) improved Musculoskeletal Tumor Society (MSTS) scores, or (4) fewer complications than amputation in patients with a distal tibial osteosarcoma?
Methods: Between 2000 and 2015, we treated 79 patients for a distal tibia osteosarcoma without metastases. Of those, 52 were treated with microwave-induced hyperthermia, and 27 with amputation. Patients were considered eligible for microwave-induced hyperthermia if they had an at least 20-mm available distance from the tumor edge to the articular surface, good clinical and imaging response to neoadjuvant chemotherapy, and no pathologic fracture. Patients not meeting these indications were treated with amputation. In addition, if neither the posterior tibial artery nor the dorsalis pedis artery was salvageable, the patients were treated with amputation and were not included in any group in this study. A total of 13 other patients were treated with conventional limb-salvage resections and reconstructions (at the request of the patient) and were not included in this study. All 79 patients in this retrospective study were available for followup at a minimum of 12 months (mean followup in the hyperthermia group, 79 months, range, 12–158 months; mean followup in the amputation group, 95 months, range, 15–142 months). With the numbers available, the groups were no different in terms of sex, age, tumor grade, tumor stage, or tumor size. Survival to death was evaluated using Kaplan-Meier analysis. Complications were recorded from the patients' files and graded using the classification of surgical complications described by Dindo et al. All statistical tests were two-sided, and a probability less than 0.05 was considered statistically significant.
Results: In the limb-salvage group, Kaplan-Meier survival at 6 years was 80% (95% CI, 63%–90%), and with the numbers available this was not different from survivorship in the amputation group at 6 years (70%; 95% CI, 37%–90%; p = 0.301). With the numbers available, we found no difference in local recurrence (six versus 0; p = 0.066). However, mean ± SD MSTS functional scores were higher in patients who had microwave-induced hyperthermia than in those who had amputations (85% ± 6% versus 66% ± 5%; p = 0.008). With the numbers available, we found no difference in the proportion of patients experiencing complications between the two groups (six of 52 [12%] versus three of 27 [11%]; p = 0.954).
Conclusions: We were encouraged to find no early differences in survival, local recurrence, or serious complications between microwave-induced hyperthermia and amputation, and a functional advantage in favor of microwave-induced hyperthermia. However, these findings should be replicated in larger studies with longer mean duration of followup, and in studies that compare microwave-induced hyperthermia with conventional limb-sparing approaches.
Level of Evidence: Level III, therapeutic study.
Introduction
The tibia is the second most-common site of osteosarcoma, accounting for 19% of all osteosarcomas, with 20% of those occurring in the distal tibia [22]. Amputation has long been regarded as the standard surgical treatment for these tumors, with satisfactory functional results when an appropriate prosthesis is used [25]. With the advances in chemotherapy and surgical techniques, limb salvage has become the preferred treatment when possible. However, other than in locations like those surrounding the hip or knee, it is difficult to perform a safe, negative-margin resection in the distal tibia because of its subcutaneous location and the proximity of the distal tibia to the neurovascular bundle and tendons [18]. Complications, poor function, and decreased durability of the reconstruction are difficult to avoid in this location [19].
Conflicting findings regarding survival and function after limb salvage and amputation for patients with osteosarcoma of the distal tibia have been reported [2,4,15,19,20,26]. While survivorship of patients who undergo amputation for distal tibia osteosarcoma generally is high [26] and complications are disconcertingly frequent [4], function as measured by the Musculoskeletal Tumor Society (MSTS) [6] score after amputation is generally low [15]. Small series of patients undergoing limb salvage for osteosarcoma in this location are not always dramatically better in terms of function [20], complications are likewise common [25], and survivorship seems even worse [18]. For this reason, we believe the best surgical option for patients who have osteosarcoma of the distal tibia is unclear.
Hyperthermia has been introduced as an alternative treatment method for osteosarcoma [8]. It is capable of accurately killing tumor cells while tending to minimize injury to the surrounding tissue, perhaps facilitating resections in difficult-to-access locations. Hyperthermia can be used to achieve acceptable local disease control while maintaining the structural integrity of the skeleton in some patients [9]. This technique may reduce the need for complex reconstruction, and so seems appealing in terms of potential functional benefits; however, this is unproven for patients with osteosarcoma of the distal tibia. In this setting, microwave-induced hyperthermia is administered to the tumor bed and causes immediate heat necrosis of the tumor and adjacent tissues, followed by limited surgical excision of the mass with preservation of the surrounding skeleton. Because of its perceived benefits, we have used microwave-induced hyperthermia in patients with malignant bone tumors for 20 years in our department [7,8]; however, no formal study has compared microwave-induced hyperthermia with the conventional treatment (transtibial amputation), and it seems important to do so.
Patients and Methods
The research was approved by the Ethics Review Committee of Tangdu Hospital, Xi'an, Shaanxi, China (approval ID 2016016), and written informed consent was obtained from all participating patients.
Cohort Selection
Between 2000 and 2015, we treated 106 patients for distal tibia osteosarcoma without metastases. Of those, 52 were treated with microwave-induced hyperthermia (Table 1) and 41 with amputation. Of the 41 patients treated with amputation, 27 were included in this study; the others were excluded (for example, those in whom neither the posterior tibial artery nor the dorsalis pedis artery was salvageable). A total of 13 patients who would have met our indications for microwave-induced hyperthermia were instead treated with conventional limb-salvage resection and reconstruction at the patient's request and were also excluded. Patients were considered eligible for microwave-induced hyperthermia if they had an at least 20-mm available distance from the tumor edge to the articular surface, a good clinical and imaging response to neoadjuvant chemotherapy, and no pathologic fracture. Chemonecrosis was assessed using the grading system of Huvos et al. [14]; more than 90% necrosis on the histologic sections was considered a good response to chemotherapy. All patients in this series were available for followup at a minimum of 12 months (mean followup in the hyperthermia group, 79 months, range, 12-158 months; mean followup in the amputation group, 95 months, range, 15-142 months). All patients had radiographs, CT, MRI, and bone scans. With the numbers available, we found no difference in sex between the amputation group and the microwave-induced hyperthermia group (12 males and 15 females versus 30 males and 22 females; p = 0.263) (Table 3). We also found no difference in age (27.5 ± 8.7 years versus 31.2 ± 6.4 years; p = 0.586), tumor grade, tumor stage, or tumor size between the groups (Table 3). Of the 79 patients, 54 had a needle biopsy and 32 had an incisional biopsy, including those whose needle biopsy was nondiagnostic. We graded the histologic sections from the biopsies using Broders' classification [1], which has four grades according to the degree of differentiation of the tumor cells. We staged patients using the surgical staging systems of the MSTS [6] and the American Joint Committee on Cancer (AJCC) [24]. Nineteen patients had Stage I tumors and 60 had Stage II tumors.
Surgical Technique
All patients were evaluated by CT and MRI at the end of each chemotherapy regimen preoperatively to define the edge of the tumor, which was determined at the transition of marrow signal from abnormal to normal. Areas of intermediate signal intensity adjacent to the tumor edge were regarded as part of the tumor and should be included in the ablation area. All 79 patients received two cycles of preoperative neoadjuvant chemotherapy based on a standard protocol which was described in a previous study [17].
Patients treated with microwave-induced hyperthermia were evaluated according to the following criteria: (1) assessment of tumor response or progression as assessed by MRI; (2) distance between the ankle cartilage and the tumor as assessed by MRI of 20 mm or more, to obtain a bone width margin of 10 mm and a remaining residual epiphysis of 10 mm, and wide proximal margins on the bone resections [19] (defined as a cuff of 2 cm to 3 cm of normal tissue remaining on all sides of the tumor); and (3) a sufficient amount of epiphysis preserved to allow fixation of the osteotomy junction [21]. Intraoperatively, the adequacy of bone resection was evaluated with frozen section biopsy of a tissue sample obtained from the medullary canal of the residual tibia. For all patients who had amputations, the margins were wide (a cuff of 2 cm to 3 cm of normal tissue remaining on all sides of the tumor). After surgery, the histologic margins were negative in all patients.
All operations were performed by the same two surgeons (QYF and YZ). The microwave-induced hyperthermia machine we used was the FORSEA (Xinhua Company, Nanjing, China) [9,10], and the microwave generator frequency was 2450 MHz. When microwave-induced hyperthermia was performed (Fig. 1), the main principle was to dissect the tumor with a safe margin as described above and subsequently perform an en bloc ablation using antenna-guided hyperthermia therapy. The first step was to identify the extent of the tumor and dissect the tumor-bearing bone from the surrounding normal tissues with a safe margin (at least 20 mm wide). We usually used the original dissection method of double incisions to obtain adequate exposure (Fig. 1F). This step is very important because it helps to ensure the entire tumor can be killed by microwave-induced hyperthermia. A heat-isolation pad and wet gauze then were placed between the bone tumor and surrounding normal tissues. Then, one to six antennas were placed in different locations of the tumor from different angles, matching the suction one-to-one, according to the shape and size of the tumor, to ensure the therapeutic range and that the tumor edge could be ablated adequately.
Heat output was instant when the antennas were placed, and electromagnetic energy then was delivered to the tumor (Fig. 1G). The tumor was ablated with direct heating while normal soft tissues were protected from overheating. The goal of microwave ablation is to create an ablation zone that extends 1 cm beyond the tumor boundary at all points, with the core temperature of the tumor reaching 85° to 100°C and the normal tissue temperature remaining less than 40°C for 15 to 20 minutes. During surgery, a circulating cool saline system was used to protect the surrounding normal tissues, and multiple thermocouples were placed in various critical locations to monitor the temperature. To avoid damaging the joint, outlet piping connected with a circulating water pump and the thermocouples were specifically placed in the ankle cavity to keep the articular cartilage and its subchondral bone from overheating. All the tissue blocks were evaluated histologically for tumor hyperthermia necrosis, and the histologic examination showed that part of the proximal margins were histologically negative and part of the margins were necrotic. After the dead tumor mass was removed or curetted (Fig. 1H), the reconstruction was performed using a mixture of bone chips and bone cement (Fig. 1I-K) [18]. The normal shape of the tibia was restored, and prophylactic fixation was performed if necessary (Fig. 1I-K). Transtibial amputation was performed per common practice [13]. The goals and requirements were resecting the bone 2 cm to 3 cm proximal to abnormal bone density, obtaining adequate length of the residual limb, and achieving good soft tissue coverage.
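The temperature targets above (tumor core 85° to 100°C, surrounding normal tissue below 40°C over the 15- to 20-minute dwell) amount to a simple range check on the thermocouple readings. The sketch below is a hypothetical illustration of that check, not part of the FORSEA device's software; the data format (one reading per minute per probe) is an assumption.

```python
# Hedged sketch: verify thermocouple logs against the ablation targets
# described in the text. Thresholds come from the text; the function and
# its data layout are hypothetical illustrations.

CORE_MIN, CORE_MAX = 85.0, 100.0   # target tumor-core range, degrees C
NORMAL_MAX = 40.0                  # ceiling for surrounding normal tissue

def ablation_within_targets(core_temps, normal_temps):
    """core_temps / normal_temps: per-minute readings (degrees C) over the dwell.
    Returns True only if every core reading is in range and every normal-tissue
    reading stays below the safety ceiling."""
    core_ok = all(CORE_MIN <= t <= CORE_MAX for t in core_temps)
    normal_ok = all(t < NORMAL_MAX for t in normal_temps)
    return core_ok and normal_ok
```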
Postsurgery Rehabilitation and Followup
All patients in both groups were given antibiotics for 72 hours after surgery, and they performed bed exercises until wound healing was achieved. A short cast or a brace was used for patients who had microwave-induced hyperthermia until there was radiographic evidence of bone union. Signs of bony union were evaluated on serial plain radiographs [11]. All patients in both groups received postoperative chemotherapy (adriamycin, cisplatin, methotrexate, ifosfamide) [10].
After discharge from the hospital, clinical and radiographic followups were done every month during the first 6 months, then every 3 months during the next 2 years, and then every 6 months. Chest CT scans were performed to check for pulmonary metastasis every 3 months during the first year and then every 6 months afterward. A bone scan was performed every 6 months during the first year and then every year. All patients had radiographs taken once a year. The MSTS score was used to assess the function of the patients. The status and function of the ankle were specifically assessed clinically and radiologically at followups.
Clinical Outcomes
Clinical outcomes were assessed by review of clinic notes, supplemented by phone questionnaires, and email where needed. Local recurrence, metastasis, complications, and death were recorded from the patients' files. Complications were graded using the classification described by Dindo et al. [3], which graded the complications at five levels. Followup review and data were sorted and analyzed by three of the authors (KH, NB, TY).
Statistical Analysis
All values are expressed as mean ± SD, and all error bars represent the SD of the mean. Student's t test and one-way ANOVA were used to determine significance. Survival rates were estimated using the Kaplan-Meier method. We compared survival between the two groups using a log-rank test. Chi-square test was used to compare complications between the two groups. The mean, SD, and 95% CI were provided. All statistical tests were two-sided. A probability less than 0.05 was considered statistically significant. Statistical analyses were performed using SPSS Version 17.0 (SPSS Inc, Chicago, IL, USA).
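The survival comparison rests on the Kaplan-Meier product-limit estimator. As a rough, self-contained illustration of that method (the follow-up times and event flags below are hypothetical, not the study's patient data, and the log-rank group comparison is omitted), it can be sketched in pure Python:

```python
# Hedged sketch of the Kaplan-Meier product-limit estimator.
# times: follow-up duration (e.g., months); events: 1 = death observed,
# 0 = censored (alive at last followup). Inputs here are illustrative only.

def kaplan_meier(times, events):
    """Return a list of (time, S(t)) pairs at each time with observed deaths."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = ties = 0
        # Group all subjects sharing this follow-up time.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            ties += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= ties                      # deaths and censored leave risk set
    return curve
```

A statistics package would normally be used for this (and for the log-rank test); the point here is only that each death at time t multiplies the running survival estimate by (1 - deaths/at-risk).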
Survival
With the numbers available, there was no difference in Kaplan-Meier survivorship between the groups. In the limb-salvage group, Kaplan-Meier survival at 6 years was 80% (95% CI, 63%-90%), and in the amputation group it was 70% at 6 years (95% CI, 37%-90%; p = 0.301) (Fig. 2). At last followup, six of 27 patients (22%) had died in the amputation group and nine of 52 (17%) had died in the microwave-induced hyperthermia group.
Local Recurrence
With the numbers available, we found no difference in local recurrence (six versus 0; p = 0.066) between the amputation and microwave-induced hyperthermia groups. Six of the 52 patients who had microwave-induced hyperthermia (11.5%) (Table 1) had a local recurrence, whereas no patients in the amputation group had a local recurrence. The time to local recurrence was 4 to 18 months after surgery (median, 8.74 months). Two of the six patients were treated with microwave-induced hyperthermia again and four underwent amputations. No patient has had a second local recurrence.
MSTS Functional Score
Mean ± SD MSTS functional scores were higher in patients who had microwave-induced hyperthermia than in those who had amputations (85% ± 6% versus 66% ± 5%; 95% CI of the difference, 16.01-23.10; p = 0.008) (Table 3). At latest followup, we observed no evidence of ankle instability, deformity, or degenerative changes of the ankle in any of the patients who had microwave-induced hyperthermia.
Complications
With the numbers available, we found no difference in the proportion of patients experiencing postsurgical complications between the two groups (six of 52 [12%] versus three of 27 [11%]; odds ratio, 1.043; 95% CI, 0.240-4.544; p = 0.954). Complication severity, as graded according to Dindo et al. [3], likewise was not different with the numbers available (p = 0.9983). Six of the 52 patients who had microwave-induced hyperthermia (Table 3) experienced complications. Two patients experienced delayed union and eventually achieved union (Grade IIIb). One patient experienced fracture and was treated with arthrodesis (Grade IIIb). Two patients had superficial infections (Grade I), which resolved with local dressing changes. One patient had a deep infection (Grade IIIb), which was resolved by irrigation, débridement, and administration of systemic antibiotics.
Three of the 27 patients who had amputations (Table 1) experienced complications. Two patients experienced wound dehiscence and were treated with wound débridement (Grade IIIb). One patient had a superficial infection that resolved with local dressing changes (Grade I).
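The reported odds ratio and 95% CI for complications (1.043; 0.240-4.544) can be reproduced from the 2×2 counts above with a standard Wald interval on the log odds ratio. This is an independent re-derivation for illustration, not the authors' SPSS output:

```python
import math

# Hedged sketch: odds ratio and Wald 95% CI for a 2x2 table.
# a/b = complications / no complications in the hyperthermia group (6/46),
# c/d = complications / no complications in the amputation group (3/24).

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)                      # cross-product odds ratio
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(6, 46, 3, 24)
# Matches the reported values to rounding: OR 1.043, 95% CI 0.240-4.544.
```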
Discussion
Below-knee amputation has been regarded as the standard surgical treatment for distal tibia osteosarcoma because of the difficulties in reconstruction when massive bone is lost so close to the ankle [16]. Historically, it has been very difficult to achieve satisfactory oncologic results and function with limb salvage in this anatomic location because of its particular challenges [12,16,18]. It has been reported that transtibial amputation provides a low risk of local recurrence and satisfactory function [2]. However, many patients refuse amputation for psychological or social reasons. Microwave-induced hyperthermia has been used with some success for two decades [7,8]. We believe that the biggest advantage of microwave-induced hyperthermia is that it may relieve the patients of the need to have an amputation. However, to our knowledge, no comparative study has evaluated its efficacy for patients with distal tibia osteosarcoma. We therefore asked whether it would provide (1) improved survival, (2) decreased local recurrence, (3) improved MSTS scores, or (4) fewer complications than amputation in patients with a distal tibial osteosarcoma.
There were some limitations in this study. First, the sample size was relatively small, despite this being one of the largest studies reported. This limited our ability to analyze other factors that might have influenced the oncologic outcomes. Second, this study was a retrospective analysis and the two groups were not randomly selected, so selection bias might have been an issue: patients perceived to have a worse prognosis may have been selected for amputation. However, we tried to apply consistent indications for microwave-induced hyperthermia. In addition, the patients in whom limb salvage was not considered possible (such as those in whom neither the posterior tibial artery nor the dorsalis pedis artery was salvageable) were not included in any group. In general, patients were considered eligible for microwave-induced hyperthermia if they had at least a 20-mm available distance from the tumor edge to the articular surface, a good clinical and imaging response to neoadjuvant chemotherapy, and no pathologic fracture. Patients not meeting these indications were treated with amputation. However, some patients meeting the indications for microwave-induced hyperthermia were treated instead with amputation or conventional limb-salvage approaches because of the patient's subjective wishes (such as cost, functional demand, or social recognition). Two patients were unable to afford microwave-induced hyperthermia because of its high price, and two other patients had anxiety about the possibility of tumor recurrence. Finally, the followup is relatively short. These patients need to be followed for longer periods to ensure that the tumors do not recur and that other complications related to treatment do not become evident. We intend to continue to follow these patients.
With the numbers available, we found no difference in oncologic survival between patients treated with microwave-induced hyperthermia and those who had transtibial amputation for distal tibia osteosarcoma. Other series [12,16,18] have reported similar results between limb salvage and amputation for osteosarcoma of the distal tibia. However, the sample sizes in those studies were smaller, and comparisons were performed mostly between different types of reconstructions after limb salvage. In most cases, amputation is the secondary treatment when there is a recurrence or a complication.
Likewise, with the numbers available, the treatments were no different in terms of local recurrence, although there were some local recurrences in the microwave-induced hyperthermia group, and we believe that longer followup will be important in these patients. The incidence of local recurrence is higher in other studies of limb salvage [5,12,16,26] because, given the proximity of nerves, vessels, and tendons, it is difficult to obtain a safe resection margin while preserving good function. When microwave-induced hyperthermia was given, the first step was to dissect the tumor-bearing bone from surrounding normal tissues with a safe margin. The distance between the ankle cartilage and the tumor as assessed by MRI was 20 mm or more, to obtain a bone width margin of 10 mm and a remaining residual epiphysis of 10 mm. The margins of proximal bone resections were wide (a cuff of 2 cm to 3 cm of normal tissue remaining on all sides of the tumor). In addition, surrounding tissues were fully protected, and multiple antennas were inserted in different locations from different angles to ensure the therapeutic range. This could account for some of the observed recurrence benefit of microwave-induced hyperthermia in our series. To the best of our knowledge, no local recurrences have been reported when amputation was performed, consistent with our findings [5,15,18,28,29].
Our technique for microwave-induced hyperthermia resulted in improved function compared with transtibial amputation. Function is very important in all operations; however, the unfortunate reality is that better function seems to carry some risk of recurrence [2,11,14], because achieving better function requires removing less tissue, which can increase the risk of recurrence. We also found that the mean MSTS functional scores for the patients who had microwave-induced hyperthermia were better than scores reported in other limb salvage studies [13,23,24]. There could be several reasons for this, although all are somewhat speculative. First, osteotomy was not used, so the ankle remained intact; this could account for some of the observed functional benefit in this series. Second, we used a mixture of bone chips, cement, and prophylactic internal fixation for reconstruction. This may have facilitated revascularization, which has been confirmed by animal and clinical experiments [9,15,30], and perhaps helped to reduce the likelihood of nonunion, aseptic loosening, and allograft fracture. The maintained intraarticular structures can also provide a good osseous bed for reattachment of resected soft tissues, such as muscles and ligaments.
Finally, we did not see an important difference between the treatment groups in terms of major complications. In fact, complications have a relatively high incidence in the distal tibia compared with other locations because of its unique anatomy [15,19]. Reported complication rates range from 17% to 92% for patients having limb-salvage treatment [16,18,27]. The most frequent were infection, allograft fracture, and nonunion, which is similar to our observed results.
Microwave-induced hyperthermia is an alternative treatment for distal tibia osteosarcoma; in this series it provided improved function compared with transtibial amputation, without any apparent increase in death, local recurrence, or complications. However, these findings should be replicated in larger studies with longer mean followups, and in studies that compare microwave-induced hyperthermia with conventional limb-sparing approaches.
Elevation of MMP-9 Levels Promotes Epileptogenesis After Traumatic Brain Injury
Posttraumatic epilepsy (PTE) is a recurrent seizure disorder that often develops secondary to traumatic brain injury (TBI) that is caused by an external mechanical force. Recent evidence shows that the brain extracellular matrix plays a major role in the remodeling of neuronal connections after injury. One of the proteases that is presumably responsible for this process is matrix metalloproteinase-9 (MMP-9). The levels of MMP-9 are elevated in rodent brain tissue and human blood samples after TBI. However, no studies have described the influence of MMP-9 on the development of PTE. The present study used controlled cortical impact (CCI) as a mouse model of TBI. We examined the detailed kinetics of MMP-9 levels for 1 month after TBI and observed two peaks after injury (30 min and 6 h after injury). We tested the hypothesis that high levels of MMP-9 predispose individuals to the development of PTE, and MMP-9 inhibition would protect against PTE. We used transgenic animals with either MMP-9 knockout or MMP-9 overexpression. MMP-9 overexpression increased the number of mice that exhibited TBI-induced spontaneous seizures, and MMP-9 knockout decreased the appearance of seizures. We also evaluated changes in responsiveness to a single dose of the chemoconvulsant pentylenetetrazol. MMP-9-overexpressing mice exhibited a significantly shorter latency between pentylenetetrazol administration and the first epileptiform spike. MMP-9 knockout mice exhibited the opposite response profile. Finally, we found that the occurrence of PTE was correlated with the size of the lesion after injury. Overall, our data emphasize the contribution of MMP-9 to TBI-induced structural and physiological alterations in brain circuitry that may lead to the development of PTE.
Introduction
Traumatic brain injury (TBI) is caused by an external mechanical force, such as a blow to the head, concussive forces, acceleration-deceleration forces, or blast injury [1]. One of the major long-lasting consequences of TBI is posttraumatic epilepsy (PTE), which has been estimated to account for 10-20% of symptomatic epilepsies in the general population and 5% of all epilepsy patients [2]. Posttraumatic epilepsy results from molecular and cellular changes that drive the inhibition-excitation balance toward excitation [3]. The molecular and cellular changes that occur after the brain insult also involve the extracellular matrix (ECM), which plays a major role in remodeling neuronal connections after injury. One of the proteases that is implicated in ECM remodeling and consequently synaptic plasticity is matrix metalloproteinase-9 (MMP-9) [4]. MMP-9 is an extracellularly/pericellularly operating protease [5,6] that regulates numerous cell activities, such as cell differentiation, cell migration, cytokine release, survival, apoptosis, inflammation, and cell-cell contacts [7][8][9]. In the brain, MMP-9 is expressed and released by neurons and glia, with very low levels in the resting state and markedly greater activity in response to physiological stimulation and various pathological insults [10]. Importantly, MMP-9 levels are increased in the cerebral cortex and hippocampus in mice after TBI [11,12] and in blood plasma and serum in TBI patients [13,14]. MMP-9-related synaptic plasticity has been shown to play an important role in the development of epilepsy in both humans and rodents [15,16]. MMP-9 deficiency inhibits epileptogenesis, and excess MMP-9 facilitates it [17,18]. Notably, prolonged seizures are related to high serum MMP-9 levels in humans [19]. However, no direct evidence that links MMP-9 to PTE has been reported.
The present study utilized the controlled cortical impact (CCI) model of TBI in mice. After validating the model of CCI-induced structural changes, we investigated the detailed kinetics of enzymatic MMP-9 activity following brain injury in mice. We then evaluated the effects of either MMP-9 knockout or overexpression on neuronal excitability in the cerebral cortex in mice posttrauma. We additionally characterized spontaneous seizure activity in mice 14 weeks following brain trauma. To shed light on the anatomical changes that lead to PTE, we correlated lesion volume with epilepsy morbidity. The results indicated that TBI-induced increases in MMP-9 levels are involved in PTE.
Materials and Methods
The detailed experimental scheme is presented in Fig. 1.
Animals
The experiments were performed with adult male C57BL/6J mice (12-14 weeks old; Jackson Laboratory, Bar Harbor, ME, USA). The analysis of the influence of MMP-9 expression levels on lesion volume and the development of epilepsy was performed using mice with modifications of mmp-9 gene expression. Two transgenic strains were used: homozygous MMP-9 knockout mice on a C57BL/6J background (MMP-9 KO mice) and their wild-type (WT) littermates [20], and mice that overexpressed human pro-MMP-9 under the human PDGF-B promoter on a C57BL/6J background (MMP-9 OE mice) and their WT littermates [21]. Strain colonies (C57BL/6J, MMP-9 KO, and MMP-9 OE) were maintained in the Animal House of the Nencki Institute. Before the experiment, the animals were housed in individual cages under a controlled environment (22°C ± 1°C, 50-60% humidity, 12 h/12 h light/dark cycle), with free access to food and water. All of the procedures were performed in accordance with the Animal Protection Act in Poland (directive 2010/63/EU) and were approved by the 1st Local Ethics Committee (permissions no. 383/2012 and 609/2014).
The following numbers of animals comprised the groups in the PTE experiments: MMP-9 KO (n = 15), MMP-9 WT (n = 10), MMP-9 OE (n = 8), MMP-9 WT-OE (n = 6). For the studies of the occurrence of seizures, the number of MMP-9 KO animals was doubled because only a small proportion of the mice developed seizures in the first round of the experiment.
Induction of TBI with CCI
The mice were subjected to unilateral cortical contusion using the CCI protocol [22]. The animals were anesthetized with 4% isoflurane (Aerrane, Baxter, UK) in 100% oxygen at a flow rate of 4 L/min and placed in a stereotaxic frame. During surgery, they were maintained with 3% isoflurane and 100% oxygen at a flow rate of 0.6 L/min (Combi Vet Anesthesia System, Rothacher, Switzerland). For deeper sedation, the mice were injected with butorphanol (10 μg/30 g body weight). The skull was exposed by a midline scalp incision, and craniectomy was performed using a 5 mm ∅ trephine (Fine Science Tools FST, Heidelberg, Germany) over the left parietotemporal cortex between lambda and bregma (Fig. 2e, 5a1). The bone piece was carefully removed without disruption of the underlying dura. Traumatic brain injury was induced with a Leica Impact One device (Leica Biosystems, Kawaska, Poland) that was equipped with an electrically driven metallic piston controlled by a linear velocity displacement transducer. After craniectomy, the adjustable equipment for CCI was mounted on the left stereotaxic arm at a 20° angle from vertical. CCI was performed per protocol using the following parameters: 3 mm ∅ (flat tip), 0.5 mm depth from the dura, 5 m/s velocity, and 100 ms dwell time. After the injury, bleeding was controlled. A piece of sterile plastic was placed over the craniectomy, and the incision was sutured with nylon stitches (Sigmed, Cisek, Poland). The animals were then returned to heated home cages for postsurgical recovery.

Fig. 1 Study design. Somatomotor performance, indicated by neuroscores, was assessed 1 day before TBI and 2, 7, and 14 days after TBI. Skull electrodes were implanted at 10 weeks post-TBI. vEEG monitoring lasted for 2 weeks, starting 14 weeks post-TBI. At 16 weeks post-TBI, the mice were injected with pentylenetetrazol (PTZ) and monitored by vEEG for 1 h
Sham-injured animals (n = 3 for each experiment per time point) underwent identical anesthesia and craniectomy procedures but were not subjected to CCI.
Nissl Staining
The mice were anesthetized and perfused with 0.37% sulfide solution (5 ml/min, 4°C) for 5 min, followed by 4% paraformaldehyde in 0.1 M sodium phosphate buffer (pH 7.4, 5 ml/min, 4°C) for 10 min. The brains were removed from the skull, postfixed in buffered 4% paraformaldehyde for 4 h at 4°C, and then cryoprotected in a solution that contained 30% glycerol in 0.02 M potassium phosphate-buffered saline (PBS) for 48 h. The brains were then frozen on dry ice and stored at − 80°C. Frozen brains were sectioned in the coronal plane (40 μm) with a sliding cryostat (Leica Biosystems, Kawaska, Piaseczno, Poland). The sections were mounted on gelatin-covered microscope slides, dried, and stained with Cresyl Violet. Pictures of Nissl-stained sections were taken using a Nikon Eclipse Ni light microscope that was equipped with a PlanApo 2× objective.
Gel Zymography
Gel zymography of tissue isolated from the ipsi- and contralateral cortex and hippocampus in CCI and sham C57BL/6J mice was performed according to Szklarczyk et al. [23]. After 10 min, 30 min, 60 min, 2 h, 6 h, 1 day, 3 days, 7 days, 14 days, and 30 days, the brains were rapidly removed, and tissue was dissected on a cold plate and frozen on dry ice. Each time-point group consisted of three CCI animals and three sham-operated animals. The samples were stored at − 80°C until analysis. After freezing, the samples were homogenized in a buffer that contained 10 mM CaCl2, 0.25% Triton X-100, and protease inhibitor cocktail (cOmplete mini EDTA-free; Roche, Basel, Switzerland) and centrifuged at 6000×g for 30 min. The entire supernatant that contained soluble proteins was quantitatively recovered. The pellet (Triton X-100-insoluble) was resuspended in a buffer that contained 50 mM Tris (pH 7.4) and 0.1 M CaCl2 in water, heated for 15 min at 60°C, and then centrifuged at 10,000×g for 30 min at 4°C. This treatment releases ECM-bound MMPs into solution. The final pellet was free from MMP activity, as evaluated by gel zymography, thus confirming completeness of the extraction. The final supernatant was considered the Triton X-100-insoluble fraction. After centrifugation, the entire supernatant was quantitatively recovered. Sample protein concentrations were measured using the BCA protein assay (Pierce, Rockford, IL, USA). After quantification, samples that were lysed in buffer without 2-mercaptoethanol were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis with 8% Tris-glycine acrylamide gels that contained 0.5% gelatin (Sigma-Aldrich, St. Louis, MO, USA) under nondenaturing and nonreducing conditions. The gels were washed twice for 30 min each in 2.5% Triton X-100, incubated in zymography buffer (50 mM Tris [pH 7.5], 10 mM CaCl2, 1 μM ZnCl2, 1% Triton X-100, and 0.01% sodium azide) for 5-7 days, and stained with 0.5% Coomassie blue G-250 (Sigma-Aldrich, St. Louis, MO, USA). The optical density (intensity) of white bands on a blue background corresponded to MMP-9 levels and was quantified with the GeneTool program. For MMP-9 level comparisons between tissue and blood, we collected venous serum samples from CCI and sham animals. Blood was collected on 3.2% sodium citrate and centrifuged at 13,000×g for 30 min at 4°C. Serum was then collected and frozen at − 80°C for further analysis.

Figure caption (fragment): The changes were measured as a ratio of lesion volume to the somatosensory cortex (cx) area (%). The data are expressed as mean ± SD. **p < 0.01, ***p < 0.001, ****p < 0.0001 (one-way ANOVA followed by Tukey post hoc test)
Neuroscore Test Following TBI
After CCI, the mice were subjected to behavioral testing to assess their level of motor and cognitive functions. The assessment of motor function was performed using the neuroscore test as previously described by Scherbel et al. [24]. The test began 1 day before TBI to obtain a baseline score and was repeated on days 2, 7, and 14 following TBI. The 20-point composite neuroscore was derived from the sum score of three tests: forelimb and hindlimb flexion tests (0-4 score for each individual forelimb/hindlimb, maximum score = 8 per test) and the angle board test (0-4 score, maximum score = 4). In the forelimb flexion test, forelimb coordination and grip strength were evaluated when the mouse was suspended from its tail over a metal cage top and then lowered to allow it to grasp the cage bars with both forelimbs. One point was deducted from the forelimb score if the mouse exhibited a reduction of grip strength, crossed its forelimbs, or presented excessive hyperactivity and limb spasms. In the hindlimb flexion test, abnormal limb extension or toe splaying each resulted in a 1-point deduction, whereas three points were deducted if the mouse curled its hindlimb up to its body. The mice were then tested in the angle board test to assess their ability to stand on an inclined plane that was covered with a vertically grooved rubber mat. On the first test day (i.e., 1 day before TBI), the board was inclined at a 40° angle. The mouse was first placed on the board with its head upward, then to the left, then to the right, and finally downward. The mouse had to stay on the board for 5 s without holding on by its tail. The angle of the board was increased in 2.5° increments until the mouse could no longer stand on the board. After TBI, the tests were started at a 10° inclination below the animals' baseline value, and scores (0-4) were assigned for each direction. A 2.5° decrease from the baseline angle resulted in a 1-point reduction of the angle board score.
The neuroscore was used to calculate two additional parameters. ΔImpairment (Δi) refers to the difference in neuroscore between the baseline and day 2 post-TBI. ΔRecovery (Δr) was calculated by subtracting the neuroscore on day 2 post-TBI from the neuroscore on day 14 post-TBI.
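The two derived parameters reduce to simple differences of the composite scores; a minimal sketch following the definitions above (the function names and example scores are illustrative, not taken from the study's records):

```python
def delta_impairment(baseline_score: float, day2_score: float) -> float:
    # Δi: points lost between the baseline score (day -1) and day 2 post-TBI
    return baseline_score - day2_score

def delta_recovery(day2_score: float, day14_score: float) -> float:
    # Δr: points regained between day 2 and day 14 post-TBI
    return day14_score - day2_score

# Hypothetical mouse: 20 points at baseline, 14 on day 2, 19 on day 14
print(delta_impairment(20, 14))  # 6
print(delta_recovery(14, 19))    # 5
```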
Intracranial Electrode vEEG Monitoring of Epileptiform Activity
Four stainless-steel screw electrodes (1.6 mm ∅, Bilaney Consultants GmbH, Dusseldorf, Germany) were implanted 10 weeks post-TBI. One recording electrode was placed ipsilaterally, rostromedial to the craniectomy. Another recording electrode was placed contralaterally to the region that corresponded to the center of the craniectomy. A reference electrode was positioned above the contralateral frontal cortex. A ground electrode was placed in the occipital bone over the cerebellum (Fig. 5a1). Two weeks of continuous (24 h/day, 7 days/week) video-EEG (vEEG) monitoring began 12 weeks post-TBI [25]. The mice were placed in Plexiglas cages (one mouse per cage) and connected to the recording system with commutators (SL6C, Plastics One, Roanoke, VA, USA). vEEG was performed using the Twin EEG recording system that was connected to a Comet EEG PLUS with an AS40-PLUS 57-channel amplifier (Natus Medical, Pleasanton, CA, USA) and filtered (high-pass filter cut-off 0.3 Hz, low-pass filter cut-off 100 Hz). The animals' behavior was recorded using an I-PRO WV-SC385 digital camera (Panasonic, Osaka, Japan). As outcome measures, we assessed the occurrence, frequency, and duration of spontaneous seizures. An electroencephalographic seizure was defined as a high-amplitude (> 2 times baseline) rhythmic discharge that clearly represented an abnormal EEG pattern and lasted > 5 s. The frequency of seizures for each mouse was calculated as the number of seizures per completed EEG recording day or per week. The modified 0-5 point Racine scale was used: 0 (electrographic seizure without any detectable motor manifestation), 1 (mouth and face clonus, head nodding), 2 (clonic jerks of one forelimb), 3 (bilateral forelimb clonus), 4 (forelimb clonus and rearing), and 5 (forelimb clonus with rearing and falling) [26]. The mice were monitored for the appearance of spontaneous seizures during the monitoring period.
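The seizure definition and frequency measures above amount to a few checks; a minimal sketch using the thresholds stated in the text (the amplitudes in the example are placeholders, not recorded values):

```python
def is_electrographic_seizure(peak_amplitude: float,
                              baseline_amplitude: float,
                              duration_s: float) -> bool:
    # Definition used in the text: rhythmic discharge with amplitude
    # > 2x baseline that lasts longer than 5 s
    return peak_amplitude > 2 * baseline_amplitude and duration_s > 5

def seizure_frequency(n_seizures: int, n_recording_days: int) -> float:
    # Seizures per completed EEG recording day
    return n_seizures / n_recording_days

print(is_electrographic_seizure(250, 100, 12))  # True
print(is_electrographic_seizure(150, 100, 12))  # False
print(seizure_frequency(7, 14))                 # 0.5
```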
Mice with modified levels of MMP-9 were used and randomly assigned to the following groups: MMP-9 KO mice (n = 15), WT littermates of MMP-9 KO mice (n = 10), MMP-9 OE mice (n = 8), and WT littermates of MMP-9 OE mice (n = 6).
Pentylenetetrazol Threshold Test
After 2 weeks of vEEG recording (14 weeks after CCI), the mice were examined for pentylenetetrazol (PTZ)-induced seizure susceptibility. The animals were randomly assigned to groups. The first part of the experiment included C57BL/6J mice, 14 weeks after CCI (n = 13), and sham animals (n = 5). The second part of the experiment included mice with modified levels of MMP-9: MMP-9 KO mice (n = 15), WT littermates of MMP-9 KO mice (n = 10), MMP-9 OE mice (n = 8), and WT littermates of MMP-9 OE mice (n = 6). To test for seizure susceptibility at 14 weeks post-CCI (in the C57BL/6J and sham groups) or on the last day of vEEG recording (MMP-9 KO, MMP-9 WT, MMP-9 OE, and MMP-9 WT-OE groups), the animals were intraperitoneally injected with a subconvulsant dose of PTZ (30 mg/kg; Sigma-Aldrich, St. Louis, MO, USA) [27]. Immediately following the single injection of PTZ, the mice were connected to the monitoring system and observed for 60 min. The latency (in seconds) between the injection and the first epileptiform discharge (Fig. 7a-b) was recorded.

After staining the coronal sections with Cresyl violet, photographs were taken with a Nikon Eclipse Ni light microscope with a PlanApo 2× 0.1 objective using ImagePro Plus 7.0 software. The histological lesion area was quantified with ImageJ software and is presented as a ratio between the lesion area and the somatosensory cortex. To confirm the functionality of the model, gel zymography was performed in unstimulated (naive) and stimulated (24 h after brain injury) animals to evaluate MMP-9 levels in mice with modified genotypes.
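The lesion quantification described above is a per-section area ratio; a minimal sketch of that computation (the area values are placeholders, not measurements from this study):

```python
def lesion_to_cortex_ratio(lesion_areas, cortex_areas):
    """Lesion area expressed as % of the somatosensory cortex area,
    averaged over the analyzed sections."""
    ratios = [100.0 * lesion / cortex
              for lesion, cortex in zip(lesion_areas, cortex_areas)]
    return sum(ratios) / len(ratios)

# Placeholder per-section areas (arbitrary units)
print(lesion_to_cortex_ratio([1.2, 1.6], [4.0, 4.0]))  # 35.0
```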
Statistical Analyses
The statistical analysis was performed using GraphPad Prism 6.0 software. Dynamic cortex degeneration upon CCI and differences in lesion area were compared using one-way analysis of variance (ANOVA) followed by Tukey's multiplecomparison test. The level of active MMP-9 was analyzed using two-way ANOVA followed by the Sidak post hoc test. Neuroscores were analyzed using the nonparametric Mann-Whitney U test. Differences between injured animals and sham animals in PTZ-induced seizure thresholds were analyzed using the nonparametric Mann-Whitney U test or oneway ANOVA followed by the Sidak post hoc test. Differences in epileptiform activity 3.5 months post-CCI between genotypes were analyzed using one-way ANOVA followed by the Sidak post hoc test. Values of p < 0.05 were considered statistically significant.
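The core comparisons above rest on the one-way ANOVA F statistic and the Mann-Whitney U statistic. A minimal, dependency-free sketch of both is shown below; it is illustrative only (the actual analyses were run in GraphPad Prism, and post hoc corrections and p values are omitted):

```python
def one_way_f(*groups):
    # F = (between-group mean square) / (within-group mean square)
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def mann_whitney_u(x, y):
    # U statistic: count of (x, y) pairs where x exceeds y,
    # ties counting 0.5; no tie correction, illustrative only
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

print(one_way_f([1, 2, 3], [4, 5, 6]))       # 13.5
print(mann_whitney_u([5, 6, 7], [1, 2, 3]))  # 9.0
```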
Results

Time-Dependent Somatosensory Cortex Degeneration and Long-Term Motor Function After CCI-Induced TBI
As a model of TBI, we used CCI in mice [28]. To characterize CCI-induced brain injury, brains were collected at different time-points after the insult (1, 7, 14, and 30 days). Sham-operated control animals were subjected to craniectomy only. Massive cerebral cortex degeneration was observed in the injured area beginning on day 1 post-CCI (32% cortical loss). The level of degeneration reached a peak after 14 days (64% cortical loss) and stabilized over the next 2 weeks (58% cortical loss) compared with the contralateral side (Fig. 2c-f). The sham groups exhibited only minor cortical changes compared with CCI animals (1 day, p = 0.018; 7 days, p = 0.0027; 14 days, p < 0.0001; 30 days, p < 0.0001; Fig. 2a, d, f).
MMP-9 Is Upregulated After TBI
To characterize the temporal profile of MMP-9 levels after TBI, brain tissue samples were collected from CCI- and sham-operated animals at 10 min, 30 min, 60 min, 2 h, 6 h, 1 day, 3 days, 7 days, 14 days, and 30 days. The somatosensory cortex and hippocampus were collected from the injured hemisphere (ipsilateral) and contralateral hemisphere separately. Venous blood was collected to measure plasma MMP-9 levels. Gel zymography was performed. Traumatic brain injury that was induced by CCI significantly increased MMP-9 levels in the ipsilateral somatosensory cortex 30 min after CCI (p < 0.05; Fig. 3a). MMP-9 levels then decreased slightly 2 h after CCI and rose again 6 h after CCI, with a two-fold increase compared with sham-operated animals (p < 0.0001; Fig. 3a). Interestingly, MMP-9 levels also increased in sham-operated animals 6 h post-CCI and then began to decrease. However, after 1 day, MMP-9 levels were still elevated (Fig. 3a). Sustained MMP-9 levels were observed over the next 2 weeks (i.e., between days 3 and 14 post-TBI; Sidak post hoc test; time factor: F 9,36 = 49.66, p < 0.0001; injury factor: F 1,4 = 96.31, p = 0.0006; Fig. 3a).
A similar but weaker effect was observed in the hippocampus, where MMP-9 levels were elevated after 30 min (p < 0.05; Fig. 3b). Peak MMP-9 levels were observed after 6 h (five-fold increase compared with the sham-operated group; p < 0.0001; Fig. 3b), and they decreased over the following days. Interestingly, we found significant heterogeneity between animals in the CCI group at the 6 h time-point. In the contralateral hemisphere, MMP-9 was nearly undetectable (Fig. 3a, b). In sham-operated animals, craniectomy alone did not increase MMP-9 levels in the hippocampus (Sidak post hoc test; time factor: F 9,36 = 11.72, p < 0.0001; injury factor: F 1,4 = 17.78, p = 0.0135).
No significant changes in plasma MMP-9 levels upon brain injury (Fig. 3c) were found between the CCI, sham, and naive groups (without any manipulations).

Fig. 3 Time-dependent MMP-9 levels in the cerebral cortex and hippocampus after controlled cortical impact (CCI). a, b Gel zymography from the ipsi- and contralateral cortex (a) and hippocampus (b) performed 10 min, 30 min, 60 min, 2 h, 6 h, 1 day, 3 days, 7 days, 14 days, and 30 days post-CCI. CCI, mice after CCI; sham, mice after craniectomy (without brain damage). c Gel zymography of venous blood plasma collected from animals after CCI and sham animals at different time-points. Representative zymograms are shown. For zymograms, each time-point group consisted of three CCI animals and three sham animals. MMP-9 levels were measured for each sample separately. The data are expressed as mean ± SD. *p < 0.05, **p < 0.01, ****p < 0.0001 (two-way ANOVA followed by Sidak post hoc test)
MMP-9 Levels in Genetically Modified Animals
To investigate the functional role of MMP-9 in the development of PTE after TBI, we used MMP-9 KO and MMP-9 OE mice. Each genotype had its own WT littermate control group. We assessed MMP-9 levels by gel zymography. We also analyzed MMP-9 levels before and after the brain insult (24 h post-CCI; Fig. 4a; n = 4/group). Basal activity in the ipsilateral cortex in naive animals was detected only in MMP-9 OE animals. No gelatinolytic MMP-9 activity was observed in the MMP-9 KO, MMP-9 WT, or MMP-9 WT-OE group (Fig. 4a). The CCI increased endogenous MMP-9 levels in MMP-9 WT, MMP-9 OE, and MMP-9 WT-OE groups. No changes in MMP-9 levels were observed in the MMP-9 KO group (Fig. 4b). After stimulation in the MMP-9 OE group, we observed MMP-9 of both mouse origin (92 kDa) and MMP-9 that derives from the human pro-MMP-9 genomic insert (lower band in Fig. 4b).
No Genotype-Dependent Differences in Motor Activity After Brain Injury
To clarify the experimental results, we employed a study design that was based on Miszczuk et al. [29] (Fig. 1). Following TBI, the mice were subjected to behavioral testing to assess their level of motor function. Animals that were subjected to CCI were tested for motor recovery, based on neuroscores, 1 day before and 2, 7, and 14 days post-injury [18]. The animals were put on an angled platform, and motor performance was tested, based on the angle of inclination. The mice received scores for stability in four directions (head up, down, left, and right). All of the scores were compared with basal scores that were recorded 1 day before the injury. Motor recovery in CCI mice was impaired compared with the sham groups 2 days after CCI ( Fig. 5b1; p = 0.0133), and then all animals fully recovered 7 days after CCI. No significant differences in Δi (difference between baseline on day − 1 and day 2 post-TBI; p = 0.2) or Δr (recovery; difference between the scores on days 2 and 14 post-TBI; p = 0.2) were observed between the CCI and sham groups.
To verify the effect of genotype, an analogous analysis was performed in animals with mmp-9 gene modification (Fig. 5b2, b3). We did not observe any significant differences between MMP-9 KO mice and MMP-9 WT mice after CCI (day 2 post-CCI; p = 0.2487), with no significant differences in Δi (p = 0.1907) or Δr (p = 0.0985). Similarly, no differences were observed between MMP-9 OE mice and MMP-9 WT-OE mice after CCI (day 2 post-CCI; p = 0.333), with no significant differences in Δi (p = 0.333) or Δr (p = 0.2667). Motor impairment was present in MMP-9 KO, MMP-9 WT, MMP-9 OE, and MMP-9 WT-OE mice during the first 24 h after injury, and motor activity returned to basal levels within the next 6 days.
MMP-9 Contributes to the Development of Epilepsy
Ten weeks after CCI, the mice were implanted with electrodes (Fig. 5a1). After 1 week of recovery, vEEG monitoring was performed for 2 weeks. The number of epileptic animals and the number of seizures per day and per week were recorded. MMP-9 deficiency protected animals against the development of spontaneous seizures. Only 6% (1/15) of MMP-9 KO mice exhibited spontaneous seizures compared with MMP-9 WT mice (10%; 1/10), whereas 62% (5/8) of MMP-9 OE mice exhibited the epileptic phenotype (Fig. 6a). Among the animals that developed spontaneous seizures, the number of seizures per week of recording differed between MMP-9 KO and MMP-9 OE mice. MMP-9 KO mice had fewer seizures per week (p = 0.0017) compared with MMP-9 OE mice (Sidak post hoc test; genotype effect: F 3,35 = 4.351, p = 0.0105) (Fig. 6b). Similarly, MMP-9 KO mice exhibited fewer seizures per day (p = 0.0017) than MMP-9 OE mice (Sidak post hoc test; genotype effect: F 3,35 = 4.124, p = 0.0132) (Fig. 6a).

Fig. 5 caption (continued): a2 Example of a spontaneous seizure in the ipsilateral perilesional cortex (CxL) that lasted > 50 s (behavioral score = 5), preceded by epileptiform discharges (*) that lasted < 2 s. b1-3 Assessment of somatomotor performance, indicated by neuroscores, in mice with modification of mmp-9 gene expression. Δi, impairment (difference between baseline on day − 1 and day 2 post-TBI); Δr, recovery (difference between scores on days 2 and 14 post-TBI). The data are expressed as mean ± SD. Changes between groups were assessed each day separately. *p < 0.05 (nonparametric Mann-Whitney test).

Fig. 6 Epileptiform activity in MMP-9 KO mice and MMP-9 OE mice during the 3.5 months of follow-up after controlled cortical impact-induced traumatic brain injury. Data from the vEEG recordings were collected between weeks 12 and 14 after injury. a Percentage of epileptic animals that developed a minimum of one spontaneous seizure per 2 weeks of recordings. b Number of seizures per day. c Number of seizures per week. The data are expressed as mean ± SD. *p < 0.05, **p < 0.01 (one-way ANOVA followed by Sidak post hoc test)
MMP-9-Dependent Seizure Susceptibility Evoked by Single Subconvulsant Dose of Pentylenetetrazol
On the last day of vEEG, the mice received an intraperitoneal injection of a subconvulsant dose of PTZ (30 mg/kg). We first evaluated the influence of TBI on PTZ-induced seizure susceptibility. We used C57BL/6J control animals after CCI and sham-operated littermates and injected PTZ 12 weeks post-CCI/sham surgery. We observed a significantly shorter latency between the PTZ injection and the first epileptiform discharge in CCI animals compared with sham-operated animals (p = 0.0017; Fig. 7a) and a higher occurrence of seizures (46%).
To evaluate the effect of MMP-9 on PTZ-induced seizure susceptibility, we used MMP-9 KO and MMP-9 OE mice. The latency between the PTZ injection and the first epileptiform discharge was significantly longer in MMP-9 KO mice compared with MMP-9 WT mice (p = 0.0022; Fig. 7b) and MMP-9 OE mice (p < 0.0001; Fig. 7b). The occurrence of seizures in MMP-9 KO mice was significantly less frequent (15%) compared with MMP-9 OE mice (100%). Pentylenetetrazol-induced seizures were observed in 54% of MMP-9 WT mice and 50% of MMP-9 WT-OE mice. Mice with higher levels of MMP-9 had a higher mortality rate after the PTZ injection.
Post-CCI Lesion Volume Is Dependent on MMP-9 Levels
Finally, we correlated MMP-9 levels with lesion severity. Fourteen weeks after CCI, brain tissue was collected, and sections were stained with Nissl staining to assess the size of the lesion after injury. MMP-9 KO mice had a smaller lesion volume within the injury area compared with MMP-9 WT mice (p < 0.01; Fig. 8a, b); MMP-9 OE mice had a significantly greater lesion volume than MMP-9 WT-OE mice (p < 0.001; Fig. 8a, b). The lesion volume in MMP-9 KO mice was significantly smaller compared with MMP-9 OE mice (genotype effect: F 3,16 = 31.88, p < 0.0001).
Discussion
In the present study, TBI significantly increased MMP-9 levels in the perilesional cortex and ipsilateral hippocampus 30 min, 6 h, and 24 h post-injury. Elevations of MMP-9 levels augmented both the susceptibility to PTZ-induced seizures and the occurrence of spontaneous seizures. Deficiency of the active form of MMP-9 in MMP-9 KO mice reduced epileptogenesis. We also found that MMP-9 KO mice had smaller post-TBI cortical lesion volumes, whereas MMP-9 OE mice had greater lesion volumes in the cerebral cortex.
Previous studies of time-dependent increases in MMP-9 levels post-injury focused on the events that occurred 1-7 days after the induction of brain damage [11,12,30]. However, little is known about MMP-9 levels during the acute phase after TBI (i.e., within the first 24 h) and during the chronic phase up to several weeks post-TBI when epileptic seizures might start to occur. Therefore, the present study extended this time-course by measuring MMP-9 levels both during the acute phase post-CCI throughout the first 24 h and during the chronic phase up to 30 days post-injury. Brain injury increased active MMP-9 levels in the cerebral cortex and hippocampus within the first hour post-CCI. This increase was followed by a decrease in MMP-9 levels after 1-2 h and then a marked increase in MMP-9 levels between 6 and 24 h after the cortical insult. This time-course of elevations of MMP-9 levels was observed most prominently in the cerebral cortex that surrounded the injury and to a lesser extent in the ipsilateral hippocampus. The dynamic changes in MMP-9 can be explained by complex and multi-stage mechanisms of MMP-9 expression and activation. Neuronal MMP-9 can be produced within minutes after excitatory stimulation through local dendritic/synaptic translation from the preexisting pool of mRNA [31]. Within 2 h following stimulation, transcription-dependent MMP-9 accumulation can occur [32]. Under conditions of sustained stimulation, as previously demonstrated following kainate treatment, various molecular mechanisms of MMP-9 production and activation overlap, resulting in a massive and prolonged increase in the levels of the enzyme and its activity [23]. Similar phenomena of sustained MMP-9 expression, translation, release, and activation may occur around the injury site after TBI (i.e., within the surrounding cerebral cortex and hippocampus), with the involvement of neurons, glia, and invading leukocytes as cellular sources of MMP-9.
Interestingly, we did not observe any changes in gelatinolytic MMP-9 levels in blood serum collected from injured, sham, and naive animals. This contrasts with human studies, in which high MMP-9 and MMP-2 levels were detected acutely post-injury in blood plasma as well as in brain extracellular and cerebrospinal fluids in adult patients with moderate to severe TBI [33]. In humans, high MMP-9 levels were associated with poorer outcomes, including a longer stay in the intensive care unit and a greater risk of mortality [33]. Severe TBI in humans leads to long-term cognitive and motor dysfunction [34]. Severe injury may cause MMP-9 release from other cells, and such higher levels of MMP-9 may then be detectable in the blood of patients. In the present study, neuroscores were used to assess motor activity in mice after CCI; the deficits were relatively minor, and the animals' performance in the test quickly returned to basal levels. Therefore, our model appears to produce less severe TBI than that reported in the aforementioned human study, and thus a smaller elevation of MMP-9 levels.
The involvement of MMP-9 in epileptogenesis has been previously reported [18,[35][36][37]. However, to date, the role of MMP-9 in PTE that is evoked by TBI has not been investigated. Therefore, we used animals with genetic modifications of MMP-9 levels. We used MMP-9 KO mice that had no functional MMP-9 and mice with MMP-9 overexpression in the brain. Pentylenetetrazol-induced seizure susceptibility was evaluated in injured animals. The latency between the injection of a subconvulsant dose of PTZ and the first epileptiform discharge confirmed a positive correlation between TBI and seizure susceptibility in the CCI animal model, as previously demonstrated by Bolkvadze and Pitkanen [28]. We also found a strong correlation between MMP-9 genotype and PTZ-induced seizure susceptibility. The latency between the PTZ injection and the first epileptiform discharge was the shortest in MMP-9 OE mice and longest in MMP-9 KO mice.
Continuous vEEG recording for 2 weeks further implicated MMP-9 in the development of PTE after CCI. Approximately 10% of MMP-9 WT mice developed PTE after CCI, and this percentage markedly increased in MMP-9 OE animals (50%) and decreased in MMP-9 KO animals, in which only 6% (1/15) developed seizures. Therefore, we demonstrated a significant functional role for MMP-9 in post-TBI processes that lead to the development of epilepsy. MMP-9 may contribute to epilepsy through mechanisms that involve synaptic plasticity, neuroinflammation, and blood-brain barrier disruption [5,38,39].

Fig. 7 Post-CCI neuronal excitability in the pentylenetetrazol threshold test. A subconvulsant dose of PTZ (30 mg/kg) was injected on the last day of vEEG recording (14 weeks post-CCI). a, b Latency between the PTZ injection and first epileptiform discharge measured in C57BL/6J and sham mice (a) and mice with modification of MMP-9 levels (b). The table presents the latency (in seconds), seizure occurrence (% of animals), and mortality (%). The data are expressed as mean ± SD. **p < 0.01, ****p < 0.0001 (nonparametric Mann-Whitney test for C57BL/6J animals, one-way ANOVA followed by Sidak post hoc test for animals with modification of mmp-9 gene expression)
In the present study, higher MMP-9 levels were associated with a higher prevalence of PTE after brain injury. Moreover, the occurrence of PTE correlated with the lesion area after injury. We suggest that the extent of structural and physiological changes in brain circuitry that occur after injury contribute to epileptogenesis. Notably, structural and functional changes that occur within the dentate gyrus and CA1 field of the hippocampus (mossy fiber sprouting, CA1 degeneration) have been strongly and functionally implicated in different animal models of epilepsy [40][41][42][43]. One unresolved issue is whether these changes are related to increases in MMP-9 levels in the hippocampus that are induced by different stimuli, such as TBI. Only 10% of mice developed spontaneous seizures upon TBI, which may be related to the findings in the hippocampus, in which individual differences in MMP-9 levels reflected high heterogeneity in the injured animal group. Importantly, the number of animals that develop spontaneous recurring seizures in this model strongly depends on the strain. Approximately 20% of CD-1 mice develop seizures after TBI [44], whereas only approximately 10% of C57BL/6J mice develop seizures weeks after brain trauma [28].
In a large number of TBI patients, MMP-9 levels increase, which is directly related to physiological inflammatory and repair processes that occur in the injured area and surrounding brain structures. In approximately 20% of TBI patients, MMP-9 levels exceed the threshold of physiological repair, and TBI-induced reorganization becomes excessive, leading to irreversible changes and aberrant rewiring that are the basis of epileptogenesis. Future studies should investigate whether early interventions to lower post-injury MMP-9 levels can prevent pathophysiological processes and reduce the risk of developing epilepsy.

Fig. 8 caption (fragment): Lesion volume in mice with modification of mmp-9 gene expression. The brain sections were analyzed using ImageJ software. The lesion area is presented relative to the somatosensory cortex area. The data are expressed as mean ± SD. **p < 0.01, ***p < 0.001, ****p < 0.0001 (one-way ANOVA followed by Tukey post hoc test)
Productivity of potato seed submitted to different doses of potassium in hydroponic system
The potato is one of the most economically important crops in Brazil, and seed potatoes are among the largest production costs. The deficiency of one nutrient can interfere with the absorption and accumulation of the others in plants. The aim of this work was to quantify the optimal potassium (K) dose for minituber basic seed potato yield in a hydroponic system. The experiment was installed in a greenhouse with minitubers of the Agata cultivar. The treatments consisted of five doses of K (0.0, 2.5, 5.0, 7.5, and 10.0 mmol L-1) with four repetitions, in a randomized block design. Data were submitted to analysis of variance and regression. The number of tubers, fresh mass, size classification, and dry mass were measured. The contents of K and of the other nutrients in the seed potato tubers were also evaluated. In the hydroponic system, the maximum yield per plant was 48.41 tubers, obtained with 6.15 mmol L-1 of K, and the maximum fresh matter mass was 646.6 g.
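The reported optimal dose corresponds to the vertex of a quadratic dose-response curve fitted by regression; a minimal sketch of how an optimum follows from fitted coefficients (the coefficients below are illustrative placeholders, not the study's fitted values):

```python
def quadratic_optimum(a: float, b: float, c: float):
    """For y = a*x**2 + b*x + c with a < 0, the maximum lies at x = -b/(2a)."""
    x_opt = -b / (2 * a)
    y_max = a * x_opt ** 2 + b * x_opt + c
    return x_opt, y_max

# Illustrative coefficients only (not fitted to the study's data)
x_opt, y_max = quadratic_optimum(-2.0, 8.0, 40.0)
print(x_opt, y_max)  # 2.0 48.0
```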
INTRODUCTION
The production of potato (Solanum tuberosum L.) seed is a challenging step in the process of potato production in Brazil. A tuber of any size can be marketed as potato seed, but the term potato minituber is generalized in the seed segment of the basic category, although it is not mentioned in the norms and standards for the production and marketing of potato propagation material in Brazil (MAPA, 2012). Regardless of the system used in the production of basic seed potatoes, it is almost always necessary to add potassium (K) to the medium. K deficiency may lead to reduced potato leaf dry matter accumulation (Cao and Tibbits, 1991), emergence and growth retardation, early senescence, and dark green leaves (Chapman et al., 1992), followed by necrosis, thinner stems, shorter internodes, curved appearance and wilted foliage.
Several studies have evaluated the effect of potassium fertilization on potato crop productivity in the field, almost always indicating the need to apply K to obtain high productivity and quality of the tubers (Zorb et al., 2014; Silva and Fontes, 2016) and to increase the number of tubers (Mohr and Tomasiewicz, 2012; Singh and Lal, 2012), an essential characteristic in the production of minitubers of the basic category. There is little information on the multiplication of Agata minitubers, currently the main cultivar planted in Brazil.
For the correct management of fertilization, nutrient uptake and export studies are required to aid in fertilization programs, with the purpose of optimizing tuber production and reducing the excessive use of fertilizers (Cabalceta et al., 2005;Zobiole et al., 2010). According to Sancho (1999) and Bertsch (2003), nutrient extraction depends on external factors, which are related to the growing environment, but also on internal factors such as genetic potential and plant age.
The low productivity of potato crops is often related to nutritional limitations (Queiroz et al., 2014), and these limitations can begin when there is an imbalance of the nutrients accumulated in the seed tubers. Thus, a balance of nutrient concentrations in the tuber is essential, mainly for the production of seed tubers, since during the initial stage of development, from planting of the sprouted seed potato until emergence of the main stems, the plant uses the nutrient reserve of the mother tuber, as the root system has not yet developed (Mesquita et al., 2012).
The aim of this work was to quantify the optimal dose of K for the productivity of seed potato minitubers in a hydroponic system.
MATERIAL AND METHODS
The experiment was conducted in a greenhouse of the Plant Mineral Nutrition Laboratory of the Plant Science Department of the Universidade Federal de Viçosa (UFV), Viçosa, MG, Brazil, between August and November 2012. The propagating material was minitubers of the Agata cultivar, basic category G0, type VI (13-16 mm cross-sectional diameter; IMA, 2003), naturally sprouted with shoots approximately 0.5 cm long.
The treatments consisted of nutrient solutions with five doses of K (0.0, 2.5, 5.0, 7.5 and 10.0 mmol L-1). The experimental design was completely randomized, with four repetitions. Each experimental unit consisted of a vessel with one plant. The sources of K were potassium chloride (KCl, with 60% K2O) and potassium nitrate (KNO3, with 44% K2O), and the doses used were determined according to a modified Furlani (1998) solution. Irrigation was automated, controlled by a digital timer programmed to activate the electric pump for two minutes at 7:00 a.m., 10:00 a.m., 12:00 p.m., 3:00 p.m. and 7:00 p.m.
The plants emerged four to five days after planting. Until emergence, the minitubers were irrigated with deionized water. After emergence, irrigation was done with half the nutrient concentration described for each treatment, and after seven days the solution with the full macro- and micronutrient concentration was provided for each treatment. The different concentrations of K for each treatment are shown in Table 1. NaNO3 was added so that the loads of anions and cations were in equilibrium in the nutrient solution (Martinez and Silva Filho, 2012). The pH of the nutrient solution was monitored and adjusted to 5.5 ± 0.5 with 1 mol L-1 HCl or 1 mol L-1 NaOH once a day. The solution was exchanged whenever the initial electrical conductivity had decreased by 30%.
The monthly distribution of the average temperature during the conduction of the experiment was obtained by daily measurements, with minima of 13.6, 14.2, 17.3 and 20.8 °C and maxima of 38.1, 40.5, 42.7 and 48.5 °C in August, September, October and November, respectively. Plants grown in hydroponics had a cycle of 81 days. After senescence of the shoots, the tubers were harvested, washed, evaluated for number (NT) and fresh mass (FMT), and classified. The tubers were classified according to the transverse diameter, following the original proposition of IMA (2003). The classes and their diameters were: 0 (above 60 mm); I (50 to 60 mm); II (40 to 50 mm); III (30 to 40 mm); IV (23 to 30 mm); V (16 to 23 mm); VI (13 to 16 mm); VII (10 to 13 mm) and VIII (less than 10 mm).
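The size classification above maps a transverse diameter onto a class label; a minimal sketch follows. The assignment of diameters falling exactly on a class boundary is an assumption, since the text does not state which class the limits belong to.

```python
def classify_tuber(diameter_mm):
    """Return the IMA (2003) size class for a tuber of the given
    transverse diameter (mm), as listed in the text.
    Boundary values are assumed to belong to the larger class."""
    thresholds = [
        (60, "0"), (50, "I"), (40, "II"), (30, "III"),
        (23, "IV"), (16, "V"), (13, "VI"), (10, "VII"),
    ]
    for lower_limit, label in thresholds:
        if diameter_mm >= lower_limit:
            return label
    return "VIII"  # less than 10 mm

print(classify_tuber(14))  # a 14 mm tuber falls in class VI (13-16 mm)
```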
After harvesting the tubers, the plants were separated into leaves, stems and roots. They were then packed in paper bags and placed in a forced-circulation oven at 70 ºC until they reached constant mass, when the dry matter mass was determined.
Rotted tubers, tubers attacked by pests and diseases, and tubers with defects (greening, smudging or cracking) were also counted. The tubers were then cut into small pieces, placed in Petri dishes and left on the laboratory bench for partial drying. Subsequently, the samples were placed in a forced-circulation oven at 70 ºC, where they remained until reaching constant weight, when the tuber dry matter mass (DMT) was determined.
After drying, the tubers were milled in a Wiley-type mill equipped with a 20-mesh screen, submitted to nitroperchloric digestion and analyzed for K and P contents by flame emission photometry (Braga and Defelipo, 1974), and for Ca, Mg and S by atomic absorption spectrophotometry (Blanchar et al., 1965). The contents of K, P, Ca, Mg and S were obtained by multiplying the dry mass of the tubers by the respective nutrient concentrations in the tubers. Data were submitted to analysis of variance and regression. The regression model was chosen based on biological significance and the significance of the regression coefficients, using the t test at up to 10% probability, and on the coefficient of determination (R² = SSregression/SStreatment).
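The coefficient of determination used for model selection can be sketched in a few lines; the observed and fitted values below are hypothetical, for illustration only:

```python
def r_squared(observed, fitted):
    """R^2 = SS(regression) / SS(treatment), as defined in the text."""
    mean = sum(observed) / len(observed)
    ss_reg = sum((f - mean) ** 2 for f in fitted)     # regression sum of squares
    ss_trat = sum((o - mean) ** 2 for o in observed)  # treatment sum of squares
    return ss_reg / ss_trat

# Hypothetical observed responses at the five K doses and the values
# predicted by a fitted regression model (illustration only).
obs = [20.0, 35.0, 46.0, 44.0, 30.0]
fit = [21.0, 36.5, 45.0, 43.0, 29.5]
r2 = r_squared(obs, fit)
```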
RESULTS
The plant development at 21 days after bud emergence and final tuber seed yield are shown in Fig. 1, according to the applied of K doses.
In the experiment, there was an effect of the doses of K on the number of tubers, represented by a quadratic equation (Fig. 2A). The dose of 6.15 mmol L-1 of K provided the maximum productivity of 48.41 minitubers plant-1.
The behavior of FMT versus doses of K was similar to that presented by NT, and the dose that allowed the maximum FMT, 646.6 g plant -1 , was 6.13 mmol L -1 of K (Fig. 2B).
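Both reported optima (6.15 and 6.13 mmol L-1) are vertices of fitted quadratics, obtained as -b1/(2·b2). The coefficients below are hypothetical, chosen only so the vertex falls at 6.15 mmol L-1; the actual fitted equations are given in the paper's figures and are not reproduced here.

```python
def quadratic_optimum(b2, b1, b0):
    """Vertex of y = b2*x**2 + b1*x + b0; a maximum when b2 < 0."""
    x_opt = -b1 / (2.0 * b2)
    y_opt = b2 * x_opt ** 2 + b1 * x_opt + b0
    return x_opt, y_opt

# Hypothetical coefficients for illustration only.
x_opt, y_opt = quadratic_optimum(-1.0, 12.3, 10.0)
print(x_opt)  # 6.15
```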
For the plants cultivated in hydroponics, there was an effect of the K doses on the tuber dry matter (DMT), leaf dry matter (DML) and stem dry matter (DMS) variables. The variables evaluated and the equations representing the relationship between K dose and these variables are in Table 2.
In the experiment in hydroponic system, 592 tubers were harvested. Of these, 55.4% were abnormal. Of these abnormal tubers, 5.24% were totally rotten; 2.53% had dark patches on the epidermis; 24.32% were sprouted; 22.47% were malformation and 0.84% had some type of defect. The highest proportion of abnormal tubers (73.47%) was observed in the treatment with 2.5 mmol L -1 K where of the 24.5 tubers produced per plant, 18 were abnormal.
There were tubers with larger diameters, but their percentage was low: 2.36% of type II, 7.77% of type III and 42.42% of types IV, V and VI. Additionally, there was a predominance of smaller tubers, with 11.99% classified as type VII and 35.64% as type VIII (Table 3). There was an effect of K dose on the phosphorus and calcium contents in the tubers, as well as on the potassium and sulfur contents. The variables evaluated and the equations representing the relationship between K dose and these variables are in Table 4.
There was a dose effect of K on sulfur in the experiment. The variables evaluated and the equations representative of the relationship between dose of K and these variables are in Table 5.
DISCUSSION
In the present experiment we chose to express the number of tubers produced, since the systems used are apparently more appropriate for the production of minituber potato seed of the highest-quality basic category, which is usually marketed in units. In the experiment, the response of NT to the doses of K increased until a maximum point, after which it started to decline.
The K doses were insufficient, optimal or deleterious, depending on the amount of K applied. The negative effect of high K doses on NT was possibly determined by the increase in the electrical conductivity of the solution of the medium and/or by a negative effect of the ion accompanying K.
The difference in the dose value of K that provides the maximum yield of tubers can be attributed to several factors, such as methodological differences, year, planting season, management, heterogeneity of the organic constituents of the substrate associated to the mineral fertilizer added by the manufacturer. Medeiros et al. (2002), in a hydroponic system experiment, were able to produce from 8.6 to 49.6 tubers plant -1 and the average fresh tuber mass of 3.3 to 15.4 g, depending on the cultivar, propagation material and hydroponics system, including different ionic concentrations in nutrient solutions, such as 183 and 298 mg L -1 of K, for example.
The results obtained in experiments, as well as those obtained by Medeiros et al. (2002), show a more qualified performance of these system (hydroponic system) in relation to seed potato production when compared to soil multiplication, which is 3 to 5 tubers per plant (Daniels et al., 2000). The superiority may be due to the lower incidence or absence of pathogens in the environment, a more appropriate control of plant nutrition, especially when nutrient solution is used, because through it is possible to maintain the concentration of nutrients near the roots and to make adjustments when necessary (Medeiros et al., 2002), to control the pH of the solution in bands that optimize nutrient uptake (Martinez and Alvarez, 1993), in addition to focusing on obtaining a larger number of tubers of lower mean mass.
For example, in the field, Bansal and Trehan (2011) obtained an increase only in tuber size, not in the number of potato tubers, with the application of K. Singh and Lal (2012), evaluating the behavior of N and K in a cultivated potato crop (39.83 t ha-1) fertilized with 225 and 150 kg ha-1 of N and K2O, respectively, observed that the two nutrients increased the production of larger tubers and decreased the yield of smaller tubers.
However, in the protected environment, there are few studies evaluating the effect of K doses on the production of basic seed potatoes. Usually, a single dose is used; for example, in the multiplication of minitubers in soil and in a greenhouse, Farran and Mingo-Castel (2007), using 0.75 g of K2O per plant, obtained 95 g plant-1 of fresh tuber mass and 4.5 tubers plant-1.
In hydroponics, using organic substrate and seedlings originating from tissue culture, Favoretto (2005) obtained 6.7 minitubers with 23 mm of largest diameter and fresh mass of 16.10 g, harvested 53 days after transplanting. Such values correspond to a total productivity of 469 minitubers and 1127 g m-2 of fresh mass. In another study, the total quantities of K made available to each potato plant through the nutrient solutions during the experimental period were 912, 1434, 1694, 2085 and 2476 mg plant-1, respectively. In each bag a tuber of the Asterix cultivar was planted, at a density of 4.4 pots m-2, and the experiment was closed 73 days after planting. That study showed no significant effects of the addition of K on the number of commercial and total tubers (7 and 16, respectively) or on the fresh mass of tubers, 755 g plant-1 (Cogo et al., 2006). Precise control of the number and size of potato tubers still remains a challenge for researchers (Levy and Veilleux, 2007).
The tuber is a stem and presents epidermis, periderm, cortex (50%), vascular rings (30%) and medulla (Fontes, 2005). When immature, it sheds its skin, dehydrates easily and is easily damaged and penetrated by microorganisms. The occurrence of defective tubers in the experiment can be attributed in part to the high temperature inside the greenhouse during the period in which the experiment was conducted. The temperature determines the occurrence of physiological disturbances and the growth rate of most of the pathogens that cause rot in the tubers.
In the potato, there are variations in the amount of nutrients accumulated by the tubers, influenced mainly by factors related to dry matter production and partitioning to them. It may also be related to the availability of an element to the plant. In the present work, K interfered with the absorption and accumulation of other elements such as P, Ca, Mg and S.
Balanced concentrations of nutrients in the potato tuber are important for good initial development of the plant and may affect its future productivity, since the potato plant uses the reserves accumulated in the seed tubers for its initial growth, root emission and stems. Fernandes et al. (2011) found that, due to the reduction of dry mass and remobilization of nutrients to the growing regions, the amounts of N, P, K, Mg and S accumulated in the seed tubers decreased. Among the nutrients accumulated by the potato, Malavolta and Dantas (1980) found that 80 to 94% of P, 68 to 74% of N, 32.5 to 57.8% of S, 25.5% of Mg, 19 to 20% of K and 2.8 to 3.6% of Ca are exported by the tubers.
Evaluation of a cardiac sarcoma with CT multislice contrast-enhanced and 18FDG-PET/TC.
We present the case of an adult male who arrived at our emergency room with progressive dyspnea that had been ongoing for 2 months. During the radiological investigation, we found a large intracardiac mass, which invaded the pericardium, pulmonary trunk, pulmonary arteries, and left ventricle. Studies done with the 18FDG-PET/CT scan helped us to determine the malignant nature of the mass and to suspect the diagnosis of rhabdomyosarcoma.
Introduction
According to the Armed Forces Institute of Pathology, primary tumors of the heart and pericardium are rare, having an incidence of 0.001 % to 0.28% in autopsy series [1] , while metastatic cardiac tumors are about 30 times more common than primary neoplasm [2,3] .
About 75% of all tumors of the heart in adults are benign, with myxoma being the most common benign tumor (50%).
Malignant tumors account for the remaining 25%: mesenchymal tumors constitute 2/3 of the cases, the most common being angiomyosarcoma and rhabdomyosarcoma, followed by fibrosarcoma, leiomyosarcoma, liposarcoma, and other poorly differentiated sarcomas [4,5].

A contrast-enhanced multislice chest CT (Iobitridol; volume: 100 mL; flow rate: 3.5 mL/s; slice thickness: 2.5 mm) was performed to exclude pulmonary embolism. The exam documented a large intracardiac mass, which showed nonhomogeneous contrast enhancement with some necrotic areas and infiltrated the left ventricle, pericardium, pulmonary trunk, and both pulmonary arteries, mainly involving the right branch, which extended into the hilum and to the arterial branches tributary to the ipsilateral lower lobe, where neoplastic embolism phenomena were documented (Fig. 1). An FDG-PET/CT (298 MBq; 120 mL Iobitridol 300 mg/mL) was then performed, which excluded any primary noncardiac tumor or any other distant metastasis. The cardiac mass had a nonhomogeneous uptake of FDG with a maximum standardized uptake value (SUVmax) of 32.3. Increased uptake of FDG was also documented for mediastinal lymph nodes (Fig. 2).
No biopsy was performed because the patient died soon after his hospitalization due to cardiorespiratory failure.
No autopsy was performed because the family declined consent.
Discussion
Cardiac tumors are rare and can be divided into primary cardiac tumors (PCT) and metastatic tumors. The most frequent metastatic lesions come from breast, lung, kidney, skin and hematopoietic tumors [6]. Almost 75% of PCT are benign tumors, with atrial myxoma being the most frequent one [4,7]. Malignant PCT account for 25% of the cases, of which about 2/3 are sarcomas, in particular angiomyosarcoma and rhabdomyosarcoma, followed by fibrosarcoma, leiomyosarcoma, liposarcoma, and other poorly differentiated sarcomas [4,5], while the remaining 1/3 is composed of cardiac lymphomas and mesotheliomas.
The majority of primary cardiac sarcomas are located in the atrial cavities. According to the literature, about half of right atrium tumors are malignant, while the majority of those arising in the left atrium are benign, most of which being myxomas [8] .
Echocardiography is the first line exam. CT or magnetic resonance imaging can confirm the suspicion of malignancy [9,10] and can be employed to determine the possible presence of a primary noncardiac tumor, thus helping us to rule out the possibility of the mass being a metastasis. FDG-PET/CT may help in the diagnosis as it can differentiate a benign from a malignant lesion, with a 100% sensitivity and about 86% specificity, using a cutoff of SUVmax of 3.5 [11] . Biopsy may also be performed if there is suspicion of a lymphoma (this kind of tumor must be suspected in immunodepressed patients) and must be followed by chemotherapy and/or radiotherapy [12] . Immunohistochemistry is the final step to identify the histotype of the tumor [7] .
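The SUVmax cutoff cited above amounts to a simple decision rule, sketched below. Whether a value falling exactly on the cutoff counts as positive is an assumption not stated in the text.

```python
def suspicious_for_malignancy(suv_max, cutoff=3.5):
    """Flag a lesion as suspicious using the SUVmax cutoff of 3.5
    reported in the cited study [11] (100% sensitivity, ~86% specificity).
    Treating a value exactly at the cutoff as positive is an assumption."""
    return suv_max >= cutoff

# The mass in this case report had SUVmax = 32.3, well above the cutoff.
result = suspicious_for_malignancy(32.3)
print(result)  # True
```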
Angiomyosarcoma is a very rare cardiac neoplasia with only 200 cases described in the literature [13]. Typical localizations of cardiac angiomyosarcoma are the right atrium, the pericardium, the tricuspid valve, and the vena cava. The pathology is more frequently diagnosed at advanced stages due to the nonspecificity of its symptoms; therefore, metastatic findings at the time of diagnosis are typical, making the prognosis severe [14]. The advanced disease state upon diagnosis makes the treatment controversial: surgical excision is the first-line treatment, but chemotherapy and radiotherapy have well-established postoperative roles because of the high probability of metastasis, particularly pulmonary; targeted therapy may also be considered [7].
Rhabdomyosarcoma is the second most common malignant tumor of the heart, with an incidence of about 20%. It is rare in the pediatric age and more common in adulthood, but it has no well-defined age prevalence. It originates from the cardiac striated muscle, with higher incidence in men than in women [1]. Unlike angiomyosarcoma, it can involve any of the heart chambers and it can grow invasively in either a single or multiple locations. Invasion of the cardiac valves and spread to neighboring organs, pericardium, pleura, and mediastinum has been reported [4]. Similarly to angiomyosarcoma, symptoms usually manifest in late-stage disease and may not be specific for cardiac disease, but more indicative of malignancy (fever, anorexia, weight loss) or pericardial disease (dyspnea, chest pain, pleural effusion, and embolic phenomena). Valvular infiltration may restrict blood flow, mimicking stenosis of the mitral or tricuspid valve. Heart wall infiltration may result in hypertrophic or restrictive cardiomyopathy [15].
Cardiac tumors are often diagnosed because of thrombotic or neoplastic emboli that may cause stroke, pulmonary artery embolism, or embolism of the peripheral limb vasculature [16].
In this report, we present a primary cardiac malignancy evaluated by contrast-enhancement chest CT and by 18FDG-PET/CT scan. Due to the lack of specific symptoms, the patient presented to the emergency department with only dyspnea and, after excluding a myocardial infarction with electrocardiogram, was evaluated with a contrast-enhancement chest CT which showed a large intracardiac mass that infiltrated left ventricle, pericardium, pulmonary trunk, and both pulmonary arteries, with a major extension on the right branch, involving the hilum and the arterial branches tributaries of the ipsilateral lower lobe where neoplastic embolism phenomena were documented.
FDG-PET/CT helped us to determine, noninvasively, the malignancy of the tumor as well and excluded the presence of possible secondary lesions as no further neoplastic disease was detected.
The history of the patient, the localization of the tumor, and the absence of pleural lesions helped us to exclude the diagnosis of mesothelioma, while origin, extension, and local infiltration of mediastinal pleura and pericardium oriented towards the diagnosis of rhabdomyosarcoma instead of angiomyosarcoma.
Despite advances in imaging techniques and their increasing clinical availability, most patients are diagnosed at an advanced stage because symptoms are lacking, and the prognosis is poor, with patients rarely surviving more than 12 months from the time of diagnosis.
Clinical Desire for an Artificial Intelligence–Based Surgical Assistant System: Electronic Survey–Based Study
Background Techniques utilizing artificial intelligence (AI) are rapidly growing in medical research and development, especially in the operating room. However, the application of AI in the operating room has been limited to small tasks or software, such as clinical decision systems. It still largely depends on human resources and technology involving the surgeons’ hands. Therefore, we conceptualized AI-based solo surgery (AISS) defined as laparoscopic surgery conducted by only one surgeon with support from an AI-based surgical assistant system, and we performed an electronic survey on the clinical desire for such a system. Objective This study aimed to evaluate the experiences of surgeons who have performed laparoscopic surgery, the limitations of conventional laparoscopic surgical systems, and the desire for an AI-based surgical assistant system for AISS. Methods We performed an online survey for gynecologists, urologists, and general surgeons from June to August 2017. The questionnaire consisted of six items about experience, two about limitations, and five about the clinical desire for an AI-based surgical assistant system for AISS. Results A total of 508 surgeons who have performed laparoscopic surgery responded to the survey. Most of the surgeons needed two or more assistants during laparoscopic surgery, and the rate was higher among gynecologists (251/278, 90.3%) than among general surgeons (123/173, 71.1%) and urologists (35/57, 61.4%). The majority of responders answered that the skillfulness of surgical assistants was “very important” or “important.” The most uncomfortable aspect of laparoscopic surgery was unskilled movement of the camera (431/508, 84.8%) and instruments (303/508, 59.6%). About 40% (199/508, 39.1%) of responders answered that the AI-based surgical assistant system could substitute 41%-60% of the current workforce, and 83.3% (423/508) showed willingness to buy the system. Furthermore, the most reasonable price was US $30,000-50,000.
Conclusions Surgeons who perform laparoscopic surgery may feel discomfort with the conventional laparoscopic surgical system in terms of assistant skillfulness, and they may think that the skillfulness of surgical assistants is essential. They desire to alleviate present inconveniences with the conventional laparoscopic surgical system and to perform a safe and comfortable operation by using an AI-based surgical assistant system for AISS.
Introduction
Artificial intelligence (AI) has been rapidly developing in recent years, and relevant research is being actively conducted in the health care field through deep learning and big data technology [1]. AI applied in the medical area can be divided into the following two categories: virtual and physical AI. Virtual AI includes the programs that can help clinical diagnosis, whereas physical AI involves smart operating rooms, nanorobots, and patient-assistance systems [2]. In particular, physical AI in the operating room can assist the operator or replace the assistant during surgery [2,3]. For instance, the da Vinci surgical system, which is the first computer-based robotic surgical system approved by the US Food and Drug Administration in 2000, has been widely used for minimally invasive surgery, including laparoscopic surgery. The demand for the robotic surgical system is rapidly increasing in the surgical areas of gynecology, general surgery, and urology [4]. This increase in demand is due to reduced surgeon fatigue and improved surgical access through ergonomic instruments and three-dimensional imaging [4,5].
However, the current robotic surgical system still depends on coordination of the human eye and hand, which is insufficient in terms of autonomy or interaction [6,7]. In particular, the injection of carbon dioxide and insertion of trocars into the peritoneal cavity are still performed by surgeons without the aid of a robotic surgical system, and the laparoscopic camera and instruments are adjusted manually to the target by surgeons. Thus, an automated robotic surgical system that is better than the current master-slave approach may be expected to reduce human error and thereby improve the quality of surgery. Up to now, relevant studies have mainly focused on the development of robots capable of performing short surgical tasks, such as knot tying and needle insertion [8,9], and the application of voice interaction technology during surgery may be one of the crucial elements that should be developed in an AI-based surgical assistant system [10][11][12].
Nevertheless, high medical cost may be one of the barriers to the adoption of an AI-based surgical assistant system [13], and it is not yet known how much such a system would improve the quality of surgery or reduce human resources. Therefore, we conceptualized AI-based solo surgery (AISS), defined as laparoscopic surgery conducted by only one surgeon with support from an AI-based surgical assistant system, and considered the clinical desire for AISS via an electronic survey (e-survey).
An e-survey has been a common method of research in human and social sciences since the 1990s. In the case of research using a web-based questionnaire, it is possible to attach pictures or materials in order to avoid response omission as much as possible and avoid inconsistent or out-of-frame results. Besides, data can be effectively organized and archived without paper resources, and distribution via email can be quickly done through a URL [14]. Moreover, by distributing the web questionnaire via email, it is possible to limit the target respondents to people belonging to a specific community so that the questionnaire survey is conducted for experts in the relevant field.
Therefore, we performed an e-survey to investigate the clinical desire for an AI-based surgical assistant system for AISS as compared with the current laparoscopic surgical system and to determine the reasonable cost of such an AI-based surgical assistant system for AISS.
Survey
We surveyed gynecologists from the Korean Society of Obstetrics and Gynecology, urologists from the Korean Urologic Association, and general surgeons from the Korean Surgical Society between June and August 2017 through nownsurvey (ELIMNET Co, Ltd) [15], a commercially available e-survey platform. In this survey, the AI-based surgical assistant system for AISS was considered to have the following functions: camera automatic recognition and operation function through voice commands; action as an assistant by manipulating surgical instruments through automatic screen recognition and voice commands; and smart storage for recognizing, indexing, and storing surgical procedures while recording specific events. There were a total of 13 questions that included six items about the responder's experience, two about limitations of the conventional laparoscopic surgical system, and five about the clinical desire for an AI-based surgical assistant system for AISS (Table 1). We estimated that 5000 gynecologists, 7000 general surgeons, and 2500 urologists would receive the survey. This study was approved by the Institutional Review Board of Seoul National University Hospital (approval no: 1910-131-1072).

Table 1 (excerpt). Questionnaire items:

Experience
6. How important is the skillfulness of your assistant for successful laparoscopic surgery?

Limitation
7. What are your discomforts during laparoscopic surgery owing to inexperienced camera assistants? (multiple choice)
8. What are your discomforts during laparoscopic surgery owing to inexperienced laparoscopic instrument assistants? (multiple choice)

Desire
9. What functions do you expect to be included in the AI-based surgical assistant system for AISS? (multiple choice)
10. What percentage of your assistant's function will the AI-based surgical assistant system for AISS replace?
11. Would you want to buy the AI-based surgical assistant system for AISS if it thrives?
12. Why would you want to buy the AI-based surgical assistant system for AISS? (multiple choice)
13. How much would you like to pay for the AI-based surgical assistant system for AISS?

(AI: artificial intelligence; AISS: artificial intelligence-based solo surgery.)
Data Analysis
We analyzed each question by using descriptive statistics. All respondents were included in the analysis, and the response rate was 3.5%. Each item in the questionnaire was stratified according to the surgeons' fields: gynecologists, urologists, and general surgeons. Categorical variables were analyzed with the chi-square test or Fisher exact test using the statistical software SPSS 20.0 (IBM Corp, Armonk, New York, USA). A P value <.05 was considered statistically significant.

Table 2 shows the demographic data of the responders. A total of 508 people responded to the questionnaire: 278 gynecologists, 173 general surgeons, and 57 urologists. Among the three surgeon fields, most of the urologists (49/57, 86.0%) worked at a university hospital, whereas relatively many gynecologists (67/278, 24.1%) worked as general practitioners. Moreover, most of the urologists (43/57, 75.4%) performed laparoscopic surgery in fewer than 10 cases per month, whereas relatively many general surgeons performed laparoscopic surgery in 31 or more cases per month (40/173, 23.1%). In terms of the number of assistants during laparoscopic surgery, 38.6% (22/57) of urologists required one or fewer assistants, whereas 90.3% (251/278) of gynecologists required two or more assistants.
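The chi-square comparisons of categorical variables across the three surgeon fields can be sketched in pure Python. The Pearson statistic below is computed from counts given in the paper (the general surgery split comes from the abstract's 123/173 figure); the degrees-of-freedom and p-value step performed in SPSS is omitted here.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table given
    as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Surgeons needing two or more assistants vs. one or fewer, by field
# (gynecology, general surgery, urology), using counts from the paper.
table = [[251, 123, 35],
         [27, 50, 22]]
stat = chi_square_stat(table)
```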
Experience
In terms of the preferred assistant during laparoscopic surgery, most of the urologists (33/57, 57.9%) preferred fellows, whereas many general surgeons (76/173, 43.9%) preferred physician assistants ( Figure 1). With regard to the importance of the skillfulness of assistants, who manipulate cameras or instruments, for successful laparoscopic surgery, most of the responders indicated "very important" or "important," regardless of the surgeon field. Although the trend was similar among the three surgeon fields with regard to the camera assistant, general surgeons (33/173, 19.1%) relatively underestimated the importance of the skillfulness of instrument assistants as compared with gynecologists (93/278, 33.5%) or urologists (18/57, 31.6%) ( Figure 2). Table 3 shows the responses to questions on the surgeons' discomforts related to inexperienced camera and instrument assistants for the conventional laparoscopic surgical system. Table 4 depicts the functions that should be included in an AI-based surgical assistant system for AISS to overcome the limitations of the current laparoscopic surgical system. More than half of the responders preferred intuitive and easy maneuverability (308/508, 60.6%), a demister and self-cleaning system for the laparoscopic camera lens (326/508, 64.2%), and safety for minimizing tissue damage (279/508, 54.9%). In particular, more urologists (29/57, 50.9%) desired fast running by minimizing time delay as compared with gynecologists (86/278, 30.9%) and general surgeons (67/173, 38.7%). However, interest in the autosave or voice command system for special events during the operation was the lowest among the three surgeon fields. In terms of the possibility that the AI-based surgical assistant system for AISS can replace the functions of assistants, about 40% (199/508, 39.1%) of responders expected it to substitute 41%-60% of the existing workforce ( Figure 3).
Limitation
When asked about the purchase intention and reasonable price to buy the AI-based surgical assistant system for AISS, 83.3% (423/508) of all responders wanted to buy the system. The most common reason for wanting to buy the system was the comfort of laparoscopic surgery (257/508, 50.6%). In particular, general surgeons had a relatively strong desire to decrease the burden of repetitive training for assistants, whereas they had less interest in the reduction of the operation time by purchasing the AI-based surgical assistant system for AISS as compared with gynecologists. Regarding the reasonable price for the system, 29.7% (151/508) of the responders had a willingness to pay US $30,000-50,000 (Table 5).
Principal Findings
This study involved a survey about the clinical desire for an AI-based surgical assistant system for AISS among surgeons who currently perform laparoscopic surgery. In this survey, we identified the importance of assistants and the discomforts with the conventional laparoscopic surgical system and determined surgeons' expectations and demands for new AI-based robotic surgery aids.
Experience
In terms of experience, gynecologists were more likely to have two assistants than general surgeons and urologists. The reason is that gynecologists frequently use a uterine manipulator during laparoscopic gynecologic surgery [16]. Therefore, gynecologists commonly require two or more assistants for laparoscopic surgery: one to hold the laparoscopic camera and another to hold the uterine manipulator.
On the other hand, urologists' preference for fellows as surgical assistants could be related to their more common practice in university hospitals. Moreover, urologists can be less dependent on residents during surgery, which may be similar for general surgeons, who prefer physician assistants as surgical assistants. Furthermore, most of the responders valued the skillfulness of surgical assistants who manipulate the laparoscopic camera and instruments, because the extent of assistant experience may be closely related to the operation time and complication rate [17]. Recently, in the Republic of Korea, owing to the implementation of the special act mandating an 80-hour workweek for residents, their working time has been reduced, and thereby the number of surgical training cases has decreased [18]. In contrast, physician assistants are still useful for coordination in the operating room because of their high level of proficiency based on repetitive work [19]. Therefore, in the Republic of Korea, most surgeons seem to prefer fellows or physician assistants, who are proficient in laparoscopic surgery, rather than residents or interns, because of their skillfulness as surgical assistants.
Limitation
In terms of limitation, most of the surgeons felt uncomfortable with camera assistants when they failed to move the camera in the intended direction, and with instrument assistants when they failed to move the instruments in the intended direction. This result is consistent with the finding that most of the surgeons considered the skillfulness of surgical assistants to be "very important" or "important," regardless of the field.
Desire
In terms of desire, the essential functions desired to be present in an AI-based surgical assistant system for AISS were intuitive and easy maneuverability, a demister and self-cleaning system for the laparoscopic camera lens, and safety for minimizing tissue damage. Interestingly, only 10%-20% of surgeons complained about discomfort regarding the camera lens or foreign objects, whereas a high percentage of surgeons desired a self-cleaning system for AISS. This apparent mismatch is likely because, in the conventional laparoscopic surgical system, camera cleaning is handled by the surgical assistants: the operator perceives it as an essential function even without experiencing discomfort under the current system.
Notably, more than 80% of the responders intended to buy the AI-based surgical assistant system for AISS, and the reasons for buying it were comfort of laparoscopic surgery and improved safety and maturity of laparoscopic surgery. Considering the results from the questions on the conventional laparoscopic surgical system, surgeons showed a tendency to overcome current constraints regarding laparoscopic surgery with the AI-based surgical assistant system, especially with regard to the skillfulness of assistants.
The majority of responders anticipated that the introduction of the AI-based surgical assistant system would replace the existing workforce by 41%-60%. Therefore, an AI-based surgical assistant system for AISS could be a great solution in university hospitals where resident working hours are regulated (eg, 80-hour resident special act in Korea and The European Working Time Directive in Europe) [18,20]. Of course, there may be some opinions concerning undertraining of residents, but the introduction of educational tools, such as simulation training systems, is a possible alternative [17,21].
Issues Related to Practical Application
Before adopting and introducing an AI-based surgical assistant system in the surgical field, ethical and legal responsibilities should be discussed through consensus of medical, legal, and administrative experts and others. Additionally, although not included in this survey, the recent development of AI is likely to include explainable AI, a concept contrasted with previous black-box AI, in the development of new technologies.
At the time of the introduction of robotic surgery, which is being actively used presently, many experts had discussed ethical issues [22][23][24]. Current robotic surgery is a master-slave system, with the surgeon having most of the responsibility, making it easy to discuss ethical issues. However, in the case of an autonomous AI-based surgical assistant system, there may be controversy regarding the responsibility for harm and injuries caused to the patient during the robotic surgery, and social discussions about this need to be carried out for the adoption of an AI-based surgical assistant system [24,25].
Explainability should be considered when newly developing AI-based surgical assistant systems. Current AI-based medical programs involving deep learning and machine learning techniques lack explainability, hindering the dependence of medical professionals on conclusions from these programs. Therefore, considering the characteristics of surgical procedures that are repeated continuously with small and large decisions, it is expected that explainability will be essential for the interaction between the machine and the operator and should be incorporated in the development of AI-based robotic assistance systems that contribute to these procedures [26].
Strengths and Weaknesses
This report is based on a survey among experts who have been actively performing laparoscopic surgery in various fields. To the best of our knowledge, this is the first report showing the clinical need for an AI-based surgical assistant system for AISS according to an e-survey. Moreover, this study is meaningful because we could identify the unmet need of clinicians for an AI-based system for AISS, which could be developed soon. However, this study has some limitations. First, it was challenging to check the exact response rate through the mailing system used in this study, which could act as a bias, and thus, the results of this study should be interpreted carefully. However, we could assume that the questionnaire was answered by our targeted responders because most of the responders mentioned that they performed more than one surgery per month. Second, the specific national health insurance system controlled by the government in the Republic of Korea could affect the expected value of an AI-based surgical assistant system for AISS, and the finding should be complemented by international surveys later. Third, the validity and reliability of the items in the questionnaire could not be confirmed because there has been no previous comparable study and this study targeted a specific group of experts in our country.
Conclusion
In the conventional laparoscopic surgical system, surgeons may value the proficiency of assistants, and most of them may feel uncomfortable with unintended or non-intuitive movement of laparoscopic cameras and devices. For the development of an AI-based surgical assistant system in the future, safe operation may be expected through lens cleaning, intuitive manipulation, and tissue damage minimization. Furthermore, an AI-based surgical assistant system is expected to replace approximately 41%-60% of the workforce, which may increase surgeons' willingness to purchase such a system for reducing human resources and performing a comfortable, safe, and skilled operation. In conclusion, an AI-based surgical assistant system for AISS will become essential to enhance surgeons' convenience, but it will also be necessary to increase the safety and quality of surgery for patients.
|
2020-03-19T10:52:46.117Z
|
2019-12-31T00:00:00.000
|
{
"year": 2020,
"sha1": "2a26d2d6ded318daada0d94993a8518dc5f57ca0",
"oa_license": "CCBY",
"oa_url": "https://medinform.jmir.org/2020/5/e17647/PDF",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "df51f349542f5503b09208fe21cb901e7fb24292",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119653912
|
pes2o/s2orc
|
v3-fos-license
|
Estimating topological entropy from the motion of stirring rods
Stirring a two-dimensional viscous fluid with rods is often an effective way to mix. The topological features of periodic rod motions give a lower bound on the topological entropy of the induced flow map, since material lines must `catch' on the rods. But how good is this lower bound? We present examples from numerical simulations and speculate on what affects the 'gap' between the lower bound and the measured topological entropy. The key is the sign of the rod motion's action on first homology of the orientation double cover of the punctured disk.
Introduction
The paper of Boyland, Aref & Stremler [1] pioneered the study of two-dimensional rod-stirring devices using tools from topological surface dynamics. The central idea is that some rod motions impose a minimal complexity to the fluid trajectories, resulting in good mixing in at least part of the domain. Since then, many studies have followed: these include several papers dealing directly with rod motion [2,3,4,5,6,7,8,9,10]; various work on vortices, 'ghost rods,' and almost-invariant sets [11,12,13,14,15,16,17]; papers on the topology of chaotic trajectories and random braids [18,19,20,21,22,23,24]; a paper on an extension to three dimensions using stationary rod inserts [25]; and a review [26] and a magazine article [27].
Throughout all this, there remains a vexing question, first raised by Phil Boyland: if one studies the rod motion depicted in Fig. 1(a), which is denoted σ_1 σ_2^{-1} in terms of braid group generators, the growth rate of material lines in the fluid is almost the same as that predicted by the rod motion, which is a lower bound. How do we explain such a small discrepancy (or gap) between the lower bound and the measured value? Here we do not propose a full solution to this problem, but instead offer some observations, based on numerical simulations, of when the lower bound is and isn't sharp, what this correlates with, and speculate on possible causes. At the heart of the matter is 'secondary folding,' or the observation that in some cases material lines fold a lot more than is strictly required by the topology of the rod motion. This issue was explored in detail by the authors for toral linked twist maps [28,29]. Here we focus on physical rod-stirring devices, also called rod mixers.
We can also interpret a small gap in terms of 'taffy pullers' [10]. Ignore the fluid and consider a plastic 'strap' wrapped around the rods. As the rods move, imagine the plastic strap can stretch, but can never shrink. The motions with a small gap described below will lead to a plastic strap that never develops any slack throughout the entire rod motion. The question is to determine what properties of a braid are needed to ensure this.
Braid-based rod mixers
We consider rod-stirring devices or mixers that are constructed such that the rod motion is described by braids [30,31,32]. In these two-dimensional circular containers, the rods start along a fixed horizontal line and move in accordance with braid generators, σ_i^{±1}, depicted in Figure 1(b). For example, in a 3-rod mixer given by the braid σ_1 σ_2^{-1} [1], first the two leftmost rods move halfway around a circle in a clockwise direction. Immediately after that, the two rightmost rods move halfway around a circle in a counter-clockwise direction (Figure 1(a)). The circular paths are centred directly between the two rods, and have diameter equal to the rod spacing. The speed of the rods is immaterial, since we are only considering Stokes (slow viscous) flow.
More generally, we write the stirring motion for n rods as a braid expressed as a sequence of generators, σ_i, i = 1, ..., n − 1. Each generator represents the clockwise interchange of the ith and (i + 1)th strands or rods. The inverse, σ_i^{-1}, is a counter-clockwise interchange (Figure 1(b)). Note that the strands are always numbered from left to right, so a given subscript does not always refer to the same rod. By having the rods move in the same way as a specific braid, we can directly and systematically compare the measured topological entropy in the fluid system to the lower bound predicted by the braid (via the isotopy class [33,34,35]).
Remark. There are different conventions in the literature: In some papers σ_i is defined as the counter-clockwise interchange, which is the opposite of our definition. There are also differing conventions on composition order. We will always write generators from left to right; that is, in the braid σ_1 σ_2, the σ_1 interchange occurs before the σ_2 interchange.
Remark. The lower bound on the entropy, based on the braid, is independent of the specific details of the rod motion. However, the measured flow entropy depends in general on the rod radius, rotation, and how near the rods come to each other and to the outer wall of the container during their motion. In our simulations, the rod radii are relatively small and we keep them from coming too close to the wall to avoid extra growth of material lines due to image effects. Our simulations were performed with the computer program Flop, by Matthew D. Finn, Emmanuelle Gouillart, and J.-L.T. The program is based on the complex-variable method described in [2]. We measure the flow topological entropy h from the growth rate of material lines in the flow [36,37,38].
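The entropy measurement just described reduces to fitting the asymptotic exponential growth rate of material-line length against period number. A minimal sketch, with synthetic line-length data standing in for simulation output (the Flop solver itself is not reproduced here); the value 0.9624 is 2 log of the golden ratio, the entropy of the braid σ_1 σ_2^{-1}:

```python
import numpy as np

def entropy_from_lengths(lengths, skip=2):
    """Estimate topological entropy as the slope of log(length) vs period,
    discarding the first few periods before exponential growth sets in."""
    n = np.arange(len(lengths))
    slope, _ = np.polyfit(n[skip:], np.log(lengths[skip:]), 1)
    return slope

# Synthetic data: material-line length growing like exp(h*n)
h_true = 0.9624
periods = np.arange(10)
lengths = 3.0 * np.exp(h_true * periods)
h_est = entropy_from_lengths(lengths)
```

In practice the early periods are transient, which is why the fit skips them; for real flow data the fit window must be chosen after the growth has become cleanly exponential.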
Three-rod mixers
We start by looking at devices with three rods. In particular, we will focus on motions based on braids of the form σ_1^k σ_2^{-1}. When k > 0, we call the braid counter-rotating; when k < 0 we call it co-rotating. Braids of this form are pseudo-Anosov if and only if |2 + k| > 2. All counter-rotating braids are pseudo-Anosov, but co-rotating braids are only pseudo-Anosov if k < −4. Braids that are not pseudo-Anosov are finite order or reducible, according to the Thurston-Nielsen classification theorem [33,34,35]. We will not encounter any reducible braids in this paper. Figure 2 shows an iterated material line for several different braid mixers. The three in the left column are counter-rotating, and the three in the right column are co-rotating. Of the six braid mixers shown in Figure 2, five are pseudo-Anosov and one is not. With only a quick glance, it is not hard to guess that σ_1 σ_2 is the odd one out: in comparison to the others, the material line in that device has hardly stretched at all, even after 9 periods, and pseudo-Anosov braids have an exponential line stretching rate [33,34,35].
However, despite the fact that the braid σ_1 σ_2 is not pseudo-Anosov, we still measure a positive topological entropy for the flow in the mixing device. In fact, the braid mixers tend to fall into two categories: those where the flow entropy h is close to the braid entropy h_rods (of the order of 10% difference), and those where h is considerably larger than h_rods (> 25% difference). Table 1 shows, for several braids of the form σ_1^k σ_2^{-1}, the measured topological entropy in the braid mixer (h) and the lower bound obtained from the rod braid (h_rods). The last column gives the 'gap' between the two values, expressed as a percentage of h. The first set of braids is counter-rotating (k > 0); the second set co-rotating (k < 0). Note that the counter-rotating mixers show a small gap, and the co-rotating ones have a much larger gap.
The penultimate column of Table 1 gives the sign of the dominant eigenvalue (the one with the largest magnitude) of the Burau matrix representation of the braid. The Burau representation [39,31,40,41,42] arises from an action of the braid on first homology of a double cover of the punctured disk (actually a Z-cover, but we only use the double cover here). Figure 3 depicts the construction of the double cover for a disk with three rods. Notice in Table 1 that for the pseudo-Anosov braids (h_rods > 0), all the counter-rotating cases have a positive Burau eigenvalue, while all the co-rotating cases have a negative eigenvalue. For the non-pseudo-Anosov braids (i.e. those of finite order), the eigenvalue of the Burau matrix is always on the unit circle (complex), so we do not record a sign.
For 3-braids, the logarithm of the spectral radius of the Burau matrix agrees with the topological entropy of the braid. For pseudo-Anosov braids this largest eigenvalue is real but can be either positive or negative. A negative eigenvalue corresponds to a 'flip' of the homological generators at every application of the braid. For toral linked twist maps, this is associated with 'kinks' in the material lines, as shown in [28]. These are what we call 'secondary folds,' as depicted for a fluid system in Figure 5 and discussed in Section 3.1. The conjecture is that these kinks lead to additional growth of material lines, thus causing extra entropy above the lower bound. However, this connection has not yet been rigorously demonstrated.
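For 3-braids this is easy to check numerically. The sketch below multiplies reduced Burau matrices evaluated at t = −1 (one common convention; generator conventions and the overall sign of the lift vary between references) and reads off both the entropy and the sign of the dominant eigenvalue:

```python
import numpy as np

# Reduced Burau matrices for the 3-braid group at t = -1 (one standard convention).
S1 = np.array([[1.0, 1.0], [0.0, 1.0]])   # sigma_1
S2 = np.array([[1.0, 0.0], [-1.0, 1.0]])  # sigma_2

def burau(word):
    """Left-to-right matrix product for a braid word; [1, -2] means sigma_1 sigma_2^{-1}."""
    gens = {1: S1, -1: np.linalg.inv(S1), 2: S2, -2: np.linalg.inv(S2)}
    M = np.eye(2)
    for g in word:
        M = M @ gens[g]
    return M

def entropy_and_sign(word):
    """(log spectral radius, sign of the dominant eigenvalue) of the Burau matrix."""
    eigs = np.linalg.eigvals(burau(word))
    lam = eigs[np.argmax(np.abs(eigs))]
    return float(np.log(np.abs(lam))), float(np.sign(lam.real))

h, s = entropy_and_sign([1, -2])           # sigma_1 sigma_2^{-1}: counter-rotating, pA
h2, s2 = entropy_and_sign([-1] * 5 + [-2]) # sigma_1^{-5} sigma_2^{-1}: co-rotating, pA
h0, _ = entropy_and_sign([1, 2])           # sigma_1 sigma_2: finite order, entropy 0
```

With this convention, σ_1^k σ_2^{-1} has Burau trace 2 + k, so the pseudo-Anosov condition |2 + k| > 2 is visible directly; the counter-rotating example comes out with a positive dominant eigenvalue (entropy 2 log of the golden ratio) and the co-rotating one with a negative eigenvalue of the same magnitude.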
Four-rod mixers
We now look at four-rod mixing devices. With four rods, there is no sense in classifying braids as counter- or co-rotating. Instead, we will focus on the sign of the dominant eigenvalue of the Burau matrix. Since we have more than three rods, the dominant eigenvalue of the Burau matrix is no longer guaranteed to give the topological entropy of the braid: it merely provides a lower bound [40,41]. Band & Boyland [42] showed the Burau eigenvalue gives the exact topological entropy for a pseudo-Anosov braid if and only if the corresponding foliation has odd-order singularities at all the punctures, and any interior singularities are of even order. One consequence is that the Burau bound is always sharp for 3-braids, a fact we used in Section 2.1 to compute the entropy. Figure 4 shows material line patterns for some four-rod mixers. Table 2 lists the braid, the measured topological entropy (h), the topological entropy of the braid (h_rods) given by the Bestvina-Handel algorithm [43,44], and the sign of the dominant Burau eigenvalue; for the braids with a positive eigenvalue, the gap is less than 2%. We will discuss the sources of discrepancy in the next section.
Explaining the gap
Our ultimate goal is to predict when the lower bound from the rod motion is close to the measured topological entropy (small gap), and when it is not (large gap). Furthermore, we wish to understand what causes a large gap, that is: what is it about the flow that creates more topological entropy? The easier question to answer, at least partially, is why the lower bound fails. We will address this first. Then we will attempt to explain why it happens.
Why there is a gap -secondary folding
Recall that the lower bound on entropy arises from the braid giving the rod motion: this braid labels the isotopy class of the period-1 map. Since the pseudo-Anosov representative of the isotopy class is the 'simplest' map in the class (the one with the lowest entropy), the isotopy from the flow to the pseudo-Anosov representative has the effect of pulling tight the material lines. In order for the flow to have a higher topological entropy, there must be some part of the material line pattern that is not already pulled-tight. In other words, there must be some extra folding that is not directly due to the rods. We call this secondary folding [28,29]. Figure 5(a) shows an example of folding due to a rod, and Figure 5(b) shows secondary folding, which is not associated with a rod and could be removed by pulling tight.
Having a few extra folds is not necessarily enough to cause higher topological entropy. Recall that for 2D systems topological entropy is related to the exponential stretching rate of material lines [36,37,38]. The extra folds must cause a higher line growth rate in order to affect the topological entropy.
When there is a gap -negative eigenvalues
We would now like to predict when we can expect a gap between the measured topological entropy and the lower bound given by the braid. From the data presented, it is tempting to say that mixers with a braid whose Burau matrix has a negative dominant eigenvalue have a large gap, while those with positive eigenvalues have a small gap. However, this says nothing about braids for which the Burau bound is zero. Furthermore, the data does not include any braids whose Burau bound is non-zero, but also not equal to the topological entropy of the braid (because of odd interior singularities in the foliation). We discuss why we didn't include such braids in Section 4. However, it is clear at this point that the sharpness is closely correlated with the sign of the action on first homology of the orientation double cover, as given by the Burau representation in most cases examined here.
Discussion
In summary, we have exhibited a number of examples of braid-based rod mixers. These fall in two categories: those for which the rod motion is a good predictor of the flow entropy, and those for which it isn't. For both three-and four-rod systems, the sign of the Burau eigenvalue correlates well with the two cases: a positive eigenvalue usually means that the bound is sharp.
When the Burau entropy is not sharp, the relevant quantity is the sign of the action of the braid on homology lifted to the orientation double cover. When the sign is negative, then the entire homological chain must 'flip' with each action of the braid. The conjecture is that this flip causes secondary folding by promoting 'slack' in the material lines. This is evident when examining toral linked twist maps [28,29]. Unfortunately, this cannot be the whole story, since repeating the rod motion twice will always make the homological eigenvalue positive, but will clearly not make the lower bound any better.
Why is the orientation double cover important? The foliations obtained on disks are always non-orientable, due to the odd-pronged singularities at the rods. The orientation double cover turns the disk foliation into an orientable foliation on a closed surface of some genus (a torus in Figure 3). It is then easy to compute the topological entropy, since the linear action on homology gives the entropy for the case of orientable foliations. However, in order to construct the orientation double cover we need to know a priori the odd-pronged singularities associated with a braid's isotopy class.
In general we should be able to ascribe a homological sign even for braids that are not Burau-sharp. This is easy to do when a pseudo-Anosov is given in terms of Dehn twists on the double cover [45], but is not so straightforward when starting from braids on the disk; this is a future challenge. For the braid σ_1 σ_2 σ_3^{-1} in Table 2, we were able to determine that the sign is positive by puncturing at the 3-pronged singularity and computing the Burau action of the resulting 5-braid ((σ_1 σ_2 σ_1) σ_3 σ_4^{-1}). Note that both homological signs can always be realised, since the braids giving rise to different signs are related by the deck transformation (involution) of the double cover [46].
|
2019-04-11T20:46:03.391Z
|
2012-08-03T00:00:00.000
|
{
"year": 2012,
"sha1": "cfe5e1de6381b3cac66fb711be3203e71d6061af",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.piutam.2013.03.014",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "529fe372d1329ec94b0ecac9233e22cce2eded32",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
213670236
|
pes2o/s2orc
|
v3-fos-license
|
Recovery of valuable metal from Photovoltaic solar cells through extraction
The installation of PV modules reached 97.9 GW in 2018, and the accumulated volume of PV devices was 500 GW. According to research, the accumulation of waste modules will reach 8600 tons in 2030 as a result of the life expectancy of PV modules. Moreover, crystalline-silicon solar panels account for 90% of the waste. This study recycles photovoltaic solar cells by leaching and extraction. According to the analysis, silicon cells contain 90% Si, 0.7% Ag, and 9.3% Al. Silicon cells were leached with 4M nitric acid at 80°C for 4 hours and then with 3M sodium hydroxide at 70°C for 3 hours; the leaching efficiencies were 99.7% for Ag and 99.9% for Al, respectively. The leaching process separated silicon from the other metals. Na-Cyanex 272 in kerosene was employed to separate Al from Ag. The extraction efficiency is 96%. After extraction, 1M hydrochloric acid was employed to strip Al from the organic solvent. The stripping efficiency is 99.9%. After the extraction process, Ag remained in the aqueous phase and was precipitated as AgCl. In conclusion, this study shows the optimal parameters of the leaching, extraction, precipitation, and vacuum concentration steps to obtain the product. The recoveries of Si and Ag were 99.5% and 98%, respectively.
Introduction
PV modules are one of the greenest and most promoted energy generators. By statistics, 106 GW of solar PV capacity was added in 2018, and the accumulated PV capacity has now increased to 508 GW [1]. More solar PV was installed than the net capacity additions of fossil fuels and nuclear power combined. PV modules have an average life of 20-25 years [2], which means that by 2030 we will face a great amount of PV waste. According to statistics from IRENA, the waste from solar panels will reach 600,000 tons in 2050, which means that we should make an effort to recycle PV modules now.
According to the experiment, poly-crystalline silicon solar panels contain silicon, aluminum, silver, lead, and tin, and PV cells contain mostly silicon (90%), along with aluminum and silver (1%). This study dismantled PV modules into PV cells and recovered the PV cells by leaching and extraction processes. Silicon is hard to leach with inorganic acids (HCl, H2SO4, and HNO3), so the first step in purifying silicon was to leach the metals from the PV cells. Aluminum is known to be easy to separate by precipitation and extraction. This study used an extraction process to separate aluminum from silver in order to improve the purity of the silver. According to the literature review [4]-[12], N503, tributyl phosphate (TBP), D2EHPA (di-(2-ethylhexyl) phosphoric acid), Cyanex901, and Cyanex272 have been used to separate aluminum from other metals. This study used Cyanex272 as the extractant because of its optimal pH range of 2-3. Both Cyanex272 and D2EHPA effectively extract Aluminum(III) in the pH range 2-3, but Cyanex272 has higher efficiency and selectivity for Aluminum(III), which makes it the better agent for selective extraction [13]. After the extraction and stripping processes, precipitation was employed to recover silver as AgCl [14][15]. This study provides the separation and recovery of silicon, aluminum, and silver from PV cells via hydrometallurgy to obtain valuable metals. The resources can be recycled, which also reduces the waste from PV modules.
Materials
Poly-crystalline silicon PV modules used in this study are from a waste PV module recycling factory. Table 1 shows the mass fractions of the PV modules. PV modules are composed of an aluminum frame, tempered glass, ethylene-vinyl acetate (EVA) resin and back sheet, PV cells, ribbon, and a junction box. This study dismantled PV modules into PV cells and recovered them through several processes. Table 2 shows the chemical composition of commercial PV cells.
Pretreatment
PV cells were dismantled from the PV modules. After removing the aluminum frame and tempered glass, EVA resin still remained on the PV cells. A pretreatment process was employed to eliminate the EVA resin from the PV cells.
Leaching
Nitric acid was employed to leach Al and Ag from the PV cells. After the acid leaching process, Al-Si alloy still remained on the PV cells. In order to purify the silicon, sodium hydroxide leaching was employed to break the bonding between Si and Al. To optimize the experimental conditions for effective leaching, this study varied the process parameters, viz. time (0.5-4 hr), liquid-solid ratio, and acid concentration (0.5M-6M). The leaching efficiency of a metal was calculated by the equation below:

X% = (C × Vc) / (M × Wx) × 100%

where X% = leaching efficiency, C = metal concentration in the leachate (g/L), M = the weight of the sample (g), Vc = the volume of the liquid (L), and Wx = the target metal's weight fraction (wt%).
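As a sanity check, the leaching-efficiency formula can be wrapped in a small helper. The numbers below are hypothetical, chosen only to illustrate the mass balance, and are not measured values from this study.

```python
def leaching_efficiency(conc_g_per_l, volume_l, sample_g, metal_fraction):
    """X% = (C * Vc) / (M * Wx) * 100: grams of metal found in the leachate
    divided by grams of that metal originally present in the solid sample."""
    return 100.0 * conc_g_per_l * volume_l / (sample_g * metal_fraction)

# Hypothetical: 1 g of cells (0.7 wt% Ag) leached into 0.1 L containing 0.0698 g/L Ag
x_ag = leaching_efficiency(0.0698, 0.1, 1.0, 0.007)
```

Values above 100% would indicate an inconsistent mass balance (e.g. an error in the assay of the starting material), which makes the helper useful as a quick consistency check on ICP results.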
Extraction
The metals in the leaching solution were separated through selective solvent extraction and stripping. Aluminum was extracted by Na-Cyanex272 while silver remained in the solution. HCl was then used to strip the aluminum from the organic phase, so the two metals could be separated. The solution containing silver was precipitated with sodium chloride to form AgCl. This study also varied the parameters of the extraction process. The extraction efficiency of Na-Cyanex272 and the stripping efficiency were calculated by the equations below:
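The paper's own efficiency equations did not survive extraction here, so as a stand-in the sketch below uses the standard hydrometallurgy definitions (an assumption, not the authors' exact formulas): extraction efficiency from the aqueous-phase mass balance, and stripping efficiency as the fraction of loaded metal recovered from the organic phase. The concentrations are hypothetical.

```python
def extraction_efficiency(c_aq_initial, c_aq_final):
    """E% = (C0 - Caq) / C0 * 100, by aqueous-phase mass balance
    (assumes equal aqueous and organic phase volumes)."""
    return 100.0 * (c_aq_initial - c_aq_final) / c_aq_initial

def stripping_efficiency(c_org_loaded, c_strip):
    """S% = Cstrip / Corg * 100 (equal phase volumes assumed)."""
    return 100.0 * c_strip / c_org_loaded

# Hypothetical Al concentrations (g/L)
e = extraction_efficiency(1.00, 0.04)   # 96% of the Al moved into the organic phase
s = stripping_efficiency(0.96, 0.959)   # nearly all of the loaded Al stripped back
```

If the phase volumes differ, each concentration would be multiplied by its phase volume before taking the ratio.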
Analytical method
The samples after the leaching, extraction, and precipitation processes were filtered through a 0.45 μm membrane filter and diluted with 1% HNO3 solution for ICP-OES (Inductively Coupled Plasma-Optical Emission Spectrometry, PerkinElmer Optima 2100DV) analysis. The ICP-OES instrument was calibrated with an ICP multi-element standard and tin standard solutions. Each aqueous solution was analyzed three times and the results averaged for reporting.
Thermal treatment to eliminate EVA resin from PV cells
After dismantling the PV modules, EVA resin still remained on the PV cells, which would interfere with leaching and the subsequent processes. Fig. 1 (a) and (b) show the TG/DTA analysis of PV cells with EVA resin and of pure EVA resin. Comparing the figures, both begin to lose weight at about 300 °C, and after heating to 500 °C the mass change levels off. This study therefore used a thermal process, heating to 500 °C for 5 hours in the atmosphere, to eliminate the EVA resin. This process removed 99.97% of the EVA resin from the PV cells.
Nitric acid for first leaching process
This study used a leaching process with nitric acid to leach Ag and Al from the PV cells. Nitric acid can effectively dissolve the Ag and about 70% by weight of the Al in the PV cells. Fig. 3 shows the leaching efficiency as the HNO3 concentration was adjusted, at 4 hours, an L/S ratio of 50, and 85 °C. Both silver and aluminum reach good leaching efficiency at 4 M nitric acid. Even after raising the HNO3 concentration further, the leaching efficiency of aluminum did not exceed 75%, because Al-Si alloy still remained on the PV cells. The optimal concentration was therefore chosen as 4 M. Fig. 4 shows the leaching efficiency as the liquid-solid ratio was adjusted. The L/S ratio was investigated over the range 10 to 400. As a result, the PV cells reached effective leaching efficiency (99.5% of silver and 70% of aluminum) above an L/S ratio of 100. Hence, the optimal L/S ratio was chosen as 100 mL/g.
Effect of the reaction times
Fig. 5 shows the leaching efficiency as the reaction time was adjusted, for 4 M HNO3, an L/S ratio of 100, and 85 °C. The reaction time was varied from 30 minutes to 240 minutes. As a result, after one hour both leaching efficiencies level off. Hence, the optimal reaction time was chosen as 60 minutes.
Sodium Hydroxide leaching process
After the nitric acid leaching process, Al-Si alloy still remained on the PV cells. As the equation below shows, sodium hydroxide is able to convert the Al into Al ions. This study set the optimal parameters as 3 M NaOH, an L/S ratio of 50, 240 minutes, and a temperature of 70 °C. After the two leaching processes, the overall leaching efficiency of Al reached 99.9%.
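If the NaOH step acts on whatever aluminum the HNO3 step left behind, the overall efficiency of two sequential steps combines multiplicatively. A sketch of that assumption, using the ~70% and 99.9% figures quoted above:

```python
def overall_leaching(e1_pct, e2_pct):
    """Overall efficiency of two sequential leaching steps, assuming the
    second step only sees what the first step left behind:
    overall = 1 - (1 - e1)(1 - e2)."""
    r1, r2 = e1_pct / 100.0, e2_pct / 100.0
    return 100.0 * (1.0 - (1.0 - r1) * (1.0 - r2))

# ~70 % of the Al dissolves in HNO3; to reach the reported 99.9 % overall,
# NaOH must dissolve roughly 99.67 % of the remaining Al-Si alloy.
print(overall_leaching(70.0, 99.67))
```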
Aluminum extraction through Na-Cyanex272
After the leaching process, Na-Cyanex272 was employed to extract aluminum away from the silver. Cyanex272 was saponified with sodium hydroxide into Na-Cyanex272, which was then diluted with kerosene. As Fig. 6 and Fig. 7 show, the extraction efficiency of aluminum with saponification is higher than without it.
Effect of pH value
The pH value was adjusted with nitric acid and ammonia solutions and varied from 1 to 4. Once the pH reached 4.2, aluminum began to precipitate as Al(OH)3. According to Fig. 6, the extraction efficiency rose sharply above pH 2.5 and reached 96% at pH 3.7. Hence, this study set pH 3.7 as the optimal parameter.
Effect of Na-Cyanex272 concentration
The concentration of Na-Cyanex272 was varied from 0.1 M to 0.6 M. Fig. 8 shows the extraction efficiency against the Na-Cyanex272 concentration. The extraction efficiency increased from 3% to 96% as the extractant concentration increased over 0.1-0.6 M, and levelled off once the concentration reached 0.3 M. Hence, the optimal extractant concentration was set as 0.3 M.
Fig. 7 Extraction efficiency of adjusting the concentration of Na-Cyanex272
Effect of extraction aqueous-oil volume ratio
The effect of the A/O ratio was investigated over the range 0.1 to 10. Fig. 9 shows the extraction efficiency, which decreased from 97% to 26.8%. In order to concentrate the aluminum in a small volume and reduce the consumption of extractant, this study set the optimal A/O ratio as 3 (mL/mL).
Fig. 8 Extraction efficiency of adjusting aqueous-oil volume ratio
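The reported downward trend with A/O ratio is what a constant distribution ratio D = [Al]org/[Al]aq would predict for a single extraction stage. A sketch of this idealised model; the value of D is fitted to the reported endpoints, not taken from the paper:

```python
def extraction_vs_ao(d_ratio, ao):
    """Single-stage extraction efficiency for a constant distribution ratio
    D = [M]org/[M]aq:  E% = D / (D + A/O) * 100.  Idealised model only."""
    return 100.0 * d_ratio / (d_ratio + ao)

# With D ~ 3.2 the model gives ~97 % at A/O = 0.1, falling to ~24 % at
# A/O = 10, mirroring the reported drop from 97 % to 26.8 %.
for ao in (0.1, 1.0, 3.0, 10.0):
    print(ao, extraction_vs_ao(3.2, ao))
```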
Effect of adjusting reaction times
Under fixed conditions of pH = 3.7, A/O = 3, and [Na-Cyanex272] = 0.3 M, the shaking time was varied over 1, 5, 10, 15, 20, and 30 minutes. Fig. 10 shows the results. Once the time reached 5 minutes, the efficiency levelled off. Hence, this study set 5 minutes as the optimal reaction time.
Stripping experiment
The extraction experiments showed that Na-Cyanex272 can extract aluminum from the silver-aluminum solution. The aluminum-loaded organic phase was then stripped back into the aqueous phase at different hydrochloric acid concentrations, O/A ratios and reaction times. The results indicated that 1 M HCl, an O/A ratio of 0.25, and a reaction time of 5 minutes can efficiently strip the aluminum from the organic phase, with a stripping efficiency of 99.9%.
Silver chloride and purified silicon after leaching and extraction processes
After the leaching and extraction processes, silicon, aluminum, and silver were separated effectively. The silicon was collected after the leaching process thanks to the selectivity of nitric acid; the composition after leaching is shown in the table. According to the reaction equation below, the standard Gibbs free energies of Ag+, Cl- and AgCl(s) are 77.16 kJ/mol, -131.0563 kJ/mol, and -109.86 kJ/mol, respectively, and the solubility product (Ksp) of AgCl is 10^-9.82 at 25 °C [16]. This indicates that Ag ions precipitate easily and rapidly as AgCl, separating them from the Al ions. Hence, this study precipitated the Ag ions as silver chloride to recover the silver in the PV cells. The recoveries of Si and Ag were 99.1% and 95%, respectively, and the purity of the silicon after the processes is 99.5%.
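The quoted Ksp can be checked against the quoted Gibbs free energies via ΔG° = −RT ln Ksp for the dissolution reaction. A sketch:

```python
import math

# Standard Gibbs free energies of formation quoted in the text (kJ/mol).
dG_Ag_ion, dG_Cl_ion, dG_AgCl = 77.16, -131.0563, -109.86

# Dissolution AgCl(s) -> Ag+(aq) + Cl-(aq):
dG_dissolution = (dG_Ag_ion + dG_Cl_ion - dG_AgCl) * 1000.0  # J/mol

R, T = 8.314, 298.15  # gas constant (J/mol/K), 25 C in kelvin
log10_Ksp = -dG_dissolution / (R * T * math.log(10.0))

print(log10_Ksp)  # about -9.8, consistent with the quoted Ksp = 10^-9.82
```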
Conclusion
This study proposed a route for recovering silicon and silver from PV cells by hydrometallurgy. Two leaching steps (acid and alkaline) were employed to purify the silicon, whose final purity was 99.5%. The leaching rates of Al and Ag were above 99% after the two leaching processes. Moreover, the recoveries of Ag and Si were 98% and 99.5%, respectively. This study can contribute to research on recycling PV modules.
Acknowledgements
This study was supported by the Department of Resource Engineering, National Cheng Kung University, R.O.C.
Quadratic Diophantine equations, the Heisenberg group and formal languages
We express the solutions to quadratic equations with two variables in the ring of integers using EDT0L languages. We use this to show that EDT0L languages can be used to describe the solutions to one-variable equations in the Heisenberg group. This is done by reducing the question of solving a one-variable equation in the Heisenberg group to solving an equation in the ring of integers, exploiting the strong link between the ring of integers and nilpotent groups.
Introduction
Equations in nilpotent groups, and equations in the ring of integers are deeply linked. This has been demonstrated repeatedly since Roman'kov used Matijasevič's result that the satisfiability of systems of quadratic equations in integers is undecidable [28] to show that the satisfiability of systems of equations in various free nilpotent groups is undecidable [33]. The proofs that many other nilpotent groups have an undecidable satisfiability of equations involve a similar method of reducing the question to systems of quadratic equations in integers [14,19,34]. After this Matijasevič's result can be applied. Duchin, Liang and Shapiro's positive result that the satisfiability of single equations is decidable in class 2 nilpotent groups with a virtually cyclic commutator subgroup [14], involves reducing the problem to single quadratic equations in integers, and then applying Siegel's result that the satisfiability of such equations is decidable [38]. There are other aspects of equations that can be studied beyond decidability, such as the structure of the set of solutions, and one way of investigating this is to express the set of solutions as a formal language, which is the purpose of this paper.
Formal languages have been used in group theory in a variety of settings over the last few decades. Perhaps one of the most striking uses of languages in groups was Anisimov's result of 1971, showing that the set of words over a given finite generating set for a group G that represent the identity (called the word problem of G) forms a regular language if and only if G is finite [2]. Muller, Schupp and Dunwoody later showed that a group has a context-free word problem if and only if it is virtually free [15,29]. Following Muller, Schupp and Dunwoody's result, the word problem has been generalised in a number of different ways, including to semigroups [10,20,24].
Languages have also been used in a number of other different settings. Groups that admit regular geodesic normal forms must have rational growth series. Regular languages can be used to describe (λ, µ)-quasi-geodesics in hyperbolic groups if λ and µ are rational [21]. Languages have also been used to study conjugacy in various classes of groups [9,22]. The complement of the word problem has also been studied for a wide variety of groups, including Thompson's groups, the Grigorchuk group and Baumslag-Solitar groups [3,4,8,23,25].
In 2016, Ciobanu, Diekert and Elder showed that the solutions to a system of equations in a free group can be expressed as an EDT0L language [6]. This led to a number of results showing that solutions to systems of equations in various classes of groups are EDT0L, starting with right-angled Artin groups in the same year [13]. Virtually free groups [12], hyperbolic groups [7], virtually abelian groups [18], and virtually direct products of hyperbolic groups [27] all followed later.
We consider single equations in the Heisenberg group in one variable. The fact that satisfiability of equations with one variable in the Heisenberg group is decidable was first shown by Repin [32]. Duchin, Liang and Shapiro generalised this to all single equations in any number of variables in class 2 nilpotent groups with a virtually cyclic commutator subgroup [14]; however, Roman'kov showed that the restriction on the commutator subgroup cannot be relaxed [34]. We show that the solutions to these equations, when written as words in Mal'cev normal form, are EDT0L, with an EDT0L system constructible in non-deterministic polynomial space.
Theorem 6.5. Let L be the solution language to a single equation with one variable in the Heisenberg group, with respect to the Mal'cev generating set and normal form. Then
(1) the language L is EDT0L;
(2) an EDT0L system for L is constructible in NSPACE(n → n^8 (log n)^2), where the input size is the length of the equation as an element of H(Z) * F(X).
Proving Theorem 6.5 involves reducing the problem of solving one-variable equations in the Heisenberg group to describing solutions to two-variable quadratic equations in the ring of integers. This uses a similar construction to the method of Duchin, Liang and Shapiro, which was used to show that the satisfiability of single equations in any class 2 nilpotent group with a virtually cyclic commutator subgroup is decidable [14].
Despite the extensive use EDT0L languages have had in describing solutions to group equations, there have been no attempts to describe solutions to equations in the ring of integers using EDT0L languages, other than linear equations, which are just equations in an abelian group. In order to make progress studying equations in the Heisenberg group, we will have to first learn to what extent EDT0L languages can be used to describe solutions to quadratic equations in the ring of integers. Our result for equations in the Heisenberg group involves reducing to the two-variable case of quadratic equations in integers.
We prove this theorem using Lagrange's method. This involves reducing an arbitrary two-variable quadratic equation to a generalised Pell's equation X^2 − DY^2 = N. This again reduces to Pell's equation X^2 − DY^2 = 1, the set of solutions of which is well-understood. The reduction involves writing solutions to the two-variable quadratic equation (1) in the form (λx + µy + ξ)/η, where (x, y) is a solution to some computable Pell's equation, and λ, µ, ξ, η ∈ Z with η ≠ 0 are all computable.
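Once a fundamental solution (x1, y1) of Pell's equation is known, all further positive solutions follow from a classical recurrence. A sketch; the choice D = 2 with fundamental solution (3, 2) is just an example:

```python
def pell_solutions(d, x1, y1, count):
    """First `count` positive solutions of x^2 - d*y^2 = 1, generated from
    a fundamental solution (x1, y1) by the classical recurrence
    x' = x1*x + d*y1*y,  y' = x1*y + y1*x."""
    sols, x, y = [], x1, y1
    for _ in range(count):
        sols.append((x, y))
        x, y = x1 * x + d * y1 * y, x1 * y + y1 * x
    return sols

# Example: d = 2 with fundamental solution (3, 2).
print(pell_solutions(2, 3, 2, 3))  # [(3, 2), (17, 12), (99, 70)]
```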
Showing that the set of solutions to Pell's equation can be expressed as an EDT0L language is not too difficult. However, studying (λx + µy + ξ)/η requires more work, particularly when the signs of λ, µ and ξ are not all the same, or when |η| ≥ 2. To deal with the division, we use the concept of #-separated EDT0L systems, first introduced in [27], and work in the world of EDT0L languages.
Understanding λx + µy + ξ, when λ, µ and ξ are not all of the same sign, is more difficult to resolve by manipulating EDT0L systems. This is because we represent the integer n by a^n; that is, a word comprising n occurrences of the letter a (when n ≥ 0) or |n| occurrences of the letter a^{-1} (when n < 0). Adding 4 to −2 corresponds to concatenating a^4 with a^{-2}, resulting in a^4 a^{-2}, which is not equal as a word to a^2. We cannot simply 'cancel' as and a^{-1}s either; in general the language obtained by freely reducing all words in an EDT0L language is not EDT0L (it need not even be recursive). Therefore, we work with facts about the solutions themselves to show that for fixed integers λ, µ and ξ, the set is sufficiently well-behaved that we can describe it using an EDT0L language. We can then apply our method for the 'division' to obtain the desired language. EDT0L languages were defined by Rozenberg in 1973 [36] as members of the broad collection of languages called L-systems. L-systems were introduced by Lindenmayer for the study of growth of organisms. A key aspect of L-systems is their ability to perform parallel computation, which was useful for the study of organisms, but has also been effective for expressing solutions to equations, as the solutions for each individual variable can be computed in parallel. EDT0L systems interested computer scientists in the 1970s, with a variety of papers proving different results, from pumping lemmas to alternative definitions [16,17,30]. Since Ciobanu, Diekert and Elder's paper showing that solutions to systems of equations in free groups can be expressed as EDT0L languages, interest in the class has been reinvigorated, leading to a number of recent publications on EDT0L languages [5,8], in addition to the previously mentioned papers on equations.
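The non-cancellation issue can be made concrete: concatenation does not implement addition on this encoding, and while free reduction fixes any single word, the text notes it cannot in general be applied across a whole EDT0L language. A sketch, writing 'A' for a^{-1} as a plain-text stand-in:

```python
def freely_reduce(word):
    """Freely reduce a word over {a, a^-1}, with 'A' standing for a^-1."""
    stack = []
    for ch in word:
        # cancel an adjacent inverse pair: 'aA' or 'Aa'
        if stack and stack[-1] != ch and stack[-1].lower() == ch.lower():
            stack.pop()
        else:
            stack.append(ch)
    return ''.join(stack)

# Concatenating a^4 with a^-2 gives the word a^4 a^-2, not a^2 ...
w = 'aaaa' + 'AA'
assert w != 'aa'
# ... although free reduction recovers a^2 for this single word.
print(freely_reduce(w))  # 'aa'
```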
We cover the preliminaries of the considered topics in Section 2. In Section 3, we prove our result about 'division' of EDT0L languages by a constant that is a key part of the proof that the solutions to two-variable quadratic equations in the ring of integers are EDT0L, which appears in Section 5. In Section 4, we study the solutions to Pell's equation, and their images under linear functions. The proof of the fact that solutions to two-variable quadratic equations are EDT0L involves reducing to the case of Pell's equation. This reduction is contained in Section 5. Section 6 includes the reduction from equations in the Heisenberg group to quadratic equations in the ring of the integers, and the proof that single equations in one variable in the Heisenberg group are expressible as EDT0L languages. Notation 1.1. We introduce a variety of notation we will frequently use.
(1) Functions will be written to the right of their arguments; (2) If S is a subset of a group, we define S^± = S ∪ S^{-1}; (3) We use ε to denote the empty word; (4) For elements g and h of a group G, the commutator is defined by [g, h] = g^{-1}h^{-1}gh.
Preliminaries
2.1. Nilpotent groups. We start with the definitions of a nilpotent group and the Heisenberg group. For a comprehensive introduction to nilpotent groups we refer the reader to [11]. Definition 2.1. Let G be a group. Define γ_i(G) for all i ∈ Z≥0 inductively by γ_0(G) = G and γ_{i+1}(G) = [γ_i(G), G]. The subnormal series (γ_i(G))_i is called the lower central series of G. We call G nilpotent of class c if γ_c(G) is trivial.
Definition 2.2. The Heisenberg group H(Z) is the class 2 nilpotent group defined by the presentation ⟨a, b, c | [a, b] = c, [a, c] = [b, c] = 1⟩. Note that whilst the generator c is redundant, it is often easier to work with the generating set {a, b, c} than {a, b}.
The Mal'cev generating set for the Heisenberg group is the set {a, b, c}.
2.2. Mal'cev normal form. We now define the normal form that we will be using to represent our solutions. This is used in [14], and we include the proof of uniqueness and existence for completeness.
The following facts about commutators in class 2 nilpotent groups will be used to induce the methods for 'pushing' bs past as in the Heisenberg group. Lemma 2.3. Let G be a class 2 nilpotent group, and g, h ∈ G. Then Proof. For (1), since commutators are central, Similarly, for (2), we have Using Lemma 2.3, we now have a number of useful identities for 'pushing' bs past as in expressions over the Mal'cev generating set.
Lemma 2.4. The following identities hold for the Mal'cev generators of the Heisenberg group: The following lemma allows us to define the Mal'cev normal form for the Heisenberg group.
Lemma 2.5. For each g ∈ H(Z) there exists a unique word of the form a^i b^j c^k that represents g, where i, j, k ∈ Z.
To transform w into an equivalent word of the form a^i b^j c^k, first note that c is central, so w is equal to u c^k, where u ∈ {a, b, a^{-1}, b^{-1}}* and k ∈ Z; this is obtained by pushing all cs and c^{-1}s in w to the right, then freely reducing. We can then look for any bs or b^{-1}s occurring before as or a^{-1}s, and use the rules of Lemma 2.4 to 'swap' them, by adding a commutator.
After doing these swaps, we can push the 'new' cs and c^{-1}s to the back, so that our word remains of this form. By repeating this process, we will eventually have no as or a^{-1}s occurring after any b or b^{-1}, and so the word will be of the form a^i b^j c^k, where i, j, k ∈ Z.
As 1 ∈ ⟨c⟩, we have that the above word lies in ⟨c⟩. But since c commutes with a and b, a^i b^j ∈ ⟨c⟩ if and only if i = j = 0. Thus i_1 − i_2 = j_1 − j_2 = 0. It follows that the above word equals c^{k_1 − k_2}. Since this is a freely reduced word in ⟨c⟩ as a power of c, it represents the identity if and only if k_1 − k_2 = 0. Thus k_1 − k_2 = 0, and the two words represent the same element of H(Z).
Definition 2.6. The Mal'cev normal form for the Heisenberg group is the normal form that maps an element g ∈ H(Z) to the unique word of the form a^i b^j c^k, where i, j, k ∈ Z, that represents g.
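The rewriting procedure in the proof of Lemma 2.5 amounts to a simple left-to-right scan. A sketch, assuming the presentation with c = [a, b] central (so ba = ab c^{-1}, and moving one a left past j letters b costs c^{-j}), writing 'A', 'B', 'C' for the inverses of a, b, c:

```python
def malcev_normal_form(word):
    """Return (i, j, k) with word = a^i b^j c^k in the Heisenberg group.

    Assumes c = [a, b] = a^-1 b^-1 a b is central, so b a = a b c^-1.
    'A', 'B', 'C' denote the inverses of a, b, c."""
    i = j = k = 0
    for letter in word:
        if letter == 'a':
            i, k = i + 1, k - j   # a^i b^j c^k . a = a^(i+1) b^j c^(k-j)
        elif letter == 'A':
            i, k = i - 1, k + j
        elif letter == 'b':
            j += 1
        elif letter == 'B':
            j -= 1
        elif letter == 'c':
            k += 1
        elif letter == 'C':
            k -= 1
    return (i, j, k)

print(malcev_normal_form('ba'))    # (1, 1, -1): ba = a b c^-1
print(malcev_normal_form('ABab'))  # (0, 0, 1):  a^-1 b^-1 a b = c
```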
2.3. Space complexity. We give a short definition of space complexity. For a more detailed introduction, we refer the reader to [31].
Definition 2.7. Let f : Z≥0 → Z≥0. We say that an algorithm runs in non-deterministic f-space (often written as NSPACE(f)) if it can be performed by a non-deterministic Turing machine with a read-only input tape, a write-only output tape, and a read-write work tape, such that no computation path in the Turing machine uses more than O((n)f) units of the work tape, for an input of length n.
2.4. Group equations. We start with the definition and some examples of equations in groups.
Definition 2.8. Let G be a finitely generated group, V be a finite set and F(V) be the free group on V. An equation in G is an element w ∈ G * F(V), denoted w = 1. A solution to w = 1 is a homomorphism φ : G * F(V) → G that fixes elements of G, such that wφ = 1. The elements of V are called the variables of the equation. A system of equations in G is a finite set of equations in G, and a solution to a system is a homomorphism that is a solution to every equation in the system.
We say that two systems of equations in G are equivalent if their sets of solutions are equal.
Given a choice of generating set Σ for G, we say the length of w = 1 is the length of the element w in G * F (V ), with respect to the generating set Σ ∪ V . This will be our input size for any algorithm that takes a group equation as an input.
Remark 2.9. We will often abuse notation, and consider a solution to an equation in a group G to be a tuple of elements (g_1, . . . , g_n), rather than a homomorphism G * F(X_1, . . . , X_n) → G, where X_1, . . . , X_n are variables. We can recover such a homomorphism φ from a tuple by setting gφ = g if g ∈ G and X_iφ = g_i. The action of φ on the remaining elements is now determined, as φ is a homomorphism.
Example 2.10. Equations in the group Z are linear equations in integers, and thus elementary linear algebra is sufficient to show that their satisfiability is decidable. A similar argument works for any finitely generated abelian group.
For example, if we use a as the free generator for Z, then X^2 a^2 X^{-3} a Y^2 = 1 is an equation in the group Z. We can rewrite this using additive notation to get 2X + 2 − 3X + 1 + 2Y = 0.
Using the fact that Z is commutative, the above equation is equivalent to −X + 2Y + 3 = 0. Thus the set of solutions can be written as {(2y + 3, y) | y ∈ Z}.
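The parametrised solution set can be checked directly against the original equation. A sketch:

```python
# Solution set {(2y + 3, y) | y in Z} of -X + 2Y + 3 = 0, checked against
# the original form 2X + 2 - 3X + 1 + 2Y = 0.
def solutions(y_values):
    return [(2 * y + 3, y) for y in y_values]

for x, y in solutions(range(-5, 6)):
    assert 2 * x + 2 - 3 * x + 1 + 2 * y == 0

print(solutions(range(-2, 3)))  # [(-1, -2), (1, -1), (3, 0), (5, 1), (7, 2)]
```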
2.5. Equations in the ring of integers. We briefly define an equation in integers.
Definition 2.11. An equation in the ring of integers is an identity (X 1 , . . . , X n )f = 0, where (X 1 , . . . , X n )f ∈ Z[X 1 , . . . , X n ] is a polynomial. The indeterminates X 1 , . . . , X n are called variables. An equation is called quadratic if the degree of (X 1 , . . . , X n )f is at most 2.
A system of equations in integers is a finite set of equations. A solution to the system is any ring homomorphism that is a solution to every equation in the system.
When we create algorithms that take equations in integers as input, we will explicitly state the size of the input. Remark 2.12. As with group equations, we will usually use a tuple (x_1, . . . , x_n) rather than a ring homomorphism φ : Z[X_1, . . . , X_n] → Z. The homomorphism φ can be obtained from the tuple by defining X_iφ = x_i for all i, and nφ = n for all n ∈ Z. Since φ is a ring homomorphism, the action of φ on the remainder of Z[X_1, . . . , X_n] is now determined.
2.6. Solution languages. Since solutions to group equations are homomorphisms (or tuples of group elements), in order to express our sets of solutions as languages we need a method of writing our solutions as words. We start by defining a normal form.
Definition 2.13. Let G be a group with a finite generating set Σ. A normal form for G with respect to Σ is a function η : G → (Σ^±)* that fixes Σ^±, such that gη is a word representing g for all g ∈ G.
Note that as our definition of a normal form uses functions, our normal form associates a unique word representative for each group element.
We now define the solution language to a group equation, with respect to a specified normal form.
Definition 2.14. Let G be a group with a finite generating set Σ and a normal form η with respect to Σ. Let E be a system of equations in G with variables X_1, . . . , X_n. The solution language of E, with respect to Σ and η, is the language {(X_1)φη # · · · # (X_n)φη | φ is a solution to E}. Note that the solution language to an equation in a single variable will not require the use of the letter #, as # is used to separate the words representing the solutions to the individual variables.
We now define an analogous notion for systems of equations in the ring of integers. We pick a letter a as a generator, and write the non-negative integer n as this letter to the power of n. For negative integers, we introduce an 'inverse' a^{-1} of this letter, and express each n < 0 as the inverse letter to the power of |n|. Let E be a system of equations in the ring of integers, with variables X_1, . . . , X_n. The solution language to E is the language {(X_1)φµ # · · · # (X_n)φµ | φ is a solution to E} over {a, a^{-1}, #}.
2.7. EDT0L languages. We now define EDT0L languages, which are the class of languages we will use to represent solutions. For a more detailed description of EDT0L languages, and where they fit in within the collection of languages called L-systems, we refer the reader to [35].
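The integer-to-word encoding µ used in these solution languages can be sketched directly (writing 'A' for a^{-1}, a plain-text stand-in):

```python
def mu(n):
    """Encode n as a word: a^n for n >= 0, and (a^-1)^|n| for n < 0,
    writing 'A' for a^-1."""
    return 'a' * n if n >= 0 else 'A' * (-n)

def solution_word(values):
    """Solution-language word for a tuple, components separated by '#'."""
    return '#'.join(mu(v) for v in values)

print(solution_word([3, -2, 0]))  # 'aaa#AA#'
```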
Definition 2.16. An EDT0L system is a tuple H = (Σ, C, ω, R), where: (1) Σ is an alphabet, called the (terminal) alphabet; (2) C is a finite superset of Σ, called the extended alphabet of H; (3) ω ∈ C* is called the start word; (4) R is a regular (as a language) set of endomorphisms of C*, called the rational control of H. The language accepted by H is {ωφ | φ ∈ R} ∩ Σ*.
A language that is accepted by some EDT0L system is called an EDT0L language.
We continue with an example of an EDT0L language. Rational control for L = {a 2 n +1 | n ∈ Z ≥0 }, with start state q 0 and accept state q 1 .
Example 2.17. The language L = {a^{2^n + 1} | n ∈ Z≥0} is EDT0L over the alphabet {a}. To see this, consider the EDT0L system ({a}, {a, c}, ac, R), where R is defined by the finite-state automaton in Figure 1, and φ, θ ∈ End({a, c}*) satisfy aφ = a, cφ = cc, aθ = a and cθ = a. Alternatively, R can be defined by the rational expression φ*θ.
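The displayed definitions of φ and θ did not survive extraction; one choice consistent with the start word ac and the rational expression φ*θ is cφ = cc and cθ = a, with a fixed by both. Simulating that assumption:

```python
def apply_endo(word, images):
    """Apply a free-monoid endomorphism given as a dict letter -> image
    word; letters not listed are fixed."""
    return ''.join(images.get(ch, ch) for ch in word)

phi = {'c': 'cc'}   # assumed: a fixed, c doubled
theta = {'c': 'a'}  # assumed: a fixed, c replaced by a

def accepted_word(n):
    """Word produced by the rational control phi^n theta on start word ac."""
    w = 'ac'
    for _ in range(n):
        w = apply_endo(w, phi)
    return apply_endo(w, theta)

print([accepted_word(n) for n in range(4)])
# lengths 2, 3, 5, 9 -- i.e. 2^n + 1, as claimed
```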
The class of EDT0L languages is stable under five of the six standard operations on languages. The fact that there are non-EDT0L languages which are pre-images of EDT0L languages under free monoid homomorphisms means that EDT0L languages are not a full algebraic family of languages, like regular, context-free or ET0L languages. When dealing with equations, or other 'parallel' languages, these five operations often prove to be sufficient. Moreover, if the EDT0L systems used in any of these operations can be constructed in NSPACE(f ), for some f : Z ≥0 → Z ≥0 , then there is a computable EDT0L system accepting the resultant language that can also be constructed in NSPACE(f ). For images under homomorphisms (5), this requires the homomorphism to be able to be written down in NSPACE(f ).
'Dividing' EDT0L languages by a constant
The purpose of this section is to show that, given an EDT0L language where all words are of the form a^i # b^j, 'dividing' the number i of as and the number j of bs in a given word by constant values γ and δ, respectively, and removing all words a^i # b^j where i is not divisible by γ or j is not divisible by δ, yields an EDT0L language. We proceed in a similar fashion to the arguments used in [27], Section 3, using #-separated EDT0L systems.
The concept of #-separated EDT0L systems was used in [27] to show that solution languages to systems of equations in direct products of groups where systems of equations have EDT0L solution languages are also EDT0L. We use a slightly different definition here: we only need a single # rather than arbitrarily many, so our definition is less general, and we also insist that the start word is of a specified form. The latter assumption does not affect the expressive power of these systems; preconcatenating the rational control with an appropriate endomorphism can convert a #-separated system with an arbitrary start word into one with a start word of the form we use.
Definition 3.1. Let Σ be an alphabet, and # ∈ Σ. A #-separated EDT0L system is an EDT0L system H with an extended alphabet C, a terminal alphabet Σ and a start word ω of the form ω = ⊥_1 # ⊥_2, where ⊥_1, ⊥_2 ∈ C\{#}, and cφ = # if and only if c = #, for every c ∈ C and every φ in the finite set of endomorphisms over which the rational control is a regular language.
For space complexity purposes, we will need bounds on the size of extended alphabets, and the size of images of letters under endomorphisms in the rational control in many of the EDT0L systems we use. We define the term g-bounded to capture this.
be a #-separated EDT0L system, and let g : Z ≥0 → Z ≥0 be a function in terms of a given input size I. Let B be the finite set of endomorphisms of C * over which R is regular. We say that H is g-bounded if We will need the fact that the class of languages accepted by #-separated EDT0L systems is closed under finite unions, with space complexity properties being preserved when taking these unions.
Lemma 3.3. Let L and M be languages over an alphabet Σ, accepted by #-separated EDT0L systems H and G that are both g-bounded and constructible in NSPACE(f), for some f, g : Z≥0 → Z≥0. Then L ∪ M is accepted by a g-bounded #-separated EDT0L system that is constructible in NSPACE(f). Proof. Let H = (Σ, C, ⊥_1 # $_1, R) and G = (Σ, D, ⊥_2 # $_2, S). Let B_1 and B_2 be the finite sets of endomorphisms over which R and S are regular. We can assume without loss of generality that endomorphisms in B_1 ∪ B_2 fix elements of Σ, and also that C\Σ and D\Σ are disjoint.
Let ⊥ and $ be symbols not already used, and let Note that θ 1 and θ 2 can both be constructed in constant space, and thus the rational control of F is constructible in NSPACE(f ). The start word is constructible in constant space. As a union of C and D with a constant number of additional symbols, E can be constructed using the same information required to construct C and D, and is thus constructible in NSPACE(f ).
We can now prove the central result of this section, about 'division' of certain EDT0L languages by a constant. To show the space complexity properties, we need the EDT0L system we start with to be exponentially bounded by the space complexity in which it can be constructed. We will need the following notation: Notation 3.4. Let Σ be an alphabet, a ∈ Σ, and w ∈ Σ*. Define #_a(w) to be the number of occurrences of the letter a within w.
The idea of the proof of the following lemma is to index every letter in the start word with k(|γ|−1) ¢s and k $s, for some k ∈ Z ≥0 , and ensure this fact is preserved under the action of the rational control (possibly changing the value of k). We then add a new endomorphism to map ¢-indexed letters to ε, and $-indexed letters to a (or a −1 if γ < 0).
(1) If L is EDT0L, then so is the language Proof. We will use H to define an EDT0L system for Firstly note that if |γ| = 1, then M = L, and thus M is accepted by H, which satisfies the conditions in (2). So assume |γ| ≥ 2. Let ¢ and $ be symbols not already used. Let ĉ_ν be a distinct copy of c for each c ∈ C and ν ∈ {¢, $}*. Let where F is a new symbol. We will use F as a 'fail symbol'.
Let B ⊆ End(C*) be the finite set over which R is a regular language. For each φ ∈ B, the finite set Φ_φ ⊆ End((C_ind)*) is defined as follows. If ν ∈ {¢, $}* satisfies |ν| ≤ |γ|, and c ∈ C is such that cφ = d_1 · · · d_n, with n ≥ 1 and d_1, . . . , d_n ∈ C (in particular, cφ ≠ ε), then let ψ ∈ End((C_ind)*) be defined by ĉ_ν ψ = (d̂_1)_{α_1} · · · (d̂_n)_{α_n}, for some α_1, . . . , α_n ∈ {¢, $}* such that |α_i| ≤ |γ| for all i, and one of the following holds: In addition, ψ fixes F, and acts the same way as φ on letters in C. We take Φ_φ to be the set of all such ψ, as α_1, . . . , α_n vary for each c ∈ C, satisfying the stated conditions. Let R̂ be the regular set of endomorphisms defined by replacing each occurrence of φ within R with Φ_φ. Now define θ ∈ End((C_ind)*) as follows. By construction, any word in ⊥_1 R̂ either contains an F, or is a word in ⊥_1 R with hats on letters, and indices that concatenate to form a word ν ∈ {¢, $}* of length n|γ| for some n ∈ Z≥0, with #_¢(ν) = n(|γ| − 1) and #_$(ν) = n. Thus the set of words in We now consider the space complexity in which G can be built. Firstly, note that to output C_ind we simply need to output 2^{|γ|+1} − 1 (the number of words of length at most |γ| over a two-letter alphabet) additional copies of C, plus the letter F. Doing this simply requires us to track the copy we are on, and since log(2^{|γ|+1} − 1) is linear in |γ|, this can be done in NSPACE(f). The start word can be output in constant space.
To construct Φ_φ, we simply need to store the information required to construct φ, together with a counter to tell us how many ψ in Φ_φ we have already constructed. Since log |Φ_φ| is bounded by fg for some linear function g in |γ|, we can construct Φ_φ, and hence R̂, in NSPACE(fg). As θ can be constructed in constant space, it follows that the rational control, and hence G, can be constructed in NSPACE(fg).
To see that the language accepted by G is in fact M, first note that for any φ̂ ∈ R̂, ⊥_1φ̂ will be obtained from a word ⊥_1φ, for some φ ∈ R, by attaching k(|γ| − 1) ¢ indices and k $ indices, for some k ∈ Z_{≥0}. This will only be accepted if (⊥_1#⊥_2)φ ∈ Σ*, and every letter in ⊥_1φ has precisely one index on it. In such a case, |⊥_1φ| = k|γ| (in fact ⊥_1φ = a^{±k|γ|}), and precisely k of these letters will be indexed by $, the rest being indexed by ¢. Hitting such a word with θ will delete all letters indexed with a single ¢, and map the $-indexed a's to a and $-indexed a^{-1}'s to a^{-1}, leaving a word of the form a^{±k}#b^y to be accepted. Thus M is accepted by G.
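The action of θ can be made concrete. Below is a minimal Python sketch, assuming a tuple representation of indexed letters (this representation, and the function names, are illustrative and not from the paper): ¢-indexed letters are erased and $-indexed letters survive as a, so a word a^{k|γ|} carrying k $-indices and k(|γ| − 1) ¢-indices collapses to a^k.

```python
# Sketch of the endomorphism theta: cent-indexed letters map to the empty
# word, dollar-indexed letters map to a (the sign of gamma is taken
# positive here). Indexed letters are modelled as (letter, index) pairs.
def theta(word):
    out = []
    for letter, index in word:
        if index == "$":
            out.append(letter)      # $-indexed a survives as a
        # cent-indexed letters are deleted (map to the empty word)
    return "".join(out)

def indexed_word(k, gamma_abs):
    # a^{k*gamma_abs} with k letters indexed by $ and k*(gamma_abs - 1) by cent
    word = []
    for _ in range(k):
        word.append(("a", "$"))
        word.extend(("a", "¢") for _ in range(gamma_abs - 1))
    return word

# With |gamma| = 3 and k = 2, the word a^6 collapses to a^2 under theta.
print(theta(indexed_word(2, 3)))    # aa
```

Here the $-indices mark exactly the letters that survive, which is how the construction divides the exponent by |γ|.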
We now show that N = {a^{x/γ}#b^y | (x, y) ∈ X, γ|x} is accepted by an EDT0L system, constructible in NSPACE(fg). Note that if γ ≥ 0, then M = N, and there is nothing to prove. Otherwise, γ < 0. Define π ∈ End((C_ind)*) by aπ = a^{-1}, a^{-1}π = a and all other letters are fixed by π. Then (Σ, C_ind, ⊥_1#⊥_2, R̂θπ) accepts N, as we have just flipped the sign of the a's in M. Moreover, as G is constructible in NSPACE(fg), so is our system for N. In addition, the stated bounds on the size of the extended alphabet and the images of endomorphisms of G hold for our system for N as well.
To obtain an EDT0L system for {a^{x/γ}#b^{y/ζ} | (x, y) ∈ X, γ|x, ζ|y}, we simply apply to our system for N the same method we used to obtain N from L, except modifying ⊥_2 and b, rather than ⊥_1 and a.
Pell's equation
The purpose of this section is to study solutions to Pell's equation, which eventually allows us to show that the solution language to a quadratic equation in the ring of integers is EDT0L.
We start with a lemma that shows languages that arise as part of recursively defined integer sequences with non-negative integer coefficients are EDT0L. We will later show that solutions to Pell's equation are of this form.
Lemma 4.1. Let (p_n)_{n≥0}, (q_n)_{n≥0} and (r_n)_{n≥0} be integer sequences, defined recursively by a relation with non-negative integer coefficients: p_n = α_1p_{n−1} + α_2q_{n−1} + α_3r_{n−1}, q_n = β_1p_{n−1} + β_2q_{n−1} + β_3r_{n−1}, r_n = γ_1p_{n−1} + γ_2q_{n−1} + γ_3r_{n−1}. Then:
(1) The language L = {a^{p_n} | n ∈ Z_{≥0}} is EDT0L;
(2) An EDT0L system H for L is constructible in non-deterministic space logarithmic in the coefficients and initial values;
(3) The system H is f-bounded for some linear function f;
(4) The rational control of H is of the form θϕ*ψ, and ⊥θϕ^nψ = a^{p_n}, where ⊥ is the start word of H.
Proof. We will define an EDT0L system to accept L. Let Σ = {a, a^{-1}}. Our extended alphabet will be C = Σ ∪ {a_p, a_p^{-1}, a_q, a_q^{-1}, a_r, a_r^{-1}, ⊥}, and our start word will be ⊥. Define θ ∈ End(C*) by ⊥θ = a_p^{p_0} a_q^{q_0} a_r^{r_0} (a negative exponent −m denoting m copies of the inverse letter), and define ϕ ∈ End(C*) to realise the recurrence on the exponents of a_p, a_q and a_r; both fix all other letters. Finally, define ψ ∈ End(C*) by a_pψ = a and a_p^{-1}ψ = a^{-1}, with a_q, a_q^{-1}, a_r, a_r^{-1} mapped to ε, and all other letters fixed. Our rational control will be θϕ*ψ.
First note that u = ⊥θϕ^n contains either a_p or a_p^{-1}, but not both, and the same holds for a_q and a_q^{-1}, and a_r and a_r^{-1}. So we can abuse notation and take the definition of #_{a_p} when applied to such a word to be #_{a_p}(u) if u contains an a_p, −#_{a_p^{-1}}(u) if u contains an a_p^{-1}, and 0 if it contains neither. We similarly abuse notation with #_{a_q} and #_{a_r}.
We will show by induction that u = ⊥θϕ^n satisfies #_{a_p}(u) = p_n, #_{a_q}(u) = q_n, and #_{a_r}(u) = r_n. This holds by definition for n = 0. Inductively suppose it is true for some k − 1. Then ⊥θϕ^{k−1} = u, for some u ∈ {a_p, a_p^{-1}, a_q, a_q^{-1}, a_r, a_r^{-1}}*, with #_{a_p}(u) = p_{k−1}, #_{a_q}(u) = q_{k−1}, and #_{a_r}(u) = r_{k−1}. Using the definition of ϕ and our inductive hypothesis, the claimed counts hold for uϕ = ⊥θϕ^k. It now follows that ⊥θϕ^nψ = a^{p_n}, and thus (1) and (4) are true.
We now show that the EDT0L system (Σ, C, ⊥, θϕ*ψ) is constructible in non-deterministic logarithmic space. Writing down Σ, C, ψ and the start word can be done in constant space. Writing down θ can be done by remembering p_0, q_0 and r_0, and thus can be done in non-deterministic logarithmic space, since storing an integer r requires log(r) bits plus a constant. It remains to show that ϕ can be defined in non-deterministic logarithmic space. To write down ϕ, we simply need to know the coefficients α_i, β_i and γ_i for i ∈ {1, 2, 3}. Since these can all be stored using log α_i, log β_i and log γ_i bits (plus constants), respectively, (2) follows.
To show that the solution language to a general quadratic equation in two variables is EDT0L, we follow Lagrange's method to reduce it to the generalised Pell's equation, and then to Pell's equation. This reduction is detailed in [37]. We start with the definition of Pell's equation: this is the equation X² − DY² = 1, where D ∈ Z_{>0} is not a perfect square. There are infinitely many solutions to Pell's equation X² − DY² = 1, and these are {(x_n, y_n) | n ∈ Z_{≥0}}, where (x_0, y_0) = (1, 0), and (x_n, y_n) is recursively defined by x_n = x_1x_{n−1} + Dy_1y_{n−1}, y_n = y_1x_{n−1} + x_1y_{n−1}, where (x_1, y_1) is the fundamental solution.
We give an explicit example of Pell's equation and its solutions.
Example 4.4. Consider Pell's equation X² − 2Y² = 1. It is not hard to check using brute force that the fundamental solution is (3, 2) (although there are more efficient methods of doing this: see for example [1]). Thus by Lemma 4.3, we can construct the set of all solutions using the sequence ((x_n, y_n))_{n≥0} ⊆ Z², defined recursively by (x_0, y_0) = (1, 0), and x_n = 3x_{n−1} + 4y_{n−1}, y_n = 2x_{n−1} + 3y_{n−1}.
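The recursion of Example 4.4 is easy to check computationally. A short Python sketch (illustrative, not part of the construction) generating the first few solutions and verifying that each satisfies X² − 2Y² = 1:

```python
def pell_solutions(n_terms):
    """Generate solutions to X^2 - 2Y^2 = 1 via the recursion of Example 4.4."""
    x, y = 1, 0                      # (x_0, y_0)
    sols = [(x, y)]
    for _ in range(n_terms - 1):
        # the fundamental solution (3, 2) drives the recursion
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
        sols.append((x, y))
    return sols

sols = pell_solutions(5)
print(sols)                          # [(1, 0), (3, 2), (17, 12), (99, 70), (577, 408)]
assert all(x * x - 2 * y * y == 1 for x, y in sols)
```

Each new solution is obtained from the previous one using the fundamental solution (3, 2), exactly as in Lemma 4.3.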
At this point, we could just apply Lemma 4.1 and Lemma 2.18 to show that the language {a^x#a^y | (x, y) ∈ Z²_{≥0} is a solution to X² − 2Y² = 1} is EDT0L; however, we will explicitly construct an EDT0L system. Our extended alphabet will be C = {a_x, ā_x, a_y, ā_y, a, #} and our start word will be a_x#ā_x. Let ϕ ∈ End(C*) be defined by a_xϕ = a_x^3a_y^2, a_yϕ = a_x^4a_y^3, ā_xϕ = ā_x^3ā_y^2, ā_yϕ = ā_x^4ā_y^3, fixing the remaining letters, and let θ ∈ End(C*) map a_x and ā_y to a, and a_y and ā_x to ε. Our rational control will be ϕ*θ (alternatively, see Figure 2, which depicts the rational control as a finite-state automaton with start state q_0 and accept state q_1). By construction, #_{a_x}(a_x#ā_xϕ^n) = #_{ā_x}(a_x#ā_xϕ^n) = x_n and #_{a_y}(a_x#ā_xϕ^n) = #_{ā_y}(a_x#ā_xϕ^n) = y_n, and thus a_x#ā_xϕ^nθ = a^{x_n}#a^{y_n}.
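The derivation a_x#ā_x ϕ^n θ = a^{x_n}#a^{y_n} can be simulated as plain string rewriting. In the sketch below the explicit maps for ϕ and θ are an assumption: one choice consistent with the stated letter counts, in which ϕ realises the recursion x_n = 3x_{n−1} + 4y_{n−1}, y_n = 2x_{n−1} + 3y_{n−1} on exponents, and θ sends a_x and ā_y to a while deleting a_y and ā_x. ASCII letters x, X, y, Y stand for a_x, ā_x, a_y, ā_y.

```python
def apply(endo, word):
    """Apply an endomorphism (dict letter -> image word) letter by letter."""
    return "".join(endo.get(ch, ch) for ch in word)

# x = a_x, X = a_x-bar, y = a_y, Y = a_y-bar (ASCII stand-ins).
phi = {"x": "xxxyy", "y": "xxxxyyy", "X": "XXXYY", "Y": "XXXXYYY"}
theta = {"x": "a", "y": "", "X": "", "Y": "a"}

word = "x#X"                        # start word a_x # a_x-bar
for _ in range(2):                  # two applications of phi
    word = apply(phi, word)
print(apply(theta, word))           # a^17 # a^12, i.e. (x_2, y_2) = (17, 12)
```

The letters to the left of # track x_n and those to the right track y_n, so applying θ after n applications of ϕ yields a^{x_n}#a^{y_n}.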
In addition to the recursive structure of all solutions, we need a bound on the size of the fundamental solution. This allows us to give a bound on the space complexity in which the EDT0L system can be constructed. Understanding solutions to arbitrary two-variable quadratic equations using Lagrange's method requires us to have an understanding of the images of the solutions to Pell's equation under linear functions: that is αx + βy + γ for constant α, β, γ ∈ Z, where (x, y) is a solution. If α, β and γ are either all non-negative or all non-positive, this corresponds to concatenating EDT0L languages in parallel, which is not too difficult using standard EDT0L constructions.
On the other hand, if the signs of these three integers are not all the same, more work needs to be done. This occurs because we represent the integer n ∈ Z by a^n, where a is a letter. Thus if we want to 'add' −3 and 5, this corresponds in language terms to trying to concatenate a^{-3} and a^5, which results in a^{-3}a^5, which is not equal (as a word) to a². One cannot, in general, freely reduce all words in an EDT0L language to form an EDT0L language. There are in fact cases where such a reduction will result in a language that is not recursive; that is, a language such that either it or its complement is not accepted by a Turing machine.
To tackle the harder cases presented to us by 'subtraction', we instead study the integer sequences themselves, and show they satisfy recurrence relations that can be used to define EDT0L systems.
Proof. We will proceed by induction on n to show (1) and (2). First note that Additionally, (1) and (2) hold when n = 2. Suppose the result holds when n = k. Then It remains to show (3). We have, using (1) and (2), Using Lemma 4.6, we can now prove some results about the sequence (z n ) that show that it is indeed a type of sequence as mentioned by Lemma 4.1.
We have that z_n ≥ 0 if and only if z_n(γx_n + α√D y_n) ≥ 0. If α ≥ γ, the relevant expression must be at least 0, so z_n ≥ 0 for all n ∈ Z_{≥0}, and there is nothing to prove. Otherwise, suppose γ > α, and write γ = α + δ for some δ > 0. It follows that z_n < 0 if and only if α² + αδ − √D x_ny_n(δ² + 2αδ) < 0. Noting that x_n and y_n are both strictly increasing, and if n ≥ 1, x_ny_n > 1, it suffices to find N ∈ Z_{>0} such that if n = N the above inequality holds. By Lemma 4.3, we have that x_n ≥ x_1x_{n−1} and y_n ≥ x_1y_{n−1}, and so x_ny_n ≥ x_1^{2n−1}y_1. Noting that x_1 ≥ 2 and y_1 ≥ 1, it follows that x_ny_n ≥ 2^n.
For (4), we show (w_n)_{n≥N} is monotone. First note that (w_n)_{n≥N} and (z_n)_{n≥N} are both sequences of non-negative integers or sequences of non-positive integers. In addition, w_n = w_{n−1} + 2x_1z_{n−1} for all n ∈ Z_{>0}. So if n ∈ Z_{≥N}, then |w_n| = |w_{n−1}| + |2x_1z_{n−1}| ≥ |w_{n−1}|. As (w_n)_{n≥N} is a sequence of non-negative integers or a sequence of non-positive integers, it must be monotone.
We finally consider (5). It suffices to show that |z_M| > |γ| and |w_M| ≥ |γ|: together with the fact that M ≥ N, and using the facts that (z_n)_{n≥N} is monotone by (3) and (w_n)_{n≥N} is monotone by (4), this gives that (z_n)_{n≥M} and (w_n)_{n≥M} are both sequences of non-positive or non-negative integers. We know that |z_{n+N}| > 2^{n−1} and w_n ≥ 2^{n−2} using (2), together with the fact that x_1 > 1, and so 2x_1 − 1 > 2, so taking any M ≥ N + log₂(|γ| + 2) suffices. As N = ⌈log₂(α/β)⌉, taking M = ⌈log₂((|γ| + 3)α/β)⌉, as per the statement of the lemma, satisfies the desired condition.
Before we apply Lemma 4.1 to show that some of these solution languages are EDT0L, we need to add constants to the differences of multiples of solutions.
To allow us to show space complexity properties, we need bounds on many of the integers we have introduced. Let z_n = αx_n − βy_n and t_n = z_n + γ for all n ∈ Z_{≥0}. Let w_n = z_n − z_{n−1} and s_n = w_n + γ for all n ∈ Z_{>0}.
Then there is a function f that is logarithmic in α, β and |γ|, and a function g that is linear in D, such that log(x_M), log(y_M), log |z_M|, log |w_M|, log |t_M| and log |s_M| are all bounded by fg.
We have now completed the set-up to show that the solution language to Pell's equation is always EDT0L. More than that, we can show that applying linear functions to the variables will still give this outcome. We need the bounds on the size of our extended alphabet and images of endomorphisms so that we can apply Lemma 3.5 later on.
(1) The language L = {a^{αx+βy+γ}#b^{δx+ǫy+ζ} | (x, y) ∈ S} is EDT0L;
(2) A #-separated EDT0L system H for L is constructible in NSPACE(fg), where f is logarithmic in max(|α|, |β|, |γ|, |δ|, |ǫ|, |ζ|), and g is linear in D;
(3) The size of the extended alphabet of H, and the lengths of images of letters under the endomorphisms used in the rational control of H, are bounded by h_1h_2, where h_1 is linear in max(|α|, |β|, |γ|, |δ|, |ǫ|, |ζ|), and h_2 is exponential in D.
Proof. Let z_n = αx_n + βy_n and t_n = z_n + γ for n ∈ Z_{≥0}, and s_n = z_n − z_{n−1} + γ for n ∈ Z_{>0}. Let M_γ and M_ζ be the corresponding constants from Lemma 4.8, and M = max(M_γ, M_ζ). We will first construct an EDT0L system for K. If α ≤ 0 and β ≥ 0, or α ≥ 0 and β ≤ 0, Lemma 4.8 tells us that the sequences (t_n)_{n≥M}, (z_n)_{n≥M} and (s_n)_{n≥M} satisfy the conditions of Lemma 4.1, and thus K is accepted by an EDT0L system H = ({a, a^{-1}}, C, ⊥, θϕ*ψ).
This, together with the recurrence relations in Lemma 4.3, gives that (z_n)_{n≥M}, (x_n)_{n≥M} and (y_n)_{n≥M} satisfy the conditions of Lemma 4.1, and so we also have in this case that L is accepted by an EDT0L system H = ({a, a^{-1}}, C, ⊥, θϕ*ψ).
We next consider the space complexity in which H can be constructed. By Lemma 4.5, log(x 1 ) and log(y 1 ) are both bounded by 2 + 3D. By Lemma 4.9, log(x M ), log(y M ), log |z M |, log |t M | and log |s M | are all bounded by f g, where f is logarithmic in |α|, |β|, |γ| and |ζ|, and g is linear in D.
Without loss of generality, we can assume that C and D are disjoint, and # ∉ C ∪ D. For each endomorphism φ ∈ {ψ, θ, ϕ, σ, ρ, τ}, let φ̄ ∈ End((C ∪ D ∪ {#})*) be defined to be the extension of φ to C ∪ D ∪ {#} which acts as the identity wherever φ was not previously defined. Since H and Ĥ are constructible in NSPACE(fg), so is G. In addition, |C ∪ D ∪ {#}| is bounded by |C| + |D| + 1. Redefining h_2 to be h_2 + 1 gives that G satisfies all of the conditions of the lemma.
We now consider the language Q = {a^{t_n}#b^{t̄_n} | n ∈ {0, …, M − 1}}. Note that, using Lemma 3.3, it now suffices to show that Q is accepted by a #-separated EDT0L system that is constructible in NSPACE(fg), and whose extended alphabet and images of letters under endomorphisms in the alphabet of the rational control are bounded by h_1h_2.
Let E = {⊥_1, ⊥_2, a, a^{-1}, b, b^{-1}, #}. We will use E as our extended alphabet, and ⊥_1#⊥_2 as our start word. For each n ∈ {0, …, M − 1}, define π_n ∈ End(E*) by ⊥_1π_n = a^{t_n} and ⊥_2π_n = b^{t̄_n}, fixing all other letters. It follows that Q is accepted by the #-separated EDT0L system F = (Σ, E, ⊥_1#⊥_2, {π_0, …, π_{M−1}}). Note that t_0 = α + γ and t̄_0 = δ + ζ. Thus log |t_0| and log |t̄_0| are both bounded by a logarithmic function f_1 in terms of |α|, |β|, |γ|, |δ|, |ǫ| and |ζ|. By redefining f to be f + f_1, we have that log |t_0| and log |t̄_0| are bounded by fg. In addition, log |t_M| and log |t̄_M| are both bounded by fg. Since (t_n) and (t̄_n) are monotone, and terms are effectively computable by Lemma 4.8, each π_n can be constructed in NSPACE(fg). As E and ⊥_1#⊥_2 are constructible in constant space, it follows that F is also constructible in NSPACE(fg).
Quadratic equations in the ring of integers
Having completed the work on Pell's equation, we now consider more general quadratic equations in the ring of integers, working up to an arbitrary two-variable equation. Our main goal is to show that the solution language to an arbitrary two-variable quadratic equation is EDT0L, with an EDT0L system that is constructible in non-deterministic polynomial space. We start with the general Pell's equation.
Then (x_n, y_n) is a primitive solution to X² − DY² = N for all n ∈ Z_{≥0}.
We will put an equivalence relation on the set of primitive solutions to a general Pell's equation. This will allow us to consider one class at a time, then use Lemma 3.3 to take the union. The fundamental solution of a class of primitive solutions to a general Pell's equation is the minimal element of the class.
We will need the following bounds for the space complexity results.
Since the size of fundamental solutions to a general Pell's equation is bounded, there can only be finitely many, and hence only finitely many classes.
Lemma 5.7. There are finitely many classes of primitive solutions to a general Pell's equation.
We now show that the results stated in Lemma 4.10 hold for primitive solutions to a general Pell's equation. We use the characterisation in Lemma 5.2 to reduce the problem to Pell's equation, and then apply Lemma 4.10.
Note that the above inequalities also hold with δ replaced by α, and ǫ replaced by β. The result now follows from Lemma 4.10.
We now consider all solutions to a general Pell's equation. We start with a reduction from a non-primitive solution to a primitive solution.
Lemma 5.9. Let (x, y) ∈ Z²_{≥0}, and let k = gcd(x, y). Then (x, y) is a solution to the general Pell's equation X² − DY² = N if and only if k²|N and (x/k, y/k) is a primitive solution to the general Pell's equation X² − DY² = N/k².

It is now possible to generalise Lemma 5.8 to all solutions to a general Pell's equation.

Proof. First note that the following are equivalent: (1) (x, y) ∈ S; (2) (x, −y) ∈ S. Since we can use Lemma 3.3 to take finite unions of EDT0L languages, and preserve space complexity of EDT0L systems, it therefore suffices to show that M = {a^{αx+βy+γ}#b^{δx+ǫy+ζ} | (x, y) is a non-negative integer solution to X² − DY² = N} is accepted by an EDT0L system that satisfies (2) and (3). Using Lemma 5.9, all non-negative integer solutions to X² − DY² = N are of the form (xk, yk) where (x, y) is a primitive solution to X² − DY² = N/k², for some k such that k²|N. Moreover, if (x, y) is a primitive solution to X² − DY² = N/k², then (xk, yk) is a solution to X² − DY² = N. We will therefore show two claims:
(1) The language M_k = {a^{αkx+βky+γ}#b^{δkx+ǫky+ζ} | (x, y) is a primitive solution to X² − DY² = N/k²} is EDT0L for all k ∈ Z_{>1} such that k²|N;
(2) The union of the languages M_k is EDT0L, and accepted by a #-separated EDT0L system H that satisfies (2) and (3).
Since M equals this union, the result follows.
First note that if k ∈ Z_{≥2} is such that k²|N, then k < N. Thus log(αk) = log(α) + log(k) ≤ log(α) + log(N). If we use β, δ or ǫ in place of α, this inequality will still hold. Thus the first claim follows from Lemma 5.8.
For the second claim, we can apply Lemma 3.3 repeatedly, once for each k ∈ Z_{≥2} such that k²|N. We need to do this for all such k. This could be done by cycling through all k ∈ {2, …, N − 1}, checking if k²|N, and then applying the lemma in those cases. We would need to store the 'current' k to do this, which would use at most log(N) bits.
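The reduction of Lemma 5.9 can be checked numerically. A sketch (the example equation X² − 2Y² = 4 and the solution (6, 4) are chosen purely for illustration):

```python
from math import gcd

D, N = 2, 4
# (6, 4) solves X^2 - 2Y^2 = 4: 36 - 32 = 4.
x, y = 6, 4
assert x * x - D * y * y == N

k = gcd(x, y)                       # k = 2
assert N % (k * k) == 0             # k^2 divides N
xp, yp = x // k, y // k             # candidate primitive solution (3, 2)
assert xp * xp - D * yp * yp == N // (k * k)   # solves X^2 - 2Y^2 = 1
assert gcd(xp, yp) == 1             # and is primitive
print((k, xp, yp))
```

Dividing out the gcd lands back on a primitive solution of the rescaled equation, which is exactly the case split used in the two claims above.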
Before attempting to tackle the general two-variable quadratic equations, we mention the result we can obtain so far for a general Pell's equation. The space complexity in this case is log-linear, which is better than the log-quartic space complexity we have for arbitrary two-variable quadratic equations.
Proposition 5.11. The solution language to the general Pell's equation X 2 −DY 2 = N is EDT0L, accepted by an EDT0L system that is constructible in non-deterministic log-linear space, with max(D, |N |) as the input size.
Proof. This follows by first taking α = β = δ = ǫ = 1 and γ = ζ = 0 in Lemma 5.10, and then applying a free monoid homomorphism that maps b to a, using Lemma 2.18.
In order to understand the solutions to a generic two-variable quadratic equation, we must first know the solutions to the equation X² + DY² = N, where N, D ∈ Z. Whilst we have considered the 'hardest' case, namely D < 0 with −D non-square and N ≠ 0, it remains to consider the remaining cases. We start with the case when D ≥ 0.
Lemma 5.12. Let S be the set of all solutions to the equation X² + DY² = N, with N ∈ Z, D ∈ Z_{≥0}, and α, β, γ, δ, ǫ, ζ ∈ Z. Then the conclusions (1)-(3) of Lemma 5.10 hold for S.

Proof. If N < 0, there is nothing to prove, as the equation has no solutions. So suppose N ≥ 0. Then all solutions (x, y) to this equation satisfy |x| + |y| ≤ N. Let (x, y) be such a solution. Then {a^{αx+βy+γ}#b^{δx+ǫy+ζ}} is accepted by a finite EDT0L system over the alphabet {a, a^{-1}, b, b^{-1}, #}. We have that this EDT0L system satisfies the conditions stated in (2) and (3). Thus we can use Lemma 3.3 to obtain the result.
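Since every solution satisfies |x| + |y| ≤ N when solutions exist, the solution set is finite and can be found by a bounded brute-force search, as in the proof above. A sketch with the illustrative equation X² + 2Y² = 9:

```python
def solve_positive_definite(D, N):
    """All integer solutions to X^2 + D*Y^2 = N, assuming D >= 0 and N >= 0.
    Every solution satisfies |x| + |y| <= N, so a bounded search suffices."""
    return sorted(
        (x, y)
        for x in range(-N, N + 1)
        for y in range(-N, N + 1)
        if x * x + D * y * y == N
    )

print(solve_positive_definite(2, 9))
# [(-3, 0), (-1, -2), (-1, 2), (1, -2), (1, 2), (3, 0)]
```

Each of the finitely many solutions contributes a singleton language, and the finite union remains EDT0L by Lemma 3.3.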
We now consider the solutions to the equation X 2 − DY 2 = N when D is square.
We finally need to consider the case when N = 0.
Proof. First note that (x, y) is a solution if and only if x = ±√D y. It follows that if D is non-square, then this admits only the trivial solution, and there is nothing to prove. If D is square, then the result follows from Lemma 5.13.
Proof. Let D = β² − 4αγ, E = βδ − 2αǫ and F = δ² − 4αζ, and define new variables U = DY + E and V = 2αX + βY + δ. Then V² − DY² − 2EY − F = 4α²X² + 4αβXY + 4αδX + 4αγY² + 4αǫY + 4αζ. It follows that (2) can be rewritten as V² − DY² − 2EY − F = 0. This is equivalent to DV² = (DY + E)² + DF − E². By substituting U for DY + E, and setting N = E² − DF, we can conclude that (2) can be written as U² − DV² = N. (3)
Let T be the set of solutions to (3). By Lemma 5.10, Lemma 5.12, Lemma 5.13 or Lemma 5.14 (dependent on whether D is positive and non-square, positive and square, or non-positive, and whether or not N = 0) we have that the corresponding solution language is accepted by a #-separated EDT0L system H, which is constructible in NSPACE(fg), where f is logarithmic in max(|D|, |β|, |βE − δD|, |E|), and g is linear in |D|. Let C be the extended alphabet of H, and let B be the finite set of endomorphisms of C* over which the rational control of H is regular. Using Lemma 5.10, Lemma 5.12, Lemma 5.13 or Lemma 5.14, we also have that |C| and max{|cφ| | c ∈ C, φ ∈ B} are bounded by h_1h_2, where h_1 is linear in max(|D|, |β|, |βE − δD|, |E|), and h_2 is exponential in |D|.
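The substitution U = DY + E, V = 2αX + βY + δ can be sanity-checked numerically: writing Q for the value of the original quadratic, the identity U² − DV² = E² − DF − 4αDQ holds for all integers, so whenever Q = 0 the pair (U, V) solves the Pell-type equation (3). A sketch (the residual function is illustrative):

```python
import random

def reduction_residual(a, b, g, d, e, z, X, Y):
    """U^2 - D*V^2 - (E^2 - D*F) + 4*a*D*Q for the substitution above;
    the reduction is correct precisely when this is always zero."""
    D = b * b - 4 * a * g
    E = b * d - 2 * a * e
    F = d * d - 4 * a * z
    U = D * Y + E
    V = 2 * a * X + b * Y + d
    Q = a * X * X + b * X * Y + g * Y * Y + d * X + e * Y + z
    return U * U - D * V * V - (E * E - D * F) + 4 * a * D * Q

random.seed(0)
assert all(
    reduction_residual(*(random.randint(-9, 9) for _ in range(8))) == 0
    for _ in range(1000)
)
print("reduction identity verified")
```

In particular, setting Q = 0 recovers exactly equation (3) in the variables (U, V).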
Note that DY = U − E and 2αDX = DV − βU + βE − δD. By Lemma 3.5, it follows that L is EDT0L, and accepted by an EDT0L system that is constructible in NSPACE(n → n^4 log n).
Using Lemma 2.18 to apply the free monoid homomorphism that maps b to a to a language described in Theorem 5.15 gives the following:

Corollary 5.16. The solution language to a two-variable quadratic equation in integers is EDT0L, accepted by an EDT0L system that is constructible in NSPACE(n → n^4 log n), with the input size taken to be the maximal absolute value of a coefficient.
From Heisenberg equations to integer equations
This section aims to prove that the solution language to an equation in one variable in the Heisenberg group is EDT0L. We do this by showing that a single equation E in the Heisenberg group is 'equivalent' to a system S E of quadratic equations in the ring of integers. The idea of the proof is to replace each variable in E with a word representing a potential solution, and then convert the resulting word into Mal'cev normal form. The equations in S E occur by equating the exponent of the generators to 0.
We start with an example of an equation in the Heisenberg group.
Example 6.1. We will transform the equation XYX = 1 (4) in the Heisenberg group into a system over the integers. Using the Mal'cev normal form we can write X = a^{X_1}b^{X_2}c^{X_3} and Y = a^{Y_1}b^{Y_2}c^{Y_3} for variables X_1, X_2, X_3, Y_1, Y_2, Y_3 over the integers. Replacing X and Y in XYX = 1 with these expressions, and manipulating the result into Mal'cev normal form, yields a normal form word that is trivial if and only if the exponents of a, b and c are all equal to 0. We thereby obtain a system (6) of three equations over Z. Note that the variables corresponding to the exponent of c in X and Y, namely X_3 and Y_3, only appear in linear terms in this system.
In this specific example it is not hard to enumerate the solutions in a somewhat reasonable manner. We can start by replacing occurrences of Y_1 and Y_2 in the third equation of (6) with −2X_1 and −2X_2, respectively. After simplifying, we can enumerate all values of (X_1, X_2, X_3) (across Z), and each such choice will fix the values of Y_1, Y_2 and Y_3, for which there will always exist a solution. Using this method we obtain the solution set to (6), and translating it back into the language of the Heisenberg group gives the solution set to XYX = 1.

The following definition allows us to transform an equation in a single variable in the Heisenberg group into a system of equations in the ring of integers. This is done by representing the variables as expressions in Mal'cev normal form, plugging these expressions back into the equation, and then converting the resulting word into Mal'cev normal form. After doing this, the exponents of the generators can then be equated to 0, which yields a system of equations in the ring of integers.
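The claim that every choice of (X_1, X_2, X_3) fixes the values of Y_1, Y_2 and Y_3 can be verified in the 3 × 3 integral matrix model of the Heisenberg group (an illustrative model with my own sign conventions; the paper works with the abstract Mal'cev presentation). Since XYX = 1 forces Y = X^{-2}, the first two coordinates of Y are −2X_1 and −2X_2, matching the substitution used above.

```python
def M(p, q, r):
    """Upper unitriangular matrix [[1,p,r],[0,1,q],[0,0,1]] as nested tuples."""
    return ((1, p, r), (0, 1, q), (0, 0, 1))

def mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3))
        for i in range(3)
    )

def inv(A):
    p, q, r = A[0][1], A[1][2], A[0][2]
    return M(-p, -q, p * q - r)

I = M(0, 0, 0)
for coords in [(1, 0, 0), (0, 1, 0), (2, -3, 5), (-4, 7, 1)]:
    X = M(*coords)
    Y = mul(inv(X), inv(X))          # Y = X^{-2}
    assert mul(mul(X, Y), X) == I    # X Y X = 1
    assert Y[0][1] == -2 * coords[0] # first coordinate of Y is -2*X_1
    assert Y[1][2] == -2 * coords[1] # second coordinate of Y is -2*X_2
print("XYX = 1 holds with Y = X^-2")
```

This makes concrete why the first two equations of the system are linear in the Y variables: they simply pin Y_1 and Y_2 to −2X_1 and −2X_2.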
Definition 6.2. If w = 1 is an equation in a class 2 nilpotent group, consider the system of equations over the integers defined by taking the variable X, and viewing it in Mal'cev normal form by introducing new variables: X = a^{X_1}b^{X_2}c^{X_3}, where X_1, X_2 and X_3 take values in Z.
The resulting system of equations over Z obtained by setting the expressions in the exponents equal to zero is called the Z-system of w = 1.
Example 6.3. The Z-system of the equation (4) from Example 6.1 is the system (6). We now explicitly calculate the Z-system of an arbitrary equation in one variable in the Heisenberg group.
Proof. We proceed as in Example 6.1. Replacing each occurrence of X in (7) with a^{X_1}b^{X_2}c^{X_3} gives
(a^{X_1}b^{X_2}c^{X_3})^{ǫ_1} a^{i_1}b^{j_1}c^{k_1} ⋯ (a^{X_1}b^{X_2}c^{X_3})^{ǫ_n} a^{i_n}b^{j_n}c^{k_n} = 1.
Since c is central, we can push all occurrences of c and c^{-1} to the right, and then freely reduce, thus showing that (7) is equivalent to
(a^{X_1}b^{X_2})^{ǫ_1} a^{i_1}b^{j_1} ⋯ (a^{X_1}b^{X_2})^{ǫ_n} a^{i_n}b^{j_n} c^{Σ_{r=1}^n (ǫ_rX_3 + k_r)} = 1. (9)
Note that for all x_1, x_2 ∈ Z, (a^{x_1}b^{x_2})^{-1} = b^{-x_2}a^{-x_1} = a^{-x_1}b^{-x_2}c^{x_1x_2}. Using this, together with the fact that c is central, gives that (9) is equivalent to
a^{ǫ_1X_1}b^{ǫ_1X_2} a^{i_1}b^{j_1} ⋯ a^{ǫ_nX_1}b^{ǫ_nX_2} a^{i_n}b^{j_n} c^{Σ_{r=1}^n (ǫ_rX_3 + k_r + δ_rX_1X_2)} = 1. (10)
We now push all a's in (10) to the left. The a's at the beginning do not need to move. The a's with exponent i_1 will need to move past b^{ǫ_1X_2}, thus increasing the exponent of c by i_1ǫ_1X_2. The a's with exponent ǫ_2X_1 will need to move past b^{j_1} and b^{ǫ_1X_2}, thus increasing the exponent of c by j_1ǫ_2X_1 + ǫ_1ǫ_2X_1X_2. This continues up to the a's with exponent i_n, which will need to move past all b's, thus increasing the exponent of c by i_n(Σ_{r=1}^n ǫ_r)X_2 + i_nΣ_{r=1}^{n−1} j_r. Overall, we have that (10) is equivalent to
a^{Σ_{r=1}^n (ǫ_rX_1 + i_r)} b^{Σ_{r=1}^n (ǫ_rX_2 + j_r)} c^{Σ_{r=1}^n (ǫ_rX_3 + k_r + δ_rX_1X_2) + Σ_{r=1}^n Σ_{s=1}^r (ǫ_rǫ_sX_1X_2 + ǫ_rX_1j_s) + Σ_{r=1}^n Σ_{s=1}^r (i_rǫ_sX_2 + i_rj_s)} = 1. (11)
Equating each of the exponents to 0 (as we are now in Mal'cev normal form) gives that the Z-system of (7) is
Σ_{r=1}^n (ǫ_rX_1 + i_r) = 0
Σ_{r=1}^n (ǫ_rX_2 + j_r) = 0
Σ_{r=1}^n (ǫ_rX_3 + k_r + δ_rX_1X_2) + Σ_{r=1}^n Σ_{s=1}^r (ǫ_rǫ_sX_1X_2 + ǫ_rX_1j_s) + Σ_{r=1}^n Σ_{s=1}^r (i_rǫ_sX_2 + i_rj_s) = 0.
We have now collected the results we need to prove the main theorem of this section.
Theorem 6.5. Let L be the solution language to a single equation with one variable in the Heisenberg group, with respect to the Mal'cev generating set and normal form. Then (1) The language L is EDT0L; (2) An EDT0L system for L is constructible in NSPACE(n → n 8 (log n) 2 ).
We consider two cases: when Σ_{r=1}^n ǫ_r = 0 and when Σ_{r=1}^n ǫ_r ≠ 0.
The first two of the above identities only involve constants. If one of these is not satisfied, then (12) has no solutions. In such a case, L is empty, and there is nothing to prove. So we suppose that these are satisfied. It follows that they are redundant, and the above system is equivalent to the third equation in it (with the addition that X_3 can be anything, regardless of X_1 and X_2). Note that this is a quadratic equation in integers, with variables X_1 and X_2. So by Theorem 5.15, the language K = {a^{x_1}#b^{x_2} | (x_1, x_2) is a solution to (14)} is EDT0L, and accepted by an EDT0L system that is constructible in NSPACE(n → n^4 log n) in terms of the coefficients of the equation. Note that |ǫ_r| = 1 and |δ_r| ≤ 1 for all r. In addition, as exponents of constants in (12), each sum Σ_{r=1}^n i_r, Σ_{r=1}^n j_r and Σ_{r=1}^n k_r is linear in our input. It follows that these coefficients are quadratic in our input, and so an EDT0L system for K is constructible in NSPACE(n → n^8 (log n)^2). Applying the monoid homomorphism that maps # to ε, followed by concatenating the above language with the EDT0L language {c}*, which is constructible in constant space, allows us to apply Lemma 2.18 to show {a^{x_1}b^{x_2}c^{x_3} | (x_1, x_2, x_3) is a solution to (14)} is EDT0L, accepted by an EDT0L system that is constructible in NSPACE(n → n^8 (log n)^2). Since this language is L, the result follows.
If either of the first two equations has no solution, then neither does (12), and so L is empty, and there is nothing to prove. We will therefore suppose that both of these equations admit a solution.
Since these are both single linear equations in one variable, they each admit a unique solution. Let x_1 be the solution for X_1, and x_2 be the solution for X_2. Plugging these into the third equation gives
Σ_{r=1}^n (ǫ_rX_3 + k_r + δ_rx_1x_2) + Σ_{r=1}^n Σ_{s=1}^r (ǫ_rǫ_sx_1x_2 + ǫ_rx_1j_s) + Σ_{r=1}^n Σ_{s=1}^r (i_rǫ_sx_2 + i_rj_s) = 0. (16)
Note that this is a linear equation in integers with single variable X_3. Hence by [18], Corollary 3.13 and Proposition 3.16, the language M = {c^{x_3} | x_3 is a solution to (16)} is EDT0L, and accepted by an EDT0L system that is constructible in non-deterministic quadratic space in terms of an input of length |α| + |ζ| + Σ_{r=1}^n |δ_rx_1x_2| + Σ_{r=1}^n Σ_{s=1}^r (|ǫ_rǫ_sx_1x_2| + |ǫ_rx_1j_s|) + Σ_{r=1}^n Σ_{s=1}^r (|i_rǫ_sx_2| + |i_rj_s|).
As sums of the lengths of constants in our original equation, |α|, |β|, |γ| and |ζ| are all linear in our input, and as the number of constants in our equation, n is also linear in our input. We have that |x_1| = |β/α| ≤ |β| and |x_2| = |γ/α| ≤ |γ| are both linear in our input. Since |ǫ_r| = 1 and |δ_r| ≤ 1 for all r, and the above expression is quartic in our input, it follows that M is constructible in NSPACE(n → n^4). Applying Lemma 2.18 to concatenate M with the singleton language {a^{x_1}b^{x_2}}, which is constructible in linear space, gives that {a^{x_1}b^{x_2}c^{x_3} | (x_1, x_2, x_3) is a solution to (14)} is EDT0L, and accepted by an EDT0L system that is constructible in NSPACE(n → n^4). Since this language is L, the result follows.
Department of Mathematics, Alan Turing Building, University of Manchester, M13 9PL
Email address: alex.levine@manchester.ac.uk
High apoptotic endothelial microparticle levels measured in asthma with elevated IgE and eosinophils
While asthma is considered an inflammatory-mediated airway epithelial and smooth muscle disorder, there is increasing evidence of airway capillary endothelial dysfunction associated with vascular remodelling and angiogenesis in some individuals with this condition. The inflammation is typically characterized as type-2 high (eosinophilic) vs type 2-low (neutrophilic and pauci-granulocytic); we hypothesized that the type-2 high group would be more likely to evidence endothelial dysfunction. As a biomarker of these processes, we hypothesized that nonsmokers with allergic asthma may have elevated plasma levels of endothelial microparticles (EMPs), membrane vesicles that are shed when endothelial cells undergo activation or apoptosis. Total and apoptotic circulating EMPs were measured by fluorescence-activated cell analysis in patients with allergic asthma (n = 29) and control subjects (n = 26), all nonsmokers. When the entire group of patients with asthma were compared to the control subjects, there were no differences in total circulating EMPs nor apoptotic EMPs. However, patients with asthma with elevated levels of IgE and eosinophils had higher levels of apoptotic EMPs, compared to patients with asthma with mildly increased IgE and eosinophil levels. This observation is relevant to precision therapies for asthma and highlights the importance of sub-phenotyping in the condition.
Introduction
While most attention on the pathogenesis of asthma has focused on the role of airway epithelial inflammation and smooth muscle hypertrophy [1], asthma is also associated with enhanced airway wall angiogenesis and microvascular remodeling [2]. These airway vascular abnormalities are associated with increased blood flow, microvascular permeability and edema, contributing to the influx of inflammatory cells and airway narrowing. The increase in airway wall vascularity likely reflects a local increase in angiogenesis, an active process involving endothelial cell activation, proliferation and apoptosis [3]. Based on this background, we hypothesized that these changes in bronchial wall blood vessels in patients with asthma may be reflected by increased levels of circulating endothelial microparticles (EMPs), membrane vesicles that are shed when endothelial cells undergo activation or apoptosis [4]. Interestingly, the data demonstrates that a subset of patients with asthma with elevated levels of blood immunoglobulin (Ig) E and eosinophils have increased numbers of circulating EMPs derived from endothelial cells undergoing apoptosis.
Study population
Subjects were recruited under protocols approved by the Weill Cornell Medicine Institutional Review Board and provided written informed consent, as previously described [5]. The asthma group (n = 29) were all lifelong nonsmokers. All had evidence of reversible airflow obstruction and/or positive methacholine challenge, and a history of allergy with elevated serum IgE (> 165 IU/ml) and/or elevated blood eosinophils (> 0.45 × 10³/μl) and/or positive skin prick test to at least 1 common allergen. The non-allergic, healthy controls (n = 26), all lifelong nonsmokers, had normal serum IgE and blood eosinophil levels. All subjects underwent clinical assessment and evaluation of plasma EMPs.
Endothelial microparticle analysis
As previously described [6], total numbers of circulating EMPs were characterized by fluorescence-activated cell analysis as 0.2 to 1.5 μm particles that were CD42b−CD31+, based on detection of CD42b (platelet glycoprotein Ib alpha chain) and CD31 (PECAM-1, expressed by endothelial cells). To assess whether the circulating EMPs were derived from "activated" or "apoptotic" endothelial cells, measurement of CD62E (E-selectin) was added to the analysis, with elevated CD42b−CD62E+/CD42b−CD31+ ratios (compared to control subjects) representing EMPs derived from "activated" endothelial cells and reduced ratios (compared to control subjects) representing EMPs derived from "apoptotic" endothelial cells. P values comparing parameters between the groups were calculated using a 2-sided Student's t-test with unequal variance.
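The per-sample classification logic described above can be sketched as follows. This is a minimal illustration, not the study's analysis pipeline: the function name and inputs are hypothetical, and event counts are assumed to have already been gated by flow cytometry.

```python
def classify_emp_origin(cd62e_events, cd31_events, control_mean_ratio):
    """Classify the dominant origin of one sample's circulating EMPs.

    cd62e_events:       count of CD42b-CD62E+ events
    cd31_events:        count of CD42b-CD31+ events (total EMPs)
    control_mean_ratio: mean CD62E+/CD31+ ratio in healthy controls

    Ratios above the control mean are taken to reflect EMPs shed by
    "activated" endothelial cells; ratios below it, an "apoptotic" origin.
    """
    ratio = cd62e_events / cd31_events
    origin = "activated" if ratio > control_mean_ratio else "apoptotic"
    return ratio, origin
```

For example, a sample with ratio 0.49 against a control mean of 0.85 would be labelled "apoptotic", matching the pattern reported for the highly allergic subgroup.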
Results
Control subjects were older and had a lower body mass index compared to the patients with asthma (p < 0.05, both comparisons, Table 1). Gender and ethnicity distributions were similar in both groups (p > 0.4, both comparisons). Serum IgE and blood eosinophils were significantly higher in patients with asthma vs control subjects (p < 0.002, both comparisons). Forced vital capacity (FVC) % predicted, forced expiratory volume in 1 s (FEV1) % predicted and FEV1/FVC were significantly lower, and % change in FEV1 post bronchodilator was significantly higher, in patients with asthma compared to controls (p < 0.03, all comparisons). When the entire group of patients with asthma was compared to the controls, there were no differences in total circulating EMPs or in activated or apoptotic EMPs (p > 0.6, both comparisons). However, since the process of increased bronchial wall angiogenesis and remodeling in asthma may occur in only a subset of patients [7], the study population of patients with asthma was divided into two groups based on IgE and eosinophil levels using the criteria: (1) a mildly allergic group with positive skin prick testing to ≥ 1 common allergen but lower IgE (≤ 165 IU/ml) and eosinophils (< 0.45 × 10³/μl), n = 12; and (2) a highly allergic group with positive skin prick testing to ≥ 1 common allergen and elevated IgE (> 165 IU/ml) and eosinophils (≥ 0.45 × 10³/μl), n = 17. These cutoffs were based on the upper limit of normal in our university hospital clinical laboratory, where the lab tests were run. Demographics and clinical characteristics, including inhaled corticosteroid usage, were similar between the asthma subgroups, and there was no difference in the number of patients with mild, moderate or severe asthma within each group (p > 0.2, all comparisons).
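The subgrouping criteria above can be written out as a small helper. The cutoff values are the ones stated in the text; the function name and the handling of mixed profiles are our own illustrative additions.

```python
def allergy_subgroup(ige_iu_ml, eos_10e3_ul, skin_prick_positive):
    """Assign a patient with asthma to the mildly vs highly allergic
    subgroup using the study's cutoffs: IgE 165 IU/ml and eosinophils
    0.45 x 10^3/ul. Both groups require >= 1 positive skin prick test."""
    if not skin_prick_positive:
        return None  # does not meet the allergic-asthma criterion
    if ige_iu_ml > 165 and eos_10e3_ul >= 0.45:
        return "highly allergic"
    if ige_iu_ml <= 165 and eos_10e3_ul < 0.45:
        return "mildly allergic"
    return "unclassified"  # mixed profile, not covered by the two criteria
```

Applied to the group means reported below (IgE 1023 IU/ml, eosinophils 0.8 vs IgE 66 IU/ml, eosinophils 0.2), the helper reproduces the highly vs mildly allergic split.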
Compared to the mildly allergic group, the highly allergic group (referring to the severity of the allergy, not the clinical asthma symptoms) had significantly higher IgE (1023 ± 1002 IU/ml vs 66 ± 21 IU/ml, p < 0.003) and eosinophils (0.8 ± 0.3 × 10³/μl vs 0.2 ± 0.1 × 10³/μl, p < 10⁻⁴). Interestingly, while the patients with asthma with elevated IgE and eosinophil levels had fewer total EMPs than the patients with asthma with mild levels of IgE and eosinophils (509 ± 215/µl vs 698 ± 269/µl, p < 0.005; Fig. 1), they had elevated levels of apoptotic EMPs, as defined by a CD42b−CD62E+/CD42b−CD31+ ratio lower than the mean observed in control subjects (0.49 ± 0.20 vs 0.85 ± 0.44, p < 0.03; Fig. 2), implying active ongoing pulmonary capillary apoptosis. Within the subgroups of patients with asthma, there were no significant differences in total or activated/apoptotic EMPs whether or not they were on steroid treatment, or whether they had mild, moderate or severe asthma (p > 0.05, all comparisons). There was no correlation of total or apoptotic EMP levels with any of the demographic or lung function parameters (p > 0.09 and r² ≤ 0.2, all parameters).
Discussion
There is increasing evidence that some patients with asthma have airway wall vascular changes associated with airway remodeling [2]. The vascular abnormalities include increased numbers of capillaries in the bronchial wall and vascular remodeling, a process associated with vasodilation, capillary leak and edema, contributing to airway constriction [2]. As a correlate of this process, the present study identifies a subgroup of highly allergic patients with asthma with elevated plasma levels of EMPs derived from endothelial cells undergoing apoptosis.
Microparticles released by activated or apoptotic cells, including the pulmonary capillary endothelium, have been proposed as biomarkers in other chronic airway diseases [8]. Elevated apoptotic EMP levels have been observed in smokers with low DLCO [6] and in chronic obstructive pulmonary disease [9]. In asthma, it has been reported that platelet-derived microparticles are significantly elevated compared to healthy subjects [10]. In the same study, EMPs from the entire asthma population were unchanged, consistent with our observations when the patients with asthma were not sub-grouped by allergy-related criteria. Angiogenesis and microvascular remodeling have been linked to inflammation in asthma [2], and many inflammatory mediators, including histamine, prostaglandins and leukotrienes, contribute to angiogenesis, vasodilation and microvascular leakage, potentially leading to endothelial cell activation, dysfunction or apoptosis.

Fig. 2 Comparison of activated plasma endothelial microparticle (EMP) levels in patients with elevated (red) vs mild (green) allergic asthma. As previously described [6], CD42b−CD62E+/CD42b−CD31+ ratios above normal controls are considered to be derived from "activated" endothelial cells, while ratios below normal controls are derived from "apoptotic" endothelial cells. Circles = asthmatics not on steroid treatment; triangles = asthmatics on steroid treatment.
Conclusions
In summary, assessment of plasma levels of apoptotic EMPs in nonsmoking patients with asthma suggests a novel endotype: highly allergic patients with elevated IgE and eosinophil levels have higher levels of apoptotic EMPs than mildly allergic patients with lower IgE and eosinophil levels, an observation that should be further investigated for its relevance to precision therapies for allergic asthma. Because endothelial cell activation or dysfunction could be considered a potential therapeutic target in highly allergic asthma, circulating apoptotic EMPs are candidate diagnostic or therapeutic biomarkers in this subgroup, which might speed the development of new therapies specifically targeting these patients.
A Structural Comparison Approach for Identifying Small Variations in Binding Sites of Homologous Proteins
A method for analyzing the protein site similarity was devised aiming at understanding selectivity of homologous proteins and guiding the design of new drugs. The method is based on calculating Cα distances between selected pocket residues and subsequent analysis by multivariate methods. Five closely related serine proteases, the coagulation factors II, VII, IX, X, and XI, were studied and their pocket similarity was illustrated by PCA clustering. OPLS-DA was then applied to identify the residues responsible for the variation. By combining these two multivariate methods, we could successfully cluster the different proteases according to class and identify the important residues responsible for the observed variation.
Introduction
Drug discovery includes several crucial steps before clinical testing, and optimizing this process can save considerable time and effort. Identifying and mitigating sub-optimal compound properties before the clinical phases is therefore highly desirable [1]. Next to lack of efficacy, safety issues are the leading cause of drug rejection in late-stage clinical trials, which represent an estimated 70% of the total cost of a drug's clinical development [2]-[4]. Drug promiscuity has to some extent been correlated with ligand flexibility, although mainly with structural similarity between targets [5] [6]. The problem of structural similarity can furthermore be broken down into two parts: the identification of binding pockets and the aspect of protein similarity. Numerous methods have been developed for identifying druggable targets and predicting their binding pockets. These often involve geometry-based [7], energy-based [8], or physicochemical property-based methods [9], or a combination of these [10], and have been reviewed in detail elsewhere [11] [12]. In this study we focus on the issue of structural protein pocket similarity. Early mitigation of drug target selectivity issues, by inspection of identified analogous drug targets, can inform early compound modification and more relevant molecular design [1].
There are three key steps involved in similarity searches: representation of the binding site, comparison, and scoring. Since proteins can have varying overall structure and still maintain highly conserved function and binding site composition, such comparative methods usually focus on the binding site and more or less neglect the remainder of the protein [13]. Binding site representation is a crucial step in the process of similarity detection. The representation can, for example, be made using the atomic positions of the Cα carbons of the residues in the site. It can be complemented with the Cβ carbons, and even pseudo-atoms representing an amino acid residue. The latter is done in order to incorporate structural information while ensuring a simplified model, compared to the more complex and time-consuming inclusion of the exact positions of all atoms in the representative residues. The binding site can moreover be represented by surface patches [14], pharmacophore features [15], or by physicochemical properties [16]. A simplified representation of the site is again favorable in order to reduce the complexity of the comparison. In fact, Feldman and Labute have shown that Cα carbons give adequate information to enable efficient comparison of binding sites and that use of Cβ (and Cγ) carbons generates essentially identical models [17]. The use of Cα carbons hence forms the basis of the present study.
Binding site comparison also involves finding the best superposition of the sites involved. This is dependent on how the site is represented, the similarity metric, and the algorithm used for the comparison. Algorithms can for example include alignment methods such as iterative searches [18], geometric matching [19], geometric hashing [20], and clique detection [9] [21], which are not always favorable because of the uncertainty that comes with the alignment. Principal component analysis (PCA) [22] has been used in several fields for interpreting different datasets and analyzing variations in the data. It has also been applied when comparing protein cavities, such as in the GRID consensus principal components (GRID/CPCA) approach, which does not rely on alignments [23]. GRID/CPCA is applied to improve ligand selectivity towards a particular target by identifying potential modifications in the ligand. However, this similarity-based method is dependent on the availability of structural data of proteins with an active molecule present in the cavity. PCA has recently also been implemented to characterize and map the cavities of proteins without ligands present [24], and to examine the dynamics of cavity geometry evolution by selecting structures from molecular dynamics (MD) simulations [25], in this way dealing with the protein flexibility issue.
To our knowledge, our current study is the first time orthogonal partial least squares discriminant analysis (OPLS-DA) [26] has been applied to explore binding sites. The present method is intended as an initial step when comparing similar targets, in order to identify even subtle variations in homologous binding sites. To demonstrate the approach, we evaluated coagulation factors II, VII, IX, X, and XI, which all share high structural similarity, aiming at discriminating between the different classes. The model is based on calculations of distances between the Cα atoms of selected amino acid residues located in the vicinity of the catalytic site. By applying PCA, the retrieved distance data could be visualized in an easily interpretable fashion, separating the protein classes into clusters based on cavity backbone variation. Furthermore, OPLS-DA was implemented to derive more accurate loadings, compared to the PCA-generated ones, in order to identify the specific amino acid residue(s) responsible for the observed clustering in the PCA. Our aim is to develop a method that provides insight regarding similarity between protein cavities in order to guide molecular modeling or drug design based on 3D structures. The strength of the present approach is its straightforwardness regarding both describing and comparing the included target sites, which is of value in the early stage of drug design.
Selection of 3D Structures
A set of 86 protein structures of the five coagulation factors II, VII, IX, X, and XI was used in the study, as obtained from the Protein Data Bank (www.rcsb.org) [27], and are named according to their PDB codes. These five proteins were mainly selected because they share high structural similarity. Structures having a covalently attached ligand were discarded, as it could interfere with the calculations. Structures from different organisms (human, bovine, and mouse) were included, as well as dimers, trimers, and tetramers. Both apo and holo forms of the enzymes were included (see Table S1, Supporting Information).
Selection of Amino Acid Residues (AAs)
To incorporate the vicinity of the catalytic site, AAs representing all three binding pockets (S1, S2-Sn, and S1'-S2') were selected (Figure 1), resulting in a total of seven initial AAs. These include the catalytic triad His57, Asp102 and Ser195 (thrombin numbering), and the aspartate residue Asp189, located at the bottom of the S1 pocket, which is responsible for the specificity of the proteases [28]. These four residues were furthermore complemented by Leu41, located in the S1'-S2' region, Trp215 in S2 and Tyr228 in the S1 pocket. The selection parameter was set so as to add two AAs before and two after each respective initial residue; e.g., for His57 amino acids 55, 56, 58, and 59 were also included in the distance calculations. This parameter was set in order to sample more of the sites, but one can choose to include as many or as few AAs as desired. In the current study, a total of 35 AAs were thus included, which efficiently span all three regions of the binding pockets.
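The ±2-residue window described above can be generated programmatically. The following sketch (helper name is ours; residue numbers are treated as plain integers, so insertion codes in real PDB numbering are ignored) reproduces the 35 selected positions:

```python
def expand_residues(anchors, window=2):
    """Expand each anchor residue number by +/- window positions,
    as in the study (e.g. His57 -> 55, 56, 57, 58, 59), deduplicated
    and sorted."""
    selected = set()
    for a in anchors:
        selected.update(range(a - window, a + window + 1))
    return sorted(selected)

# The seven initial residues named in the text (thrombin numbering)
anchors = [41, 57, 102, 189, 195, 215, 228]
residues = expand_residues(anchors)
```

Since none of the seven 5-residue windows overlap, this yields exactly 35 positions, matching the count stated in the text.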
Calculation of Distances
The Cα atom coordinates of the selected AAs were obtained from their respective PDB files. The distances between the Cα atoms were calculated in an all-against-all fashion (Figure 2), creating a descriptor matrix (X) with all distances given in Ångström (Å).
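The all-against-all distance calculation can be sketched with NumPy; one descriptor row per structure is obtained by flattening the upper triangle of the symmetric distance matrix (the function name is ours; for 35 residues this yields 595 distances per structure):

```python
import numpy as np

def distance_descriptor(coords):
    """All-against-all Calpha distances (same units as the input
    coordinates, i.e. Angstrom for PDB files), flattened to the
    upper triangle: one descriptor row for the matrix X."""
    coords = np.asarray(coords, dtype=float)   # shape (n_residues, 3)
    diff = coords[:, None, :] - coords[None, :, :]
    dmat = np.sqrt((diff ** 2).sum(axis=-1))   # full symmetric matrix
    iu = np.triu_indices(len(coords), k=1)
    return dmat[iu]                            # K = n*(n-1)/2 distances
```

Stacking one such row per PDB structure gives the N × K descriptor matrix X used in the multivariate analysis.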
Multivariate Analysis
Principal component analysis (PCA) [29] was used to compress the systematic variation in the descriptor matrix (X), containing N observations (86 protein structures) and K variables (Cα-Cα distances), into two low-dimensional matrices T (scores matrix) and P' (loadings matrix), as generally illustrated by Equation (1):

X = TP' + E (1)
T is composed of the principal components (t1, t2, ..., tn) and represents the variation between the different coagulation factors, while P' contains the loadings of the components (p1, p2, ..., pk), represents the variations in the calculated distances, and defines the orientation of the PC plane. In this way the combination of T and P' defines the PC model and the overall variation of the descriptor matrix by orthogonal factors. The residual matrix E, containing noise, is discarded from the PC model.
The first extracted PC accounts for the largest variance in the data. To this, additional PCs are added, each orthogonal to the previous ones, to improve the approximation of the data. Each PC was further evaluated based on its eigenvalue, multiple correlation coefficient (R²), and cross-validation (Q²) [22].
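The decomposition in Equation (1) on a mean-centred, un-scaled matrix can be sketched via SVD. This is a generic PCA, not the SIMCA implementation used in the study, and the function name is ours:

```python
import numpy as np

def pca(X, n_components):
    """PCA by SVD on the mean-centred (un-scaled) descriptor matrix,
    mirroring X = T P' + E: returns scores T, loadings P, and the
    fraction of total variance explained by each component."""
    Xc = X - X.mean(axis=0)                      # centre, do not scale
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :n_components] * s[:n_components]   # scores matrix
    P = Vt[:n_components].T                      # loadings matrix
    explained = (s ** 2) / (s ** 2).sum()
    return T, P, explained[:n_components]
```

With all components retained, T P' exactly reconstructs the centred data; truncating to a few components discards the residual E, as in the PC model above.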
In order to identify the amino acid residue(s) responsible for the obtained differences in the scores plot, orthogonal projections to latent structures (OPLS) was applied to remove the Y-orthogonal variation from X, as described in Equation (2) [30]:

X = T_P P_P' + T_O P_O' + E (2)

T_P is the predictive score matrix and P_P' is the predictive loading matrix for X. The Y-orthogonal score and loading matrices are denoted T_O and P_O', respectively. E comprises the residual matrix. The method can in our case be defined as OPLS discriminant analysis (OPLS-DA) because the response vector, Y, was set as a discriminant defining each protease class in order to find the discrimination of that particular class against all others. The model was further evaluated by cross-validation after each generated orthogonal component.
All multivariate calculations were performed in the SIMCA 13.0 software [29] [30]. All data were subjected to centering but remained un-scaled before any multivariate calculations.
Target Protein Preparation
In the present study a total of 86 protein structures of the five main coagulation factors FII, FVII, FIX, FX, and FXI were analyzed. The protein pockets were sampled by calculating distances between the Cα atoms of selected AAs.
A first PCA including all coagulation factors was calculated, resulting in five main outliers, 3sqe, 3sqh, and the two dimer structures of 2afq, all from FII, and 1jbu from FVII (Figure 3), plus three additional possible outliers, 3edx (trimer), 3hk3 (monomer), and 3hk6 (dimer) of FII. By inspection of the loadings, the distances generating the outliers could be explained. For 3sqe and 3sqh the catalytic residue Ser195 was mutated to alanine, which gave rise to the deviation [31]. The two dimer structures of 2afq were found to be altered in several areas of the protein structure, leading to overall structural differences. Taking a closer look at the crystallization data, it was found that 2afq was expressed in the absence of coordinating Na+. The Na+-free environment thus induces a conformational change that ultimately blocks the S1 and S2 pockets, making it stand out from the proteins expressed in a Na+-containing environment [32]. The structure of 1jbu was found to be in complex with the exosite-binding inhibitory peptide A-183. This peptide occupies the binding site and thereby alters its structure, whereby 1jbu displays large differences in the binding region compared to other serine proteases, in particular in the loop that defines the S1 pocket [33]. The identification of these five outliers was enabled by PCA using data centering without scaling, making it possible to identify deviating data visualized as outliers in the scores plot. To obtain more relevant resolution, the five first outliers were excluded from all subsequent calculations. The second PCA was calculated keeping 3edx, 3hk3, and 3hk6 in the model (Figure 4(A)) since they appeared closer to the remaining clusters. These three structures all share the mutations of Trp215 and Glu217 to Ala, which were two of the AAs included in the Cα distance calculations. The mutated structures assume a conformation similar to the inactive form of thrombin recently shown by Gandhi et al., which explains the deviation in the PCA [34]. Removing these from the model (Figure 4(B)) did not result in any significant difference from the previous PCA (Figure 4(A)) with regard to the clustering of the remaining factors. However, in the OPLS scores plot, the structures of FII were divided into two groups, where the mutated structures all clustered into one.
Multivariate Analyses for Protein Cavity Characterizations
The recalculated PCA model, after excluding all above-mentioned structures seen as outliers, could successfully identify and cluster the factors by class using a 4-component model (Figure 4), and the cross-validation, after PC4, gave a cumulative Q² value of 0.75 (Table 1). The rather small difference between R² and Q² indicates a valid model. The relatively large eigenvalues further indicate that there was substantial systematic variation present in the dataset. Aside from the clustering of the individual factors, there also seems to be a division into two groups (Figure 4), where FIX and FX make up one of the two. Their similarity is described in the literature as sharing the same homology with protein C [35], which explains their resemblance. When inspecting also the third component of the PCA, further clustering was revealed for all factors, as seen in the 3D scores plot (Figure 5).
In addition to PCA, OPLS-DA was applied to identify unique variations in the distances of the protein binding cavities for each protease class. By removing the non-correlated variation from X, the predictive model complexity is reduced, which improves the interpretation of the resulting predictive component and gives a more accurate correlation between the factors. All coagulation factors were discriminated mainly by distance variations in the S1 pocket, which gave rise to the main variance seen in the original PCA (Figure 4(B)). When assigning FII as the discriminant Y, in addition to S1, variation was also found in the hydrophobic proximal S1'-S2' pocket (Figure 6). The largest variation obtained from the loadings indicates that residues Glu39 and Leu40 are responsible for the discrimination in S1'-S2'. It was also found that, among others, the distance to His230 in the S2-Sn pocket deviates from the rest of the proteases.
In the case of FIX, the OPLS loadings indicate a variation in loop 2, residue Glu217, as the main contributor to the discrimination of this protein. This agrees with the fact that FIX possesses a glutamate at position 219 in loop 2, close to 217, whereas the other serine proteases have glycine in that same position [36]. Additionally, since this residue (Glu219) is located right at the entrance of the S1 pocket, it may very well be the cause of the difference seen in the PCA.
It has earlier been shown by Shirk et al. that FVII differentiates itself from FII and FX in both the S2-Sn and S1'-S2' pockets [37], which was also confirmed in this study. FVII contains large distance variations in Asp100, Arg101, Asp102, and Ala104, which are all located in the S2-Sn region, and in Leu41 and Glu39, located in the S1'-S2' region. FX was discriminated in the loop 1 region, which constitutes part of the S1 pocket, and in the S2'-Sn' pocket by the same residues as for FVII. Lastly, FXI also displayed a large region of possible selectivity sites where, aside from the S1 pocket, variations were found in distances located in both the S2-Sn and the S1'-S2' pockets.
Method Example Rationale for Early Selectivity Identification
In order to exemplify how one could apply the described approach in 3D-structure-based drug design, and achieve drug selectivity, we examine the differences in volumes and side chains of the amino acids important for discriminating FII from the rest of the coagulation factors. The most important deviation for FII is located in the S1 pocket. This pocket appears larger in FII than in the other factors, because Arg187 is located further away from most of the included AAs. This opens up the possibility to design a ligand that reaches further down into the S1 pocket of FII, whereas it would be sterically hindered from binding to the other factors. In addition, residue 39 was identified as a possible "selectivity filter", as it is negatively charged in FII (Glu), as opposed to FXI, which has an amino acid with a positively charged side chain (Arg), or FVII, where a hydrophobic alanine occupies that position. In this case, a ligand with a positively charged group pointing towards Glu39 would provide strong binding in FII. In order to target Leu40, we can consider halogenated ligands, as these have a high propensity to interact with hydrophobic or hydrogen-bonding amino acids such as Leu, Phe, Ser and Thr [38]. Based on the above analysis, we have thus easily identified three possible sites where drug design could be directly applied in order to selectively target FII.
Conclusions
PCA and OPLS-DA were applied to analyze the variation between similar protein structures. The approach was able to group the structures into clusters according to the common classifications of serine proteases. Our results show that the Cα coordinates contain sufficient information to distinguish between subtle variations in the proteases with good accuracy. In this case, further details of exact side-chain conformation are not needed to make the initial comparison. Adequate information to group the proteases appears to be encoded in the protein backbone, as single mutations such as that of Ser to Ala in 3sqe and 3sqh clearly stand out in the PCA/OPLS analyses. Although this may be seen as a limitation, in the sense that the method may be overly sensitive to small variations, it is important to remember that the method is intended for precisely this purpose, namely to identify and expose the smallest variations in structures.
The proposed method complements already existing methods for the analysis of closely related binding sites, and at the same time adds the possibility of rapidly and readily identifying the residues responsible for variations found in the analysis. The advantage of the present approach is that it can be applied even if only backbone information is available, and since it is a structure-based method, it is also independent of the availability of active ligands.
Providing detailed knowledge about variations in cavities can assist when selecting targets for molecular modeling studies or when designing ligands, be they highly selective or highly promiscuous compounds, based on 3D structures of targets as exemplified herein. Thus an early intervention, using the present approach to identify possible sources of compound selectivity, might provide crucial benefits during subsequent drug discovery phases.
Figure 2. Schematic figure of the protein active-site comparison method. Input PDB files are used for calculating the distances between the 3D coordinates of AAs. The differences in distances are illustrated by clusters plotted in a PCA.
Figure 3. PCA scores plot of t1 vs t2 based on distances between the selected AAs. The colors represent the different coagulation factors: II (blue), VII (yellow), IX (red), X (turquoise) and XI (purple).
Figure 4. PCA score plots (A) including and (B) excluding the structures 3edx, 3hk3, and 3hk6. The first component PC1 explained 40% of the data (R² = 0.40); four PCs together explained 79%. Cross-validation was successful since the model was able to accurately predict the missing data. R² and Q² denote the multiple correlation coefficients and cross-validation values, respectively.
Figure 5. 3D scores plot of PCA model B, excluding outliers. The coloring scheme is the same as in Figure 3 and Figure 4.
Table 1. Statistical values for the first four principal components (PCs).
ENT Manifestations of Sjögren’s Syndrome: A Comprehensive Narrative Review
Background: Sjögren's Syndrome (SjS) is a chronic autoimmune disorder predominantly affecting exocrine glands, leading to symptoms such as keratoconjunctivitis sicca and xerostomia. This review focuses on the ear, nose, and throat (ENT) manifestations of SjS, which significantly impact patient quality of life and pose diagnostic challenges. Aim: The review aims to consolidate current knowledge on the ENT manifestations of SjS, exploring pathophysiological underpinnings, clinical presentations, and treatment strategies, while addressing the diagnostic challenges associated with the disease. Review Summary: ENT manifestations in SjS include nasal dryness, recurrent sinusitis, otitis, and laryngeal dryness, which may precede other systemic manifestations, aiding in early diagnosis and management. This review highlights the importance of recognizing these symptoms for timely intervention, which can significantly improve disease prognosis. Future Implications: Understanding ENT manifestations can enhance multidisciplinary management approaches and foster development in diagnostic and therapeutic strategies, potentially improving patient outcomes and quality of life. Clinical Policy and Development: Enhanced awareness and training on the ENT aspects of SjS are recommended for healthcare professionals. Development of more sensitive diagnostic tools and personalized treatment plans could also address the variability in symptom presentation and response to treatment.
Introduction
Sjögren's Syndrome (SjS) is a systemic autoimmune disorder primarily characterized by the infiltration of lymphocytes into the exocrine glands, leading to significant dryness primarily in the eyes and mouth [1]. This disease typically manifests as keratoconjunctivitis sicca (dry eyes) and xerostomia (dry mouth), but it can affect multiple other organ systems, presenting a variety of symptoms. It is more prevalent in middle-aged women but can occur in any demographic. Diagnosis is often confirmed through various serological tests and biopsy of the salivary gland. The ENT manifestations of SjS deserve particular attention because of their impact on patients' quality of life and the potential diagnostic challenge they present. Early ENT symptoms often precede other systemic manifestations and can lead to earlier diagnosis and management of the disease. Recognizing these symptoms, which include nasal dryness, recurrent sinusitis, otitis, and laryngeal dryness, is crucial for timely intervention and can significantly affect disease prognosis [2]. Thus, a comprehensive understanding of these manifestations not only aids in the comprehensive care of patients but also enhances the multidisciplinary approach necessary for managing SjS effectively.
Understanding and addressing the ENT manifestations of SjS are therefore essential components of both the diagnostic process and the holistic treatment approach needed for these patients.
The aim of the review was to thoroughly synthesize existing research on the ENT manifestations of Sjögren's Syndrome, examining the pathophysiological underpinnings, clinical presentations, challenges in diagnosis, and current treatment strategies. The review endeavors to answer key questions about the prevalence and impact of ENT symptoms on patient quality of life, the diagnostic hurdles that complicate early detection and accurate diagnosis, and the effectiveness of both established and novel treatments in managing these symptoms. Addressing these questions will not only deepen the understanding of SjS within the medical community but also enhance patient care by highlighting crucial aspects of ENT involvement in this complex autoimmune disorder.
Methodology
The methodology of this narrative review commenced with a systematic search of peer-reviewed literature on the ENT manifestations associated with Sjögren's Syndrome.Key databases such as PubMed, Scopus, Web of Science, and Google Scholar were utilized for the search.
Relevant studies were identified using a combination of keywords including "Sjögren's Syndrome", "ENT manifestations", "otolaryngological symptoms", "xerostomia", "dry mouth", "keratoconjunctivitis sicca", "hearing loss", "sinusitis", and "laryngitis". The search focused on literature published from January 2014 to the present to ensure inclusion of the most recent advancements and findings. Only articles published in English were considered. Methodological quality was appraised with design-appropriate tools for quantitative studies, including randomized controlled trials, while qualitative studies were evaluated using CASP (Critical Appraisal Skills Programme) checklists. Additionally, the review included an assessment of potential biases within the studies and the evidence base to ensure the reliability and applicability of the review's conclusions.
Epidemiology of ENT Manifestations in Sjögren's Syndrome
Prevalence and Incidence Rates
In India, Sjögren's Syndrome, particularly primary Sjögren's syndrome, has historically been reported less frequently compared to Western countries. However, recent studies have begun to provide more data. It has been observed that even in specialized clinics for rheumatic diseases, the prevalence of SjS among all patients is quite low, approximately 0.5% [3]. This suggests that the disease may be underdiagnosed or possibly underreported, reflecting a significant discrepancy in recognition or diagnostic criteria compared to global data.
Demographic Patterns
The demographic patterns of SjS in India show some unique characteristics. The disease tends to present at an earlier age compared to Western populations, nearly a decade earlier. Common presentations include dry eyes, dry mouth, and systemic features similar to those observed internationally, with delayed complications like renal tubular acidosis sometimes leading to diagnosis. The gender distribution remains consistent with global trends, predominantly affecting women [3].
These findings highlight the importance of developing localized diagnostic criteria and increasing awareness among healthcare professionals to improve the identification and management of SjS in India. Enhanced training and resources could lead to more accurate epidemiological assessments and better patient outcomes.
Pathophysiology of Sjögren's Syndrome
Autoimmune Process in Sjögren's Syndrome
Sjögren's Syndrome is an autoimmune disorder characterized by the immune system's attack on its own exocrine glands, primarily the salivary and lacrimal glands. This autoimmune reaction leads to chronic inflammation and eventual destruction of glandular cells, causing decreased production of saliva and tears. The pathophysiology involves multiple immune pathways, including both the innate and adaptive immune systems. Key features include the activation of T and B lymphocytes, the presence of autoantibodies, and the upregulation of proinflammatory cytokines which further drive the inflammatory process in glandular tissues [4].
Specifics of ENT Region Involvement
In Sjögren's Syndrome, the ENT manifestations primarily arise from the dysfunction of the salivary glands (part of the exocrine system), which leads to xerostomia (dry mouth). The dryness can exacerbate dental caries, oral candidiasis, and difficulty in swallowing and speaking. The pathophysiological basis for these symptoms includes both the direct effects of immune-mediated glandular destruction and functional impairment due to cytokine-induced disruptions in salivary secretion.
Exocrine Gland Dysfunction Related to ENT Manifestations
The exocrine gland dysfunction in SjS is predominantly due to lymphocytic infiltrates that disrupt the glandular structure and function, leading to reduced secretion. This process is compounded by the presence of autoantibodies against ribonucleoproteins (anti-Ro/SSA and anti-La/SSB), which are thought to further interfere with glandular cell function. Additionally, there is evidence to suggest that cytokines like IL-4 might play a crucial role in glandular dysfunction, influencing the severity and progression of symptoms such as dry mouth and contributing to the overall pathogenesis of the syndrome [5].
These insights into the pathophysiology of Sjögren's Syndrome, particularly in the context of India, highlight the complex interplay between immune dysregulation and glandular dysfunction that underpins the various ENT symptoms observed in this condition. Further research focused on local demographic and environmental factors could provide deeper understanding and better management strategies for those suffering from this syndrome.
Ear Manifestations: Hearing Disturbances and Their Causes
In Sjögren's Syndrome, ear manifestations primarily include sensorineural hearing loss and otalgia. The hearing disturbances are often attributed to inflammatory processes that affect the nerves or can be secondary to the dryness affecting the mucosal linings of the ear canal and middle ear. Sensorineural hearing loss in SjS might be linked to autoimmune inner ear disease, a condition where the body's immune response mistakenly targets the inner ear structures [6].
Nasal and Sinus Manifestations: Dryness, Crusting, and Recurrent Sinus Infections
Patients with SjS frequently experience nasal and sinus symptoms due to the dryness of the nasal mucosa. This dryness can lead to crusting, nasal congestion, and an increased vulnerability to recurrent sinus infections. The lack of adequate mucosal moisture and the consequent thickening of mucus impair the natural sinus drainage and create an environment prone to bacterial growth [7].
Oral and Pharyngeal Manifestations: Xerostomia, Dysphagia, and Salivary Gland Dysfunction
Xerostomia, or dry mouth, is one of the most common and early presenting symptoms of Sjögren's Syndrome. It results from the dysfunction of salivary glands due to lymphocytic infiltration, which significantly reduces saliva production. This reduction in saliva can lead to complications such as difficulty in swallowing (dysphagia), increased dental caries, and oral candidiasis. The decrease in saliva also affects taste and can cause a burning sensation in the mouth [8].
Laryngeal Manifestations: Hoarseness, Dry Cough, and Voice Changes
The laryngeal involvement in SjS includes hoarseness, a persistent dry cough, and changes in the voice. These symptoms are primarily due to the dryness affecting the laryngeal mucosa and vocal cords, making them less flexible and more prone to irritation. In some cases, laryngopharyngeal reflux (LPR) may exacerbate these symptoms by causing further inflammation and irritation in the larynx and pharynx. These manifestations highlight the diverse and significant impact of SjS on the ENT region, affecting multiple aspects of patients' quality of life and requiring a comprehensive approach to management and treatment.
Diagnostic Challenges in Sjögren's Syndrome
Early Detection and Differential Diagnosis
Early detection and accurate differential diagnosis of SjS present significant challenges due to the often subtle and nonspecific nature of its symptoms, which can be easily attributed to other causes such as aging or medication side effects. This can lead to diagnostic delays and mismanagement of the condition. SjS is often identified during differential diagnoses involving multiple exocrine manifestations across various organ systems, making it crucial for a multidisciplinary approach to recognize and differentiate it from similar disorders [9].
Role of Imaging and Biopsies in Diagnosis
Imaging techniques such as salivary gland ultrasonography (SGUS) and sialography play critical roles in diagnosing SjS by identifying gland abnormalities indicative of the disease. SGUS is particularly valuable due to its non-invasive nature and ability to detect early changes in salivary gland structure. Despite its advantages, the technique requires experienced clinicians for accurate interpretation and may not always be conclusive, necessitating further investigation through minor salivary gland biopsies which remain the gold standard for diagnosis [10].
Diagnostic Criteria Specific to ENT Manifestations
ENT manifestations specific to SjS, such as dry mouth and dry eyes, are critical components of the diagnostic criteria, often assessed through tests like Schirmer's test for tear production and unstimulated salivary flow rate measurement. The American-European Consensus Group criteria also integrate these assessments, underscoring their importance in the diagnostic process. However, the variability in clinical presentation and overlap with other conditions can complicate the application of these criteria, requiring a careful and comprehensive evaluation to confirm the diagnosis [9]. These diagnostic challenges highlight the need for heightened awareness among healthcare providers and the adoption of a multidisciplinary approach to accurately identify and manage SjS effectively.
Conservative Management Strategies
Conservative management of SjS focuses primarily on alleviating the sicca symptoms, which include dry mouth and dry eyes. Key strategies include meticulous oral hygiene, use of saliva substitutes, and intensive eye care with lubricating eye drops. Environmental modifications such as humidifiers in the home or workplace and avoiding medications that exacerbate dryness are also recommended to improve daily comfort and reduce symptoms.
Pharmacological Treatments
Pharmacological treatments aim to manage both the symptoms and the underlying autoimmune processes. Muscarinic agonists such as pilocarpine and cevimeline are widely used to stimulate saliva and tear production. For systemic symptoms, anti-inflammatory medications, including nonsteroidal anti-inflammatory drugs (NSAIDs) and corticosteroids, are used. In cases of severe glandular or extraglandular manifestations, immunosuppressants such as hydroxychloroquine, methotrexate, and cyclophosphamide may be employed. Additionally, recent advances have introduced biologics like Rituximab for refractory cases, particularly when there are serious systemic complications [11].
Surgical Interventions
Surgical interventions are relatively limited in the management of SjS but may include procedures such as punctal occlusion to manage severe dry eye. This procedure involves closing the tear ducts to conserve tears and improve eye moisture. In severe cases of dental decay or loss resulting from xerostomia, dental implants or reconstructive surgery may be considered to restore function and appearance [12].
Multidisciplinary Approach to Management
A multidisciplinary approach is critical for effectively managing SjS due to its systemic nature and the variety of organ systems it affects. Collaboration among rheumatologists, dentists, ophthalmologists, and other specialists is essential to address the comprehensive needs of patients. This collaborative approach ensures that all aspects of the disease, from ocular and oral symptoms to systemic manifestations, are adequately managed and treated [13].
These management strategies collectively aim to reduce symptom burden, manage the underlying autoimmune activity, and improve quality of life for individuals living with Sjögren's Syndrome.
Long-term Outcomes of ENT Manifestations
The long-term outcomes for patients with SjS concerning ENT manifestations generally involve persistent and chronic symptoms such as xerostomia (dry mouth) and keratoconjunctivitis sicca (dry eyes). These conditions are usually progressive with the potential to significantly impair daily functions and increase the risk for secondary complications, including oral and ocular infections. The chronic nature of these symptoms necessitates ongoing management strategies to mitigate their impact. Patients may experience varying degrees of symptom severity over time, influenced by both the progression of the disease and the effectiveness of the treatment regimens employed [14].
Quality of Life Considerations
The quality of life (QoL) for patients with SjS, especially those with pronounced ENT manifestations, can be substantially affected. Chronic dryness can lead to difficulties in speaking, eating, and swallowing, which in turn impact social interactions and personal well-being. Moreover, the persistent discomfort and the need for continual management of symptoms (such as using artificial saliva or tear substitutes) can lead to psychological distress. Studies have shown that the health-related quality of life in SjS patients is often lower compared to the general population, with these effects being more pronounced in patients experiencing greater symptom severity or those with additional systemic manifestations of the disease. Managing these symptoms effectively and improving the quality of life for these patients requires a tailored approach, considering both medical treatments and supportive care to address the physical discomfort and psychological impacts of living with SjS.
Understanding the long-term outcomes and the quality-of-life impacts is crucial for optimizing the management strategies for Sjögren's Syndrome, aiming not only to treat the physical symptoms but also to support the overall well-being of the patients.
Conclusion
Sjögren's Syndrome presents significant diagnostic and therapeutic challenges due to its complex presentation and impact on various exocrine glands, leading to persistent ENT manifestations such as xerostomia and keratoconjunctivitis sicca. These chronic conditions profoundly affect patients' quality of life, necessitating comprehensive management strategies that address both the physical symptoms and the broader psychosocial impacts. Effective management requires a multidisciplinary approach to ensure that both the symptomatic relief and the emotional and social well-being of the patients are adequately supported. Long-term outcomes vary, largely dependent on the severity of symptoms and the effectiveness of ongoing therapeutic interventions. Thus, understanding and addressing the multifaceted nature of SjS is essential for improving overall patient outcomes and enhancing quality of life.
Limitations:
The management of SjS faces limitations due to the variability in disease presentation and the lack of universally effective treatments that target the underlying autoimmune processes. Diagnostic challenges persist with the need for more sensitive tools to detect early glandular involvement. Additionally, the
H.I. Farrukh, International Journal of Medical and Biomedical Studies, p. 11
Social, health, and working conditions among hospital workers
Objectives: to compare social, health and working conditions among nursing, nutrition and hospital cleaning service workers. Methods: a quantitative cross-sectional, descriptive and correlational study carried out in a public hospital in the countryside of São Paulo, including workers from nutrition, cleaning and nursing services. Data collection occurred during working hours. Validated questionnaires and Karasek's Demand-Control model were used to assess psychosocial dimensions, and the Self-Reporting Questionnaire-20 to measure common mental disorders. Professional groups were used as the dependent variable, and chi-square tests were used for associations, with significance set at p < 0.05. Results: 227 workers participated. Positive associations were found between professional groups and socioeconomic, health and work characteristics. Conclusions: social, health, and working conditions differ between the professional groups studied.
INTRODUCTION
The most radical changes in the world of work occurred in the period of industrialization, through different forms of management that attempted to increase productivity and concomitantly strengthen capitalism (1) . Nowadays, workers are conditioned to strenuous jobs, afraid of losing them (2) . Moreover, outsourcing has brought instabilities such as the fragility of employment relationships, which contributes to the worsening of working conditions (3) .
The relationship between the health-disease process and work is complex and depends on multiple factors related to individual values and purposes and to how the worker reacts to and copes with the situations to which they are exposed (4) . It results, therefore, from the intersection between internal characteristics, socially constructed characteristics, working conditions and coping management (4) .
Health work is specific in its relational aspect, and depending on how the work is organized, allowing workers' participation or not, it leads to the production of live or dead work. Dead work is performed rigidly, with time control, in line with production lines, while live work is creative, allowing workers to use their potential and creativity to achieve the objectives of their work (5) .
Therefore, depending on how the work is organized, both physical and mental health may be compromised. The deficient or absent interpersonal relationship between workers and between managers and workers compromises mental health, and accidents at work or occupational diseases are related to impaired physical health (3) .
Although all working groups are exposed to a greater or lesser degree to pleasure and suffering, some are at greater risk of falling ill in the context of work, as is the case in the hospital area. The hospital is a space full of singularities, encompassing a complex organization composed of different groups of workers and bodies of knowledge (5)(6)(7) . It is an environment that poses several risks to workers' health, being considered "unhealthy and dangerous" (7) . Psychosocial risk is among the main causes of morbidity among hospital workers (8) .
In this context, the nursing (NS), nutrition (NUS), and hygiene and cleaning (HCS) services should be highlighted. NS does not always have enough workers, in number and qualification, to meet the demands of care, being exposed to work overload with little effective participation in decisions that affect the service organization, which compromises their empowerment (6)(7) . Similarly, NUS is composed of workers with different levels of education, most of whom have no specific training. It is a job that requires prompt service and is associated with complaints of joint pain, repetitive tasks, physical effort, noise, temperature, prolonged standing, lack of task definition, among other factors (7,9) .
In this setting, HCS, which has little social recognition and low remuneration, is also relevant. Outsourcing, practiced in several countries, has been increasing the precariousness of working conditions (10) , and the cleaning service, being a support activity, that is, one not inherent to the main objective of the institution, has been the focus of this type of contract, mainly in the hospital area.
This study was based on the hypothesis that health, living and working conditions differ among hospital workers, especially for HCS professionals when compared to those from NUS and NS, because they perform work with less social recognition and under an outsourced contract.
OBJECTIVES
To compare social, health and working conditions among workers from nursing, nutrition and hospital cleaning services.
Ethical aspects
The study was approved by the Research Ethics Committee and by the Research Center of the hospital under study, in accordance with the recommendations of Resolution 466/2012 of the Brazilian National Health Council (Conselho Nacional de Saúde). Those invited to participate in the study read and signed the Informed Consent Form.
Study design, place, and period
This is a quantitative, cross-sectional, descriptive and correlational study carried out in a public teaching hospital in the countryside of São Paulo, a regional reference for high complexity with 318 beds. The study took place from August 2015 to December 2016.
Population, sample, inclusion and exclusion criteria
The study population comprised 582 workers from NS, 78 from NUS and 94 from HCS; the first two groups were hired under the CLT (Consolidation of Labor Laws) regime and HCS through an outsourced company. The sample was stratified proportionally. A sample-size calculation was performed only for NS, in order to ensure proportionality in relation to the other professional groups: a prevalence of 50% was assumed, with a margin of error of 5% and 95% confidence, corrected for the finite population and stratified by working groups, which resulted in 35 nurses and 61 technicians. For NUS and HCS, the entire population was used, due to the small number of professionals. This yielded 268 eligible participants, sampled by convenience among those who agreed to participate, in equal numbers for each work shift. Included were workers from NS and NUS with an open-ended employment contract and outsourced HCS workers, all with more than six months at the institution. Workers from these teams who were in administrative functions or who were absent at the time of data collection were excluded. From NUS, 22 refused to participate and three were on sick leave. Among HCS professionals, four were on sick leave and 12 did not accept to participate in the research. Therefore, 227 professionals were interviewed.
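The sampling step above can be sketched with the standard Cochran formula plus a finite-population correction. This is an illustrative reconstruction, not the authors' exact computation: applied to the full NS population it yields a larger figure than the 96 NS workers reported, because the study further stratified by working groups (nurses vs. technicians) in a way the text does not fully detail.

```python
from math import ceil

def sample_size(population, prevalence=0.5, margin=0.05, z=1.96):
    """Cochran sample size with finite-population correction.

    z = 1.96 corresponds to 95% confidence; a prevalence of 0.5
    maximizes the required sample, as assumed in the study.
    """
    n0 = (z ** 2) * prevalence * (1 - prevalence) / margin ** 2
    return ceil(n0 / (1 + (n0 - 1) / population))

# Whole NS population of 582 workers (illustrative only):
print(sample_size(582))  # 232
```

Note how the correction matters for a finite pool: without it the 95%/5% design would require about 385 respondents regardless of population size.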
Study protocol
Data collection took place during the morning, afternoon and evening working hours, individually and privately. The printed instrument was delivered to the participants, and the main investigator remained on site to collect the completed questionnaire and support participants if necessary. Filling it in took an average of 30 minutes. The questionnaire was made up of open and closed questions: a) identification: age, sex, marital status, skin color and education; b) socioeconomic data: individual monthly income based on the then-current minimum wage of R$ 937.00 (nine hundred and thirty-seven reais); domestic activities, grouped (I do the smaller part/I share/I do the greater part); and the Brazil Economic Classification Criteria (11) (CCEB - Critério de Classificação Econômica Brasil), adopted to classify the population into socioeconomic strata A, B1, B2, C1, C2 and D-E; c) lifestyle: cigarette consumption (yes/no); leisure (more than one answer allowed); physical activity measured through the International Physical Activity Questionnaire (IPAQ), classified as active (> 150 minutes/week of at least moderate activity) or non-active (< 150 minutes/week, considering activities performed at work as well) (12); and participation in conflicts involving family members or co-workers, grouped as rarely or frequently (13); d) health conditions: reported health problems (assessed by means of a list of diseases, with more than one answer allowed); regular use of medications (yes/no); presence of common mental disorders (CMD) assessed with the Self-Report Questionnaire-20 (SRQ-20), translated and validated for use in Brazil by Mari and Williams (14), with twenty dichotomous responses (yes/no), the cut-off point for the identification of CMD being six or more positive responses for men and eight or more for women (15)(16)(17); and Body Mass Index (BMI) based on reported weight and height, with BMI greater than or equal to 30 classified as obese.
Self-perceived health status was collected by asking: in general, compared to people your age, how do you consider your state of health? The answers (very bad/bad/regular and good/very good) were grouped as negative and positive, respectively; e) working conditions: length of service, working period, overtime at the institution, having another job, occupational accidents at the institution (yes/no) and type of accident (16)(17).
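The two screening rules described above (the sex-specific SRQ-20 cut-off and the BMI obesity threshold) can be written as a minimal sketch; the function names are ours, not from the study.

```python
def has_cmd(positive_answers, sex):
    """Common mental disorder (CMD) screening from the SRQ-20.

    Study cut-offs: >= 6 positive answers for men, >= 8 for women.
    """
    cutoff = 6 if sex == "male" else 8
    return positive_answers >= cutoff

def is_obese(weight_kg, height_m):
    """Obesity defined in the study as BMI >= 30 (self-reported data)."""
    bmi = weight_kg / height_m ** 2
    return bmi >= 30

print(has_cmd(7, "male"))    # True  (7 >= 6)
print(has_cmd(7, "female"))  # False (7 < 8)
print(is_obese(95, 1.70))    # True  (BMI ~= 32.9)
```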
The psychosocial aspects of work were verified through the Job Content Questionnaire (JCQ), in the version translated and validated into Portuguese by Araújo and Karasek (18), which assesses psychological demand and control over work, in addition to physical effort, social support and insecurity at work, on a scale with scores from 1 to 4 (strongly disagree, disagree, agree, and strongly agree). The sum of each dimension varied from 10-41 for demand and 20-52 for control. The "demand" dimension adds psychological and physical demands. The psychological demands consider the pressure to carry out activities in relation to time, the level of concentration, the frequency of interruption of work and the need to wait for the completion of other workers' tasks in order to proceed. The physical demand comprises physical effort, speed, excessive amount of tasks and physically uncomfortable positions in carrying out the tasks. The "control" dimension comprises the use of skills (the possibility of learning new things, creativity, repetitiveness, variety of tasks and individual special skills) and decision-making authority (influence both within the group of workers and in the institution's management policy) (16)(17)(18). The sums of the items related to demand and control were computed according to the recommended model (19). For dichotomization of control (low/high) and demand (low/high), a cut-off point was established at the mean, which in this study was 31 for both, according to the Job Content Questionnaire User's Guide recommendations (19)(20). Indicators were built from the grouping of variables in the questionnaire, as proposed by Karasek, who considers occupational stress to occur when demands at work exceed workers' response conditions and the level of control available (16)(17)(18)(19)(20)(21)(22)(23).
The results of the JCQ make it possible to distinguish four work situations: 1) high strain, comprising high demand and low control; 2) active work, comprising high demand and high control; 3) passive work, comprising low demand and low control; 4) low strain, comprising low demand and high control (17)(18)(19)(20)(21). These four situations were grouped into a single variable.
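The quadrant grouping follows directly from the dichotomized scores. In the sketch below both dimensions are cut at the study's sample mean of 31, with scores above the mean counted as high (the paper does not state how scores exactly at the mean were assigned, so that choice is ours), and we use the conventional Karasek labels, where "high strain" corresponds to the first situation and "low strain" to the fourth.

```python
def jcq_quadrant(demand, control, cutoff=31):
    """Classify a worker into a Karasek demand-control quadrant.

    Both summed scores are dichotomized at the cut-off (the sample
    mean, 31, in this study); > cutoff counts as high.
    """
    high_demand = demand > cutoff
    high_control = control > cutoff
    if high_demand and not high_control:
        return "high strain"      # high demand, low control
    if high_demand and high_control:
        return "active work"
    if not high_demand and not high_control:
        return "passive work"
    return "low strain"           # low demand, high control

print(jcq_quadrant(38, 24))  # high strain
print(jcq_quadrant(25, 40))  # low strain
```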
Analysis of results and statistics
For data organization and analysis, a Microsoft Excel spreadsheet and the Statistical Package for the Social Sciences (SPSS) version 23.0 for Windows were used, respectively. Descriptive analyses of absolute (n) and relative (%) frequencies and inferential analyses of the variables were performed. The Kolmogorov-Smirnov test was performed to verify the normal distribution of the data. To test the association between categorical variables, Pearson's chi-square test or Fisher's exact test was used. Professional groups were used as the dependent variable and the other variables as independent. The level of significance adopted was 5%.
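For a 2x2 group-by-variable table, Pearson's chi-square statistic has a closed form; the sketch below (pure Python, not the SPSS routine the authors used) compares the statistic against the 3.841 critical value for 1 degree of freedom at the 5% level. The counts are made up for illustration.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: two groups crossed with a dichotomized outcome
stat = chi_square_2x2(30, 10, 15, 25)
print(round(stat, 2), stat > 3.841)  # 11.43 True -> association at the 5% level
```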
RESULTS
There were significant differences between the professional groups in terms of age over 50 years, with NS in a smaller proportion; male sex, less frequent in NUS; living with a partner, in a higher percentage in NS; and brown or black skin color and low education, prevailing among HCS professionals. In the comparative analysis of the socioeconomic profile across the three professional groups, the differences in individual income were significant: nurses have a higher income, belong to socioeconomic classes A, B1 and B2 and perform less domestic activity compared to the other groups. Domestic activities are performed mostly by HCS workers, who belong to socioeconomic classes C1, C2 and D-E. In NUS, most workers earn up to two current minimum wages and more than 30% are in social classes C1, C2 and D-E, a proportion even higher among HCS workers (Table 1).
As for lifestyle habits, there was a significant difference (p < 0.05) between the groups in relation to leisure activities, with NS performing them most and HCS participating least, especially resting/sleeping, going to bars, church and traveling. Nursing had the highest percentage of non-active workers in relation to physical activity, and the nutrition service had the highest percentage regarding the frequency of involvement in conflicts (Table 2).
Among the reported health problems, osteoarticular problems appeared in a greater proportion among HCS workers, and irritability in the NS group. A higher prevalence of diabetes and obesity was observed among NUS workers (Table 3). When comparing work characteristics across the three groups, the results showed significant differences, with the following items more frequent in the nursing team: length of service at the institution; overtime greater than 5 h/week; having another job; night work; previous sharps (needlestick) accidents; decision-making authority; and social support at work. HCS workers were the ones who most reported low decision-making authority and high job insecurity, as well as passive and high-strain work characteristics (Table 4).
DISCUSSION
The present study sought to compare the living and health conditions between three different groups of workers who work in the hospital, in order to highlight the similarities and differences between the conditions that can lead to their illness.
Other studies among health workers also observed the predominance of the female sex, the age group of 36 to 50 years, living with a partner and completed high school (15)(16)(17)(21), the only divergence being self-reported skin color, for which, in this case, white prevailed. The results for HCS workers are in line with another study in which the majority had brown or black skin and incomplete primary education (17,24).
The high percentage of women in these professions occurs because, socially, these roles are recognized as eminently female, being linked to the care of life and health. Accordingly, these professions are less valued in terms of wages when compared to male-dominated ones, socially understood as professions linked to competitiveness and achievement (8). Also, the lowest individual income among the groups is that of HCS, as it is an occupation that does not require specific training or high education, even though its activities involve risks to people's health and life, as they essentially deal with hospital infection control. This work is socially viewed as simple, of lower status and little recognized, often going unnoticed, which generates negative feelings and interferes in the health-disease process (24).
Since HCS is performed by a third-party company, its workers have different employment contracts than the other groups. Outsourcing, introduced with the proposal of making companies more competitive, contributed to job insecurity, since most outsourced workers have reduced wages and fewer labor benefits when compared to workers hired for an indefinite period or as statutory employees (16)(17)(25). Currently, outsourcing has grown considerably, representing more than 20% of the entire labor market, with wage reductions of around 27% and an increase of about three hours in weekly working time (26). This situation is reflected in the finding that outsourced HCS workers presented the smallest percentages regarding the possibility of leisure. The preference for resting/sleeping over other leisure activities, revealed by the majority in the three professional groups, may indicate physical or mental exhaustion due to work in the hospital environment, which requires living with death, pain, and illness (8).
It is necessary to analyze health workers separately because both health behavior and lifestyle can differ between groups according to occupation. Although health workers are regarded as examples to be followed, they do not always meet this expectation, since they suffer the same social and environmental influences as other people (27). Considering physical activity, the three professional groups were classified as active, with HCS showing the highest percentage. The literature points out that physical activity of moderate intensity, for at least 30 minutes, performed throughout life brings benefits to physical and mental health, preventing chronic non-communicable diseases such as hypertension and diabetes (28).
Involvement in conflicts was greater among NS and NUS professionals than among HCS professionals. Although conflict is inherent in human relationships, it can have both positive and negative consequences, and each situation must be considered, as losses may be significant. When conflict occurs in the work context, many episodes have been found to result from the distribution of power and prestige typical of hospital organizations (13) . It is worth noting that the nutrition and nursing teams have more diversified functions and different levels of training, comprising roles performed by both technical and higher-education professionals. This gives rise to a greater dispute for power, differentiating the levels of autonomy and of control over work.
Osteoarticular problems were found predominantly among HCS professionals, in line with the finding that musculoskeletal diseases result from the workload and the type of service performed, as also observed in a study conducted with cleaning-service workers of a public hospital in the interior of the state of Rio Grande do Sul (15) . Services that demand great physical effort, such as loading and unloading materials, people, or food, are related to the onset of musculoskeletal problems. HCS encompasses activities with an intense work pace and physical effort related to garbage disposal and moving furniture, among others, which, according to the literature, has contributed to the development of occupational diseases (15)(16)(17)(24) . The greater physical effort at work, combined with domestic activities, among HCS professionals corroborates a study that found a greater burden among those who performed domestic services more often during the week (17,29) . Socially, domestic service is still the responsibility of women, who are in charge of caring for the family. This role is not recognized as work because it is unpaid, and many studies do not account for this type of work when women are included in samples for burden analysis.
Among NUS professionals, there was a higher percentage of diabetes and obesity. Obesity has been growing year by year across countries, reaching epidemic proportions, and is recognized worldwide as a public health problem. The World Health Organization states that obesity and overweight are responsible for metabolic and cardiovascular complications that reduce life expectancy, with type II diabetes being one of the main complications (30) . The presence of obesity among kitchen workers was verified by Boclin and Blank, who compared BMI with work in the kitchen and laundry teams at eight public hospitals in southern Brazil. According to the authors, the habit of eating small amounts throughout the workday may have contributed to employees' obesity and overweight (31) .
Regarding the health of workers, it was identified that 22.9% had scores above the cut-off line for CMD. Brazilian studies have found percentages between 20 and 56% of CMD with a predominance of female workers, attributing this to hormonal causes and domestic overload (24,30,32) . It is inferred that these causes may also have contributed to the results related to the higher percentage of irritability reported by the nursing team.
Anxiety and depression are important disorders, and people with low education, low socioeconomic status and black skin are more vulnerable (24,27) .
Mental disorder, whether due to anxiety, depression or some other, compromises the quality of work, since it favors the increase in the rate of absenteeism or presenteeism at work, being the main cause of early retirements in many countries and a great burden on the economy (29,32) . Work situations that lead to stress, such as high demands, reduced number of workers, intimidation, harassment, among others, cause damage to workers' health, generating dissatisfaction with work (21,(33)(34) .
Job satisfaction is related to the degree to which workers' needs are met by the organization of work. Poor working conditions, such as an insufficient number of workers and an intense workload, lead to dissatisfaction and abandonment of the profession (35) . In the present study, although there were no statistically significant differences between the groups of workers, almost a third of them were dissatisfied or very dissatisfied. This finding possibly stems from the fact that all groups are exposed to health risks and that the participants are mostly women, since women are more dissatisfied with work than men (35) .
The work was recognized among the groups differently in terms of their characteristics. In NS and NUS, workers mostly classify work as active and of low demand, which characterizes it as "live work", as it allows workers to be creative and have autonomy (5)(6) , while the work developed in HCS was classified as passive and highly demanding. This type of work contributes strongly to the worker's mental illness, since it does not allow the construction of autonomy and decision making in the exercise of their activities, being recognized as dead work (5,21,33) . Control over work, which allows workers to use skill and decision-making authority, was greater among NS and lesser among HCS.
As for insecurity, there was a significant difference between the groups: the HCS team had the highest percentage of insecurity and the lowest percentage of social support. Relationships with colleagues and managers in the work environment are extremely important for coping with everyday problems, directly affecting well-being, mental health, and job satisfaction. Good-quality interpersonal relationships involve clear and objective information about work and social support, both of which contribute positively to job satisfaction (35) .
Social support has been important because it interferes with work relationships, reducing stress and helping to cope with adversity at work. However, the lack of companionship, individualism, the lack of recognition of the potential and individual creativity at work as well as the lack of autonomy and freedom are drivers of negative feelings that compromise relationships at work (21,33,35) . In this context, there is evidence that precarious work, focused on productivity, fragmented and stressful is associated with suicide (36) . Underreporting and the absence of public policies aimed at preventing and identifying factors that may lead to suicide at work put workers' health at risk (36) .
Study limitations
One limitation of the study is the existence of different levels of training among the groups of workers, which certainly calls for more detailed and in-depth studies. Furthermore, the cross-sectional design makes it difficult to verify changes in health and working conditions over time. The fact that the study was carried out in a single hospital and did not include workers who were already away from work or on sick leave may generate potential biases.
Contributions to health
The findings, however, highlighted the context of groups of workers who demand specific care, guiding the planning of health promotion actions that should be placed as a priority by the institution, especially by the sectors responsible for workers' health.
CONCLUSIONS
In the present study, we sought to compare the living, working, and health conditions of three groups of hospital workers who, to a greater or lesser extent, remain close to patients, performing functions essential to maintaining life. Starting with sociodemographic characteristics, HCS workers are those with the lowest education and with black or brown skin color.
Regarding lifestyle, practices unfavorable to good health were found in all groups, but with significant differences between them. Compared with the other groups, HCS workers have watching television as their main leisure activity. As for physical activity, the nursing team was the least active, being the one that remains seated the longest.
Concerning health problems, the nutrition team showed a greater proportion of obesity and diabetes. In the nursing team, the complaint of irritability stands out. HCS professionals manifested osteoarticular problems, which shows an association with activities performed at work, indicating that health promotion interventions in the context of work should be directed according to the professional category.
In the present study, although health risk factors were confirmed in all groups studied, there was a marked precariousness when it comes to HCS professionals in terms of devaluation of the activity and outsourcing of services. This condition is even more worrying when analyzing the recent changes in labor legislation, those that expand the possibility of outsourcing and temporary work.
Thus, even though workers are recognized as a fundamental part of the institution for achieving quality and success in the work process, workers' health has not received sufficient investment from companies. The public policies created in this area have not yet yielded satisfactory results, since their requirements do not yet include worker health surveillance. As a result, the number of sick professionals in need of temporary or permanent leave is growing, without being addressed by actions that could contribute positively to their physical and mental health and increase their pleasure, satisfaction, and self-realization.
FUNDING
We are grateful to the Coordination for the Improvement of Higher Education Personnel (CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for the scholarship granted, which enabled exclusive dedication and the sandwich doctorate in Spain.
Genomic Characterization of Clinical Extensively Drug-Resistant Acinetobacter pittii Isolates
Carbapenem-resistant Acinetobacter pittii (CRAP) is a causative agent of nosocomial infections. This study aimed to characterize clinical isolates of CRAP from a tertiary hospital in Northeast Thailand. Six isolates were confirmed as extensively drug-resistant Acinetobacter pittii (XDRAP). The blaNDM-1 gene was detected in three isolates, whereas blaIMP-14 and blaIMP-1 were detected in the others. Multilocus sequence typing with the Pasteur scheme revealed ST220 in two isolates, ST744 in two isolates, and ST63 and ST396 in the remaining two isolates, respectively. Genomic characterization revealed that all six XDRAP genomes contained antimicrobial resistance genes: ST63 (A436) and ST396 (A1) each contained 10 antimicrobial resistance genes, while ST220 (A984 and A864) and ST744 (A56 and A273) contained 9 and 8 antimicrobial resistance genes, respectively. The single nucleotide polymorphism (SNP) phylogenetic tree revealed that isolates A984 and A864 were closely related to A. pittii YB-45 (ST220) from China, while A436 was related to A. pittii WCHAP100020, also from China. The A273 and A56 isolates (ST744) clustered together; these isolates were closely related to strains 2014S07-126, AP43, and WCHAP005069, which were isolated in Taiwan and China. Strict implementation of infection control based upon the framework of epidemiological analyses is essential to prevent outbreaks and contain the spread of the pathogen. Continued surveillance and close monitoring with molecular epidemiological tools are needed.
Introduction
Acinetobacter calcoaceticus-baumannii complex (ACB complex) includes A. baumannii, A. calcoaceticus, A. pittii, A. nosocomialis, A. seifertii, and A. dijkshoorniae [1][2][3]. These are primary causes of nosocomial infection [1][2][3]. Among them, A. baumannii is the most clinically relevant and the most common cause of nosocomial infection worldwide. So far, most studies have focused on A. baumannii, with relatively few on A. pittii because of its low prevalence and low rates of resistance in past decades. Recently, however, A. pittii has shown increased carbapenem resistance and changes in its resistance mechanisms. Carbapenem-resistant A. pittii (CRAP) has been extensively reported and has disseminated worldwide [4,5]. It is associated with human infection and intestinal carriage and is recognized as a significant cause of nosocomial infection in various countries, particularly in intensive care unit settings [1,4,5]. In Taiwan, the percentage of A. pittii increased by 4.6%, and its rates of resistance to carbapenems increased from 4.5% in 2010 to 9.3% in 2012 and 25.8% in 2014 [6]. A study in a French hospital from January 2010 to December 2017 revealed that 73 of 120 cases were classified as hospital-acquired bacteraemia; 54.8% (n = 40) were associated with A. pittii, 39.7% (n = 29) with A. baumannii, and 5.5% (n = 4) with A. nosocomialis [5].
Horizontal gene transfer is an important contributor to the spread of carbapenem-hydrolyzing class D β-lactamases (CHDLs) among Acinetobacter species, particularly A. pittii, mainly in Asia [7]. Previously, OXA-58-like enzymes and metallo-β-lactamases (MBLs) were primarily responsible for CRAP, but blaOXA-23-like and blaOXA-24-like genes have recently become more common [6]. The major mechanisms of resistance in CRAP found in Thailand include production of OXA-23 and OXA-58 [7,8]. Apart from blaOXA genes, MBL genes such as blaIMP-14a have been reported in CRAP isolates from Thailand, while blaNDM-carrying organisms have been reported in countries such as Malaysia, Taiwan, South Korea, Japan, and Brazil, but not in Thailand [4,6,[8][9][10][11]. Genomic characterization of metallo-β-lactamase-harboring A. pittii from Thailand had not been investigated prior to this study.
In this study, we characterized the antimicrobial susceptibility, resistance genes, plasmid types, and genetic relationships of CRAP harboring blaNDM and blaIMP isolated from patients in Northeast Thailand. We demonstrated that almost all the CRAP isolates used in this study showed extensive drug resistance (XDR). In addition, the genomic sequences of all extensively drug-resistant Acinetobacter pittii (XDRAP) strains were comparatively analyzed.
Ethics
This study was reviewed and approved by the Roi Et Hospital Ethics Review Board (ERB); the ethics approval number is 034/2560. The medical records of seven patients were reviewed by the attending physicians at the hospital using the clinical case record form approved by the ERB. The ERB waived the requirement for signed informed consent from patients; however, the attending physicians provided written informed consent for all cases, as the study satisfied the conditions of the policy statement on ethical conduct for research involving humans. This study was conducted according to the principles of the Declaration of Helsinki.
Bacterial Identification
From April 2017 to March 2018, we established laboratory-based surveillance to detect carbapenem-resistant Gram-negative bacteria in an 800-bed tertiary-care hospital in Roi Et province, northeastern Thailand. The inclusion criterion was that all carbapenem-resistant Acinetobacter calcoaceticus-baumannii complex (CRACB) isolates were collected from any specimen during the surveillance program. A total of 832 nonrepetitive CRACB isolates were collected. Presumptive identification was performed at the hospital using conventional biochemical tests [12]. All CRACB isolates were sent to our laboratory for species-level identification using gyrB-multiplex PCR [13], and A. baumannii was confirmed by PCR for the blaOXA-51-like gene, which is intrinsic to A. baumannii [14].
PCR-Based Replicon Typing
Plasmid replicons were determined in all CRAP isolates by a PCR-based replicon typing method (Table S2; [19]). Nineteen different homology groups (GRs) were detected based on nucleotide sequence similarities among 27 replicase genes.
Multilocus Sequence Typing
Multilocus sequence typing (MLST) was performed according to the Pasteur scheme (https://pubmlst.org/abaumannii/) using seven housekeeping genes (gltA, gyrB, gdhB, recA, cpn60, rpoD, and gpi). The sequence types (STs) were identified by comparing the allele sequences against the MLST database. A goeBURST analysis of the sequence types was performed using the PHYLOViZ 2.0 program [20]. Phylogenetic trees for all STs were constructed from concatenated sequences using MEGA-X (version 10.1.7) software [21].
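ST assignment from the seven-locus allelic profile is an exact lookup against the profile table maintained at pubmlst.org. A minimal sketch of that lookup; the two-entry profile table below is hypothetical and for illustration only, as real allelic profiles must be downloaded from the database:

```python
# Sketch of MLST sequence-type assignment. The profile table here is
# hypothetical; real allelic profiles come from https://pubmlst.org/.

LOCI = ("gltA", "gyrB", "gdhB", "recA", "cpn60", "rpoD", "gpi")

# Hypothetical profile table: ST -> allele number observed at each locus.
PROFILES = {
    220: (1, 12, 3, 2, 4, 5, 9),
    744: (1, 15, 3, 2, 4, 7, 11),
}

def assign_st(alleles):
    """Return the ST whose allelic profile matches exactly, or None."""
    observed = tuple(alleles[locus] for locus in LOCI)
    for st, profile in PROFILES.items():
        if observed == profile:
            return st
    return None  # novel profile: submit to the database for a new ST

isolate = dict(zip(LOCI, (1, 12, 3, 2, 4, 5, 9)))
print(assign_st(isolate))  # matches the hypothetical ST220 profile above
```

An isolate whose profile matches no entry would be a candidate novel ST, which curators assign a new number.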
For comprehensive genomic analysis, we used BacWGSTdb (http://bacdb.org/BacWGSTdb), which allowed us to find the closest isolates currently deposited in the GenBank database [27]. The whole-genome sequences of 37 strains closely related to our A. pittii isolates were downloaded from the GenBank database. Genomic comparison was conducted using a reference genome-based single nucleotide polymorphism (SNP) strategy with CSI Phylogeny [31]. Phylogenetic trees were then constructed in MEGA-X via the neighbor-joining method with 500 bootstrap replicates, applying the Tamura three-parameter model [21]. The phylogenetic tree was visualized using the Interactive Tree of Life (iTOL) (http://itol.embl.de) [32].
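Reference-based SNP strategies ultimately reduce isolate comparison to counting nucleotide differences between aligned core-genome sequences, from which a distance-based tree (e.g. neighbor-joining) is built. A stdlib-only sketch of the pairwise SNP-distance step, using toy 12-bp sequences rather than real A. pittii data:

```python
# Sketch of the pairwise SNP-distance computation underlying a
# reference-based SNP phylogeny. The toy "core genome" alignments below
# are invented; tools such as CSI Phylogeny derive them by mapping whole
# genomes to a reference and filtering variant sites.
from itertools import combinations

aligned = {
    "A984": "ACGTACGTACGT",
    "A864": "ACGTACGTACGA",  # 1 SNP relative to A984
    "A436": "ACGAACGTTCGT",  # more distant
}

def snp_distance(s1, s2):
    """Count positions where two equal-length aligned sequences differ."""
    return sum(a != b for a, b in zip(s1, s2))

# Pairwise distance matrix: input for a neighbor-joining tree builder.
dist = {
    (x, y): snp_distance(aligned[x], aligned[y])
    for x, y in combinations(sorted(aligned), 2)
}
for pair, d in dist.items():
    print(pair, d)
```

The resulting distance matrix is what a neighbor-joining implementation (as in MEGA-X) consumes to produce the tree topology.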
Statistical Analysis
The clinical characteristics of XDRAP cases were analyzed by comparison with those of XDR A. baumannii (XDRAB) cases collected during the same study period. Of the total 832 CRACB cases, 6 were XDRAP, while 18 were XDRAB. Clinical data from these cases were analyzed by logistic regression using Stata version 12.0 software (StataCorp, College Station, TX, USA). Data were considered significant at p < 0.05.
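For a single binary covariate (e.g. sex), univariable logistic regression is equivalent to estimating the odds ratio of the 2x2 table, so the core calculation can be sketched directly. The XDRAP row below uses the 5 male / 1 female split reported in this study; the XDRAB split is hypothetical, since the paper does not give it:

```python
# Odds ratio with a Woolf (log-scale) 95% CI, the quantity a univariable
# logistic regression with one binary covariate estimates. XDRAB counts
# below are hypothetical; XDRAP counts are from the paper (5 male, 1 female).
import math

a, b = 5, 1   # XDRAP: male, female (reported in this study)
c, d = 10, 8  # XDRAB: male, female (hypothetical split of the 18 cases)

odds_ratio = (a * d) / (b * c)
log_or = math.log(odds_ratio)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
ci_low = math.exp(log_or - 1.96 * se)
ci_high = math.exp(log_or + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

With only six XDRAP cases, the interval is very wide, which illustrates the limited-power caveat raised later in the Discussion.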
Nucleotide Sequence Accession Numbers
The assembled genomic sequences were deposited in the NCBI Genbank Database under the Bioproject accession number of PRJNA602201.
Identification, Susceptibility, and Genotyping
The study criteria included only carbapenem-resistant Acinetobacter calcoaceticus-baumannii complex (CRACB). Of the 832 CRACB isolates used in this study, 826 (99.3%) were identified as A. baumannii and 6 (0.7%) as A. pittii. Among the 826 A. baumannii, 18 isolates (2.2%) were XDR. All the A. pittii isolates in this study were resistant to carbapenems and carried blaNDM-1, blaIMP-1, and blaIMP-14 genes, as well as oxacillinase genes such as blaOXA-10, blaOXA-58, and blaOXA-23. Table 1 shows the clinical data of the six patients, of whom five were male (83%) and one female (17%), with an age range of 19-73 years. Three cases were classified as hospital-acquired infections, whereas the rest were classified as colonization. Five of the six patients survived, while no data were available for one case.
The results of the antimicrobial susceptibility tests are shown in Table 2. All carbapenem-resistant Acinetobacter pittii (CRAP) isolates were resistant to ceftazidime, cefepime, cefotaxime, ceftriaxone, doripenem, imipenem, meropenem, piperacillin, and trimethoprim-sulfamethoxazole. All the isolates were intermediately resistant to colistin. Three isolates were susceptible to gentamicin and amikacin, while four isolates were susceptible to ciprofloxacin and tetracycline. Five isolates were identified as extensively drug-resistant (XDR) as defined in the guideline described elsewhere [33].
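The XDR definition applied here (per the international consensus criteria cited as [33]) is non-susceptibility to at least one agent in all but two or fewer antimicrobial categories, with MDR meaning non-susceptibility in three or more categories. A sketch of that classification rule; the category list and susceptibility calls below are illustrative, not the paper's full organism-specific panel:

```python
# Sketch of the MDR/XDR rule from the international consensus definitions
# cited as [33]: MDR = non-susceptible to >=1 agent in >=3 categories;
# XDR = non-susceptible to >=1 agent in all but <=2 categories.
# Category names and results below are illustrative only.

def classify(results):
    """results: {category: {agent: 'S'|'I'|'R'}} -> 'XDR'|'MDR'|'non-MDR'."""
    total = len(results)
    nonsusceptible = sum(
        any(call in ("I", "R") for call in agents.values())
        for agents in results.values()
    )
    if nonsusceptible >= 3 and total - nonsusceptible <= 2:
        return "XDR"
    if nonsusceptible >= 3:
        return "MDR"
    return "non-MDR"

isolate = {
    "carbapenems": {"imipenem": "R", "meropenem": "R"},
    "cephalosporins": {"ceftazidime": "R", "cefepime": "R"},
    "penicillins": {"piperacillin": "R"},
    "folate-pathway inhibitors": {"trimethoprim-sulfamethoxazole": "R"},
    "polymyxins": {"colistin": "I"},
    "fluoroquinolones": {"ciprofloxacin": "R"},
    "aminoglycosides": {"gentamicin": "S", "amikacin": "S"},
    "tetracyclines": {"tetracycline": "S"},
}
print(classify(isolate))  # susceptible in only 2 of 8 categories -> XDR
```

Note that the formal definitions enumerate the categories relevant to each organism, so a production implementation would fix the category list rather than infer it from the input.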
MLST analysis revealed that the six CRAP isolates belonged to four STs: two (A864 and A984) were assigned to ST220, two (A56 and A273) to ST744, and one each to ST396 (A1) and ST63 (A436), according to the Pasteur scheme (Table 1). The goeBURST analysis displayed the clonal complexes of CRAP, as shown in Figure 1. ST396 and ST744 were closely related to ST839, whereas ST220 was related to ST207. ST63 was related to ST64 and ST208. A phylogenetic tree constructed using the concatenated sequences of the four STs is shown in Figure S1. It demonstrated that ST63 was closely related to ST208, while ST744 was closely related to ST122 and ST121. ST396 was closely related to ST839 and ST840. ST220 was related to ST207, ST666, ST227, and ST1206.
As shown in Figure 3, the whole-genome SNP analysis using CSI Phylogeny revealed that isolates A984 and A864 were closely related to the reference A. pittii YB-45 (ST220) isolate from China, recovered from sputum, while isolate A436 was related to A. pittii strain WCHAP100020, also isolated in China. By contrast, the A273 and A56 isolates clustered together; these isolates were related to strains 2014S07-126, AP43, and WCHAP005069, which were isolated from Taiwan and China. The isolate A1 clustered together with our isolate A436 and WCHAP100020; however, it is located on a different branch.
Figure 3. Whole-genome phylogeny analysis of A. pittii generated by CSI Phylogeny and visualized with the Interactive Tree of Life tool. The whole-genome sequences of A. pittii from our study are highlighted in yellow, and A. pittii-ST220-China, used as the reference genome, is denoted by a red square box. Sequence types (STs) and β-lactamase genes are shown for each isolate. Filled symbols indicate the presence of the genes, whereas unfilled symbols indicate their absence.
Discussion
Over the last decade, carbapenemase-producing A. pittii has become prominent in several countries and is increasingly considered a nosocomial pathogen [34,35]. A previous study in Thailand revealed that 6.4% (22/346) of isolates were A. pittii, of which 22.7% (5/22) were carbapenem-resistant [8]. Our study revealed 0.7% of A. pittii in a hospital in rural Thailand (lower than that reported previously), but all of them were carbapenem-resistant. All the patients survived. XDRAP showed a correlation with male sex and older age; however, the small number of XDRAP cases observed in this study limited this analysis. A retrospective study conducted at a teaching hospital in Taiwan revealed that the 14-day and 28-day mortality rates of A. pittii bacteremia were 14% and 17%, respectively [36]. A study in Thailand demonstrated that patients infected with carbapenem-susceptible A. nosocomialis and A. pittii had lower 30-day mortality than those infected with carbapenem-susceptible or carbapenem-resistant A. baumannii [37]. Moreover, a recent study demonstrated that A. seifertii and A. pittii presented higher pathogenicity in in vitro and in vivo models than A. baumannii and A. nosocomialis [38].
In the present study, four STs (ST63, ST220, ST396, and ST744) were assigned to CRAP, of which ST220 was the most predominant. This ST has been reported in Japan and China, carrying blaNDM-1 as in our isolates [4,53]. ST744 was the second most predominant ST in this study; according to the MLST database (https://pubmlst.org/bigsdb?page=profileInfo&db=pubmlst_abaumannii_pasteur_seqdef&scheme_id=2&profile_id=744), it has been found in Germany. ST63 has been reported in Japan, Korea, and China [11,54,55], and ST396 has also been reported in Korea [11]. Interestingly, ST220 seems to be the most susceptible to aminoglycoside agents. In our study, 66.6% (2 isolates) of ST220 were susceptible to netilmicin, gentamicin, and amikacin. Two ST220 isolates reported elsewhere showed similar patterns: A. pittii SU1805 (ST220), isolated from a hospital sink in Japan, was susceptible to gentamicin and amikacin, whereas A. pittii YB-45 from China was susceptible to gentamicin and tobramycin [4,53].
Whole-genome sequences of A. pittii have been reported for ST119, ST207 (strain TCM292), ST220 (strain YB-45), and ST865 (strain TCM156), among several strains deposited in GenBank [44,53,56,57]. Whole-genome SNP phylogeny revealed that our A436 (ST63) isolate was closely related to strain WCHAP100020 from China. The XDRAP isolates A984 and A864 (ST220) clustered with strain YB-45/ST220 from China and strain ASO12594 from the United States of America. A56 and A273 clustered together and are closely related to strains 2014S07-126, AP43, and WCHAP005069, isolated from Taiwan and China. Isolate A1 (ST396) clustered together with isolates A436 and WCHAP100020. Each cluster, however, shares a common ancestor. Whole-genome sequencing is a powerful tool for source tracking, surveillance monitoring, and the analysis of population dynamics.
Acinetobacter baumannii is of concern to the World Health Organization because it resists most commercially available antibiotics and causes hospital-acquired infections. Increasing numbers of multidrug-resistant A. pittii and XDRAP worldwide require strengthening of official surveillance and close monitoring in order to prevent outbreaks and contain the spread in parallel with A. baumannii.
A Novel Method for Measuring Serum Unbound Bilirubin Levels Using Glucose Oxidase–Peroxidase and Bilirubin-Inducible Fluorescent Protein (UnaG): No Influence of Direct Bilirubin
The glucose oxidase–peroxidase (GOD–POD) method used to measure serum unbound bilirubin (UB) suffers from direct bilirubin (DB) interference. Using a bilirubin-inducible fluorescent protein from eel muscle (UnaG), a novel GOD–POD–UnaG method for measuring UB was developed. Newborn sera with an indirect bilirubin/albumin (iDB/A) molar ratio of <0.5 were classified into four groups of DB/total serum bilirubin (TB) ratios (<5%, 5–10%, 10–20%, and ≥20%), and the correlation between the UB levels and iDB/A ratio was examined. Linear regression analysis was performed to compare UB values from both methods with the iDB/A ratio from 38 sera samples with DB/TB ratio <5% and 11 samples with DB/TB ratio ≥5%. The correlation coefficient (r) between UB values and the iDB/A ratio for the GOD–POD method was 0.8096 (DB/TB ratio <5%, n = 239), 0.7265 (5–10%, n = 29), 0.7165 (10–20%, n = 17), and 0.4816 (≥20%, n = 16). UB values using the GOD–POD–UnaG method highly correlated with the iDB/A ratio in both <5% and ≥5% DB/TB ratio sera (r = 0.887 and 0.806, respectively), whereas a low correlation (r = 0.428) occurred for ≥5% DB/TB ratio sera using the GOD–POD method. Our GOD–POD–UnaG method can measure UB levels regardless of the presence of DB.
Introduction
An increase in serum unconjugated bilirubin (indirect bilirubin, iDB) levels, most importantly free bilirubin not bound to albumin (unbound bilirubin, UB), is associated with the development of serious brain injury in newborns, called bilirubin encephalopathy [1,2]. In Japan, bilirubin encephalopathy is an etiology of dyskinetic cerebral palsy and abnormal auditory brainstem response. This is of particular
Study A: Impact of Serum DB Levels on the Correlations between iDB/A Ratio and Serum UB Levels in Clinical Data
In this study, we collected clinical data from 345 newborn patients. To evaluate the validity of UB values, we analyzed 301 samples with an indirect bilirubin/albumin (iDB/A) ratio from 0.1 to 0.5, the range over which the linear correlation between the UB value and the iDB/A ratio is maintained (as described in Section 4). These 301 samples were then classified into four groups according to the DB/TB ratio, namely <5% (n = 239), 5-10% (n = 29), 10-20% (n = 17), and ≥20% (n = 16). The correlation (slope and correlation coefficient) between UB values and the iDB/A ratio in each group is shown in Figure 1; the slope increases and the correlation coefficient decreases with increasing DB/TB ratio. The number of samples above the line corresponding to the upper limit of the 95% confidence interval (CI) in the DB/TB <5% group (outliers) was compared between the groups. In comparison to the DB/TB ratio <5% group, in which 14% of samples were above the upper limit line of the 95% CI, the other groups had significantly higher proportions of samples above this limit: 31% of the 5-10% group, 65% of the 10-20% group, and 81% of the ≥20% group (p < 0.001, Figure 2).
Figure 2. Comparison of the number of samples higher than the upper limit line of the 95% confidence interval of the DB/TB ratio <5% (outliers). CI, confidence interval; DB, direct bilirubin; TB, total serum bilirubin.
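The outlier criterion in Study A (samples lying above the upper 95% limit of the reference-group regression) can be sketched as follows. This simplification approximates the upper limit as fit + 1.96 x residual SD, which may differ from the paper's exact CI construction, and the data are synthetic:

```python
# Sketch of the Study A outlier flagging: fit a least-squares line of UB
# on iDB/A for the reference (DB/TB < 5%) group, then flag points above
# an approximate upper 95% limit (fit + 1.96 * residual SD). Data are
# synthetic, not the study's measurements.
import statistics

ref = [(0.1, 0.10), (0.2, 0.21), (0.3, 0.29), (0.4, 0.42), (0.5, 0.50)]
xs, ys = zip(*ref)
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mx) * (y - my) for x, y in ref) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
resid_sd = statistics.stdev([y - (slope * x + intercept) for x, y in ref])

def is_outlier(x, y):
    """True if (x, y) lies above the approximate upper 95% limit."""
    return y > slope * x + intercept + 1.96 * resid_sd

test_points = [(0.3, 0.31), (0.3, 0.60)]  # second point sits far above the line
print([is_outlier(x, y) for x, y in test_points])
```

Counting flagged points per DB/TB group and comparing proportions is then the step reported as p < 0.001 in the text.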
Study B: Impact of Serum DB Levels on Measuring Serum UB Levels Using the GOD-POD-UnaG Method
A total of 49 serum samples from 34 newborn patients were classified as having a low DB/TB ratio (<5%, n = 38) or a high DB/TB ratio (≥5%, n = 11) for analysis. The low DB/TB ratio samples were obtained from 30 newborn patients (median gestational age: 38 weeks; median birth weight: 2707 g), with a median age at sampling of five days (1-19 days), median TB level of 13.0 mg/dL (3.1-18.2 mg/dL), median DB level of 0.2 mg/dL (0.1-0.3 mg/dL), and median UB level of 0.48 µg/dL (0.04-0.87 µg/dL). Table 1 displays the details of the 11 samples with high DB/TB ratios. These samples were obtained from four patients (one with trisomy 18, two with congenital cytomegalovirus (CMV) infection, and one with methylmalonic acidemia). For these patients, the median TB level was 8.60 mg/dL, median DB level was 3.50 mg/dL, median iDB level was 2.60 mg/dL, median DB/TB ratio was 69.4%, and median albumin level was 3.10 g/dL. Of these 11 samples, one had a DB/TB ratio of 5-10%, two had ratios of 10-20%, and eight had ratios of 20% or more. To test the validity of UB values determined by the GOD-POD method and the GOD-POD-UnaG method, we investigated the correlation between the UB values and the iDB/A ratios of the low DB/TB ratio samples (n = 38) and the high DB/TB ratio samples (n = 11). With the GOD-POD method, the 38 samples with a low DB/TB ratio showed a significant correlation (r = 0.864, p < 0.0001), whereas the 11 samples with a high DB/TB ratio did not (r = 0.428, p = 0.189). In contrast, with the GOD-POD-UnaG method, both the 38 low DB/TB ratio samples (r = 0.887, p < 0.0001) and the 11 high DB/TB ratio samples (r = 0.806, p = 0.0003) showed significant correlations (Figure 3).
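The correlation coefficients reported throughout Study B are Pearson r values between paired measurements. A stdlib-only sketch of the computation, using toy paired data rather than the study's measurements:

```python
# Pearson correlation coefficient, as used to compare UB values against
# the iDB/A ratio for each assay method. Toy data, not study values.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Tightly linear toy pairs give r close to 1, as in the low DB/TB group.
idb_a = [0.10, 0.20, 0.30, 0.40, 0.50]
ub    = [0.11, 0.19, 0.31, 0.40, 0.52]
print(round(pearson_r(idb_a, ub), 3))
```

A weaker, scattered relationship, like the high DB/TB group under the GOD-POD method, would yield a correspondingly lower r.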
More specifically, when dividing the DB/TB ratio into four groups as in Study A, applying the GOD-POD method to serum with a DB/TB ratio ≥5% resulted in deviation from the regression line of the low DB/TB ratio sera (n = 38). Employing the GOD-POD-UnaG method resulted in samples plotted near the regression line of the low DB/TB ratio sera (n = 38) regardless of the DB/TB ratio (Figure S1). Table 1 shows the UB values for the 11 high DB/TB ratio samples obtained from the GOD-POD and GOD-POD-UnaG methods. Despite measuring the same samples, the resulting values differed considerably between the two methods, with the UB values obtained from the GOD-POD method being significantly higher than those from the GOD-POD-UnaG method (p = 0.0011). For example, in Sample 1, although the DB/TB ratio was 8.2%, the UB values determined from the GOD-POD and GOD-POD-UnaG methods were quite different (1.62 and 0.99 µg/dL, respectively). We then analyzed the correlation between UB values determined from the GOD-POD and GOD-POD-UnaG methods. The 38 samples with a low DB/TB ratio showed a significant correlation (r = 0.935, p < 0.0001) between the two methods, whereas the 11 samples with a high DB/TB ratio resulted in a lower correlation (r = 0.623, p = 0.041); the UB values determined with the GOD-POD method were higher than those determined with the GOD-POD-UnaG method (Figure 4). Figure S2 shows the graph of UB values determined from the GOD-POD and GOD-POD-UnaG methods with the DB/TB ratios classified into four groups as in Study A. Although there is no fixed trend depending on the DB/TB ratio values, sera with a DB/TB ratio of ≥5% often deviated from the correlation line between the two methods.
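The method-agreement analysis above is a Pearson correlation on paired UB values. A minimal sketch with synthetic (hypothetical) pairs, illustrating how inflated GOD-POD values in high-DB/TB sera weaken the correlation:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired UB values (µg/dL), not the study data: for high-DB/TB
# sera the GOD-POD values are inflated by DB oxidation, so agreement with
# the GOD-POD-UnaG values drops.
low_godpod = [0.20, 0.35, 0.50, 0.70, 0.90, 1.10]
low_unag = [0.22, 0.33, 0.52, 0.68, 0.88, 1.12]
high_godpod = [1.62, 0.80, 1.40, 0.95, 1.70, 0.60]
high_unag = [0.99, 0.30, 0.45, 0.55, 0.40, 0.35]

r_low = pearson_r(low_godpod, low_unag)
r_high = pearson_r(high_godpod, high_unag)
```

With these synthetic pairs the low-ratio correlation is near 1 while the high-ratio correlation drops sharply, qualitatively reproducing the reported pattern (r = 0.935 versus r = 0.623).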
Discussion
In Study A, the GOD-POD method had a lower correlation with the UB value in the presence of DB with a DB/TB ratio ≥5% in newborn serum. This impact became more profound as the DB/TB ratio became larger and the number of outliers increased. It has been reported that it is difficult to obtain accurate UB levels from the GOD-POD method when the DB/TB ratio is ≥10% [1]. Our findings reveal that the influence of the DB/TB ratio in fact occurs at ratios lower than 10% (and at least as low as 5%). In Study B, we validated the GOD-POD-UnaG method that we developed for comparison with the established GOD-POD method using clinically obtained neonatal sera. With low DB/TB ratio (<5%) sera, the UB values from the GOD-POD-UnaG method were similar to those of the GOD-POD method. Notably, when using the GOD-POD-UnaG method, we demonstrated for the first time that even a high DB/TB ratio (≥5%) of serum does not affect the UB value.
Due to the development of perinatal medical care in recent years, severely ill infants (e.g., extremely preterm infants and infants with gastrointestinal anomalies or chromosomal disorders) can now survive.
Neonatologists are encountering an increase in opportunities to treat and care for newborn patients exhibiting high DB/TB ratios in NICUs [8]. These newborn patients are more likely to develop hypoalbuminemia and are often treated with drugs that alter albumin binding (e.g., antimicrobial drugs, fat formulations, and indomethacin), which consequently reduce bilirubin binding to albumin and can result in hyper-unbound bilirubinemia [1]. Therefore, accurate UB measurements have become paramount for a favorable prognosis in such patients. In infants with conjugated hyperbilirubinemia, there is an urgent need to develop a methodology that can measure UB levels regardless of the presence of DB.
Analysis of the correlation between UB and the iDB/A ratio was limited to serum with an iDB/A ratio <0.5, because the relationship between the two remains linear when the iDB/A ratio is <0.5 [11], i.e., when the equilibrium state of the first binding site of albumin is established. When the ratio is ≥0.6, the linear relationship between UB and the iDB/A ratio may not hold due to an equilibrium condition associated with the second binding site of albumin. As this study concerned the influence of DB in the GOD-POD method, we limited the analysis to serum samples with an iDB/A ratio <0.5 so that the analysis was simpler and easier to understand.
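The near-linearity of UB versus the iDB/A ratio at low ratios follows from simple 1:1 binding at albumin's first site. A minimal sketch of that equilibrium (the dissociation constant and concentrations are hypothetical values in µM; this is not the paper's calibration):

```python
import math

def unbound_bilirubin(b_total, alb_total, kd):
    """Free (unbound) bilirubin for 1:1 albumin binding (first site only).

    The complex concentration c solves (b_total - c)(alb_total - c) = kd * c,
    i.e. c**2 - (b_total + alb_total + kd)*c + b_total*alb_total = 0;
    the physically meaningful root is the smaller one.
    """
    s = b_total + alb_total + kd
    c = (s - math.sqrt(s * s - 4.0 * b_total * alb_total)) / 2.0
    return b_total - c

# With albumin in excess, UB grows almost linearly with the B/A ratio:
ub1 = unbound_bilirubin(50.0, 500.0, 1.0)   # B/A = 0.10
ub2 = unbound_bilirubin(100.0, 500.0, 1.0)  # B/A = 0.20
```

Doubling the bilirubin load roughly doubles the unbound fraction in this regime, which is the linearity the study relies on; once the first site approaches saturation (ratios near 0.5-0.6) this simple picture breaks down.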
The UB-Analyzer (UA-2) that uses the GOD-POD method can measure serum TB and UB levels [7]. UB is rapidly oxidized to colorless compounds by POD in the presence of hydrogen peroxide derived from glucose by mediation of GOD. First, TB levels are determined by direct absorbance measurement at 460 nm. Next, under experimental conditions where bilirubin oxidation follows first-order kinetics, the rate constant is determined by measuring the oxidation velocity of bilirubin in the absence of albumin. The initial velocity is estimated from the time required for a 20% decrease in concentration from the initial TB concentration. UB is calculated from the initial velocity of bilirubin degradation and the ratio of the POD concentration to that in the standard assay of the albumin-free bilirubin solution [1, 6,7] (Figure 5a). As UnaG is a protein characterized by iDB concentration-dependent fluorescence [9,10], the GOD-POD-UnaG method involves the division of the serum sample into two samples to calculate the difference in TB level initially and after 20 s of POD reaction (TB reduction rate). Therefore, it is necessary to stop the POD reaction after 20 s, which can be accomplished by adding ascorbic acid (patent registration number: 6716108) (Figure 5b). As a result, we were able to establish a GOD-POD-UnaG method with a fixed reaction time (20 s), which is beneficial for the development of an automated measuring device that is currently under development in our laboratories.
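The two velocity estimates described above can be sketched as follows (the inputs are illustrative; the final UB value additionally requires the POD-concentration ratio against the standard assay, which is omitted here):

```python
import math

def v0_god_pod(tb0, dt_20pct):
    """GOD-POD: assuming first-order decay TB(t) = TB0*exp(-k*t), the rate
    constant is recovered from the measured time for a 20% drop,
    k = -ln(0.8)/dt, and the initial oxidation velocity is v0 = k*TB0."""
    k = -math.log(0.8) / dt_20pct
    return k * tb0

def v0_god_pod_unag(idb_initial, idb_after_20s):
    """GOD-POD-UnaG: the velocity is taken directly as the iDB decrease
    (measured via UnaG fluorescence) over the fixed 20 s POD reaction."""
    return (idb_initial - idb_after_20s) / 20.0

v_a = v0_god_pod(13.0, 30.0)       # TB = 13 mg/dL, 20% drop in 30 s
v_b = v0_god_pod_unag(12.8, 12.0)  # iDB falls 0.8 mg/dL over 20 s
```

The key design difference is visible here: the GOD-POD estimate depends on total bilirubin (so oxidizable DB inflates it), whereas the fixed-time UnaG estimate tracks only iDB.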
Because UnaG emits very intense fluorescence and accurate measurements are hindered by an internal shielding effect when a high concentration of iDB is present, we diluted the solutions 800-fold. However, if we consider the equilibrium relationship between bilirubin and albumin, it is important to use a more concentrated solution for the UB measurement requiring the reaction with POD [12]; an 800-fold dilution for the reaction with POD would result in inaccurate UB levels. Therefore, a key feature of this GOD-POD-UnaG method is the use of a two-step dilution process, whereby the POD reaction proceeds first with a 51-fold dilution in the same manner as the GOD-POD method, and then a subsequent 800-fold dilution is used for the UnaG fluorescence measurement.
In clinical practice, a newborn patient with a high serum DB/TB ratio may require phototherapy and exchange transfusion for treatment, which is administered based on the UB values measured by the GOD-POD method; however, if judged by the results from the GOD-POD-UnaG method, this patient does not need such treatments. Out of the 11 high DB/TB ratio serum samples in Table 1, three serum samples (Samples 1, 4, and 11) had UB values ≥1.0 from the GOD-POD method, which indicates the need for exchange transfusion according to the 1992 Kobe University Treatment Criteria [13,14]. However, the GOD-POD-UnaG results for these samples indicated that not only exchange transfusion but also phototherapy would be unnecessary for the patients of serum samples 4 and 11. The results from the GOD-POD-UnaG method for serum sample 1 indicate criteria for exchange transfusion [13,14], even when taking the influence of DB into consideration. Therefore, the patient with serum sample 1 could be clearly diagnosed with serious unconjugated hyperbilirubinemia. Furthermore, while the results from the GOD-POD method for three other samples (samples 6, 7, and 8) indicated the need for phototherapy (UB ≥0.6) for these patients based on the 1992 Kobe University Treatment Criteria [13,14], the results from the GOD-POD-UnaG method indicate that phototherapy is unnecessary. Indeed, a clinical issue in Japan is the overestimation of UB values from the GOD-POD method that can lead to the overtreatment of hyperbilirubinemia. As there were patients with serious UB levels even after removing the impact of DB, it is desirable to confirm UB values of high DB/TB ratio serum samples with the GOD-POD-UnaG method.

Figure 5. Calculation method of UB using estimated initial velocity of the GOD-POD and GOD-POD-UnaG methods. (a) In the GOD-POD method, the UB value is calculated by estimating the initial velocity based on the time taken for TB to reduce by 20% (∆t). (b) In the GOD-POD-UnaG method, the UB value is calculated based on the rate of reduction of iDB (∆iDB) over 20 s. GOD, glucose oxidase; iDB, indirect bilirubin; POD, peroxidase; UB, unbound bilirubin; UnaG, bilirubin-inducible fluorescent protein from eel muscle.
A limitation of this study was the number of cases examined. However, we were able to show clearly from both clinical and fundamental science perspectives that the GOD-POD method affects the UB levels when the DB/TB ratio in patient serum is ≥5%. As the GOD-POD-UnaG method in this study involved manual labor and the use of expensive microplate readers, we were unable to examine numerous serum samples. Going forward, an automated device needs to be developed to perform GOD-POD-UnaG measurements with a larger number of newborn sera to verify these results and provide a clinically feasible alternative to the GOD-POD method. Finally, because other novel UB measurement methods using fluorescence or potentiometric sensors have been developed [15,16], further studies are needed to compare the UB levels obtained with those methods and our GOD-POD-UnaG method.
Setting and Study Design
We conducted a multicenter retrospective study using the clinical data of serum TB and DB values in newborns admitted to the NICUs at Kobe University Hospital, Kakogawa City Hospital, and Hyogo Prefectural Kobe Children's Hospital, Japan, from 2011 to 2014. The study protocol was approved by a central institutional review board at Kobe University Hospital (approval no. 1825, date 27 October 2015). Formal written informed consent was not required due to the retrospective nature of the study, which used anonymized data generated from our regular practice. This study protocol was announced on our website.
Subjects
We retrospectively analyzed the blood test values of newborn patients admitted for jaundice management that exhibited TB levels ≥5.0 mg/dL (one test value per patient). A total of 345 patients, including 270 patients with DB <0.5 mg/dL and 75 patients with DB ≥0.5 mg/dL were analyzed.
Measurement Methods
Serum UB levels were measured using the UB-Analyzer (UA-2). TB and DB measurements were performed using the bilirubin oxidase method (IatroLQ T-bil and IatroLQ D-bil kits (Unitika Co., Okazaki, Japan) or Nescoat VL T-bil and Nescoat VL D-bil kits (Alfresa, Osaka, Japan)) or the vanadic acid oxidation method (Total Bilirubin E-HA Test and Direct Bilirubin E-HA Test (Wako Co., Osaka, Japan)). The IatroLQ D-bil kit detects only conjugated bilirubin as DB. Meanwhile, the Nescoat VL D-bil kit and Direct Bilirubin E-HA Test detect conjugated bilirubin and δ-bilirubin as DB. Serum iDB concentrations were calculated using the formula (iDB) = (TB) − (DB) with 1 mg/dL = 17.1 µM. Serum albumin was measured using a modified bromocresol purple method (Kainos Laboratories, Inc., Tokyo, Japan).
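The iDB/A molar ratio used throughout can be computed from these routine values. A minimal sketch; the albumin molar-mass conversion (~66.5 kDa) is an assumption not stated in the paper, and the example inputs are illustrative values similar to the medians above (note that subtracting per-column medians need not reproduce the reported median iDB):

```python
def idb_over_albumin(tb_mg_dl, db_mg_dl, alb_g_dl, alb_mw=66500.0):
    """Molar iDB/A ratio from routine laboratory values.

    iDB = TB - DB; 1 mg/dL bilirubin = 17.1 uM (as stated in the paper).
    The albumin conversion g/dL -> uM assumes a molecular weight of
    ~66.5 kDa, which is an assumption, not a value from the paper.
    """
    idb_um = (tb_mg_dl - db_mg_dl) * 17.1
    alb_um = alb_g_dl * 10.0 / alb_mw * 1e6  # g/dL -> g/L -> mol/L -> uM
    return idb_um / alb_um

# Illustrative inputs resembling the high-DB/TB medians (TB 8.60 mg/dL,
# DB 3.50 mg/dL, albumin 3.10 g/dL):
ratio = idb_over_albumin(8.60, 3.50, 3.10)
```

With these inputs the ratio comes out well below 0.5, i.e., inside the linear regime to which the analysis was restricted.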
Study Methods
When the serum DB level can be regarded as almost zero and the TB/albumin (TB/A) molar ratio is between 0.1 and 0.5, the TB/A ratio is linearly correlated with serum UB [11]. In the present study, to work with sera with a high level of DB, we investigated the correlation between iDB/albumin (iDB/A) molar ratio and UB values. The correlation (correlation coefficient and the slope) between UB values and the iDB/A ratio was compared by classifying patients into four groups of DB/TB ratios (i.e., <5%, 5-10%, 10-20%, and ≥20%). In addition, the 95% CI was set using the DB/TB ratio <5% group, with the number of samples above the upper limit of the 95% CI (outliers) compared between the groups. Statistical analyses were performed with the Chi-square test, or Fisher's exact test as appropriate.
Setting
For study B, blood samples were obtained from newborns for routine laboratory tests for a variety of medical reasons at the NICU in Kobe University Hospital. Residual blood samples after performing regular laboratory tests were immediately centrifuged at 3000× g for 10 min in the dark, and then the sera were stored at −20 °C in the dark until used. UnaG was provided by the Brain Science Institute, RIKEN, Japan [9]. The study protocol was approved by the ethical committee of Kobe University Graduate School of Medicine (approval no. 1618, date 20 August 2014). Informed consent was obtained from parents of newborns prior to blood sample collection. The methods were carried out in accordance with the approved guidelines.
GOD-POD Method Protocol
The UB-Analyzer (UA-2) was used in accordance with the recommended manufacturer instructions. Briefly, 1000 µL of phosphate buffered saline (PBS) with glucose was first mixed with 20 µL of artificial iDB solution or sera (51-fold dilution). The reaction was then initiated with the addition of 25 µL GOD-POD solution. The initial velocity of total bilirubin oxidation was monitored by absorbance spectroscopy, and then automatically calculated as the UB value [7].
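The dilution arithmetic behind these volumes can be checked directly (a sketch; the stated 51-fold factor refers to buffer plus sample before the enzyme addition, and the subsequent 800-fold dilution for the UnaG fluorescence step is performed separately afterwards):

```python
# Dilution bookkeeping for the GOD-POD step, using the protocol volumes above.
pbs_ul, sample_ul, enzyme_ul = 1000.0, 20.0, 25.0

# (1000 + 20) / 20 = 51-fold, matching the stated dilution factor.
first_dilution = (pbs_ul + sample_ul) / sample_ul

# After the 25 µL GOD-POD solution is added, the sample is diluted slightly
# further (1045/20 = 52.25-fold).
final_dilution = (pbs_ul + sample_ul + enzyme_ul) / sample_ul
```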
Statistical Analysis
Statistical analyses were performed with the Mann-Whitney nonparametric rank test to compare the two independent datasets, which are expressed as the median (range) ( Table 1). Regression analysis was performed to linearly compare UB values using either the GOD-POD-UnaG or GOD-POD method with the iDB/A ratio and regression equations. Correlation coefficients (r) were calculated using JMP 13.0.0 (SAS Institute, Cary, NC, USA). Correlations and differences were deemed statistically significant when p < 0.05.
Conclusions
We clearly demonstrated that UB levels measured using the GOD-POD method can be affected when the DB/TB ratio is ≥5%, suggesting that DB is easily oxidized by POD, leading to an overestimation of the UB value. Importantly, we developed the GOD-POD-UnaG method as a novel UB measurement method. This method can measure the UB levels in newborn sera, regardless of the presence of DB, highlighting an attractive alternative method to the conventional GOD-POD assay, especially for the prognosis of newborns with high DB serum.
Patents
The GOD-POD-UnaG method was registered with the international patent office on 12 June 2020 (registration number: 6716108).
Conflicts of Interest:
The authors declare no conflict of interest. I.M. has received study grants and lecture and manuscript fees from Atom Medical Corp. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Cryo-EM Structures of CusA Reveal a Mechanism of Metal-Ion Export
The bacterial RND superfamily of efflux pumps mediate resistance to a variety of biocides, including Cu(I) and Ag(I) ions. Here we report four cryo-EM structures of the trimeric CusA pump in the presence of Cu(I). Combined with MD simulations, our data indicate that each CusA protomer within the trimer recognizes and extrudes Cu(I) independently.
KEYWORDS CusA, antimicrobial resistance, cryo-EM, efflux pump, resistance-nodulation-cell division

In Gram-negative bacteria, efflux systems of the resistance-nodulation-cell division (RND) superfamily significantly affect both the intrinsic and acquired tolerance levels of the organism to antimicrobial agents and toxic metal ions, including Cu(I) and Ag(I) (1). Typically, an RND efflux pump works in conjunction with a periplasmic membrane fusion protein and an outer membrane channel to form a functional tripartite protein complex. Escherichia coli harbors seven of these efflux pumps, which can be characterized into two distinct classes, the hydrophobe-amphiphile efflux RND (HAE-RND) and the heavy-metal efflux RND (HME-RND) families (1).
The HAE-RND efflux pumps have been studied extensively. Several X-ray and cryo-electron microscopy (cryo-EM) structures have been determined within this family of membrane proteins, including E. coli AcrB (2)(3)(4)(5)(6), Pseudomonas aeruginosa MexB (7), Neisseria gonorrhoeae MtrD (8,9), Campylobacter jejuni CmeB (10), and Acinetobacter baumannii AdeB (11). It has been proposed that these RND efflux pumps utilize a rotating mechanism, where the three subunits within the trimeric pump are synchronized and coordinated to advance the transport cycle for drug extrusion (3,6,12). However, a direct observation of transport dynamics using single-molecule fluorescence resonance energy transfer (FRET) imaging indicated that each protomer of the trimeric CmeB multidrug efflux pump undergoes uncoordinated conformational transitions and functions independently of the others (10). Therefore, the action mechanism of these HAE-RND multidrug efflux pumps still remains elusive.
In contrast, the structural and molecular mechanisms of the HME-RND efflux pumps are far less studied. At present, only two crystal structures belonging to the HME-RND pumps are available. These pumps are the E. coli CusA (13) and Cupriavidus metallidurans CH34 ZneA (14) heavy-metal efflux transporters. In addition, the co-crystal structure of the CusA-CusB transporter-adaptor complex has been reported (15), providing the first structural evidence that a trimeric efflux pump within the RND superfamily interacts with six adaptor molecules to assemble and function.
To elucidate the mechanisms of heavy-metal recognition and extrusion of the E. coli CusA efflux transporter, we define cryo-EM structures of this membrane protein embedded in lipidic nanodiscs in the presence of Cu(I) ions. Cryo-EM is an imaging technique that snapshots single-particle images at random orientations in a frozen-hydrated state. It has a capability of recording different conformational states of proteins and other biomacromolecules within a single sample (16,17). Recently, it has also been shown that this technique is capable of simultaneously solving structures of a variety of membrane proteins from a heterogeneous, impure sample (18). With the cryo-EM approach, we are able to observe detailed structural information of various transient states that the CusA pump may need to adopt in order to recognize and extrude metal ions. We here present four cryo-EM structures of the trimeric CusA efflux pump, either alone or bound with Cu(I). We observe that CusA can form both symmetric and asymmetric trimers, with different CusA protomers within the trimer able to bind Cu(I) simultaneously. We also conduct molecular dynamics (MD) simulations to demonstrate transitions between different states captured from the cryo-EM structure and observe a proton permeation pathway at the transmembrane domain. On the basis of our findings, we propose a mechanistic model of transport that suggests each CusA protomer functions independently within the trimer.
RESULTS
Structural determination of the CusA heavy-metal efflux pump. Previously, the structures of CusA, both in the absence and presence of Cu(I) or Ag(I), have been determined by X-ray crystallography (13). In the absence of Cu(I) or Ag(I), the structure of apo-CusA presents the "resting" state conformation, where the periplasmic cleft is closed. However, upon the addition of metal ion, both the CusA-Cu(I) and CusA-Ag(I) structures are characterized by the open cleft conformation, where the PC2 subdomain is found to swing away from the PC1 subdomain by 30° when compared with the apo-CusA structure. This conformational shift allows the periplasmic cleft to open (13). In addition, a single Cu(I) or Ag(I) is found to bind in the middle of the transient methionine triad M573-M623-M672 situated deep inside the cleft. The CusA-Cu(I) and CusA-Ag(I) structures are nearly identical to each other and represent the "binding" state of the CusA pump.
To continue to elucidate the molecular mechanism of the HME-RND efflux pumps, we decided to solve cryo-EM structures of CusA embedded in nanodiscs in the presence of Cu(I). As the approach of cryo-EM captures single-particle images in a frozen solution state, these images should reflect the different conformations that the CusA pump can possibly achieve in the free solution environment before being frozen. These cryo-EM images should contain critical structural information of different transient states that the CusA pump must go through during the transport cycle. Extensive classification of the single-particle cryo-EM images of the CusA-Cu(I) complex revealed that there were four distinct populations of the CusA particles with different conformations that coexist in the nanodisc sample. Surprisingly, two of these structures illustrated that the trimeric CusA pump assembles as symmetric trimers, where either the three CusA protomers within the trimer are bound by Cu(I), or none of these CusA protomers are occupied by metal ions. The other two structures represent asymmetric trimers, where either one or two CusA protomers within the trimer are bound by Cu(I).
Structure of trimeric CusA with two closed periplasmic clefts and one open periplasmic cleft. The most abundant conformation of trimeric CusA in our cryo-EM sample represents the transient state with two of the periplasmic clefts formed by subdomains PC1 and PC2 closed and the third cleft completely open. We collected a total of 75,703 single-particle projections for this class of images and determined the structure to a resolution of 2.82 Å (Fig. 1; see also Fig. S1 and Table S1 in the supplemental material). The structure of this conformational state of CusA has two protomers with their periplasmic clefts closed that appear identical. These two protomers represent the "extrusion" form of CusA, as a channel for extrusion is found in each protomer, similar to those found in the HAE-RND efflux pumps AcrB (3,6), CmeB (10), and MtrD (9) (Fig. 1A to C and Fig. S1). The conformations of these two CusA protomers are also quite distinct from the structures of the three identical protomers of apo-CusA, as determined by X-ray crystallography in the absence of Cu(I). This apo form corresponds to the "resting" state of the CusA pump. Superimposition of an "extrusion" protomer from the cryo-EM structure of CusA onto a "resting" protomer from the crystal structure of apo-CusA results in a root mean square deviation (r.m.s.d.) of 1.5 Å (for 1,029 Cα atoms). For the third CusA protomer, an elongated channel is observed within its periplasmic domain, where this channel leads through the opening of the periplasmic cleft and allows the interior of the protomer exposure to solvent (Fig. 1A and B). This protomer depicts the "binding" state of the membrane protein, where its conformation is comparable to those "binding" protomers identified in AcrB (3,6), CmeB (10), and MtrD (9). The cryo-EM structure of this "binding" protomer is also nearly identical to the conformation of those protomers obtained from the X-ray structures of the CusA-Cu(I) and CusA-Ag(I) complexes (13). Superimposition of this "binding" protomer onto a protomer of the X-ray structure of CusA-Cu(I) gives rise to an r.m.s.d. of 0.8 Å (for 1,029 Cα atoms). It should be noted that the conformations of the transmembrane domain of the CusA-Cu(I) complex in the cryo-EM and X-ray structures are nearly identical, suggesting that the nanodiscs and detergent micelles do not significantly alter the conformation of the transmembrane helices. The assignments of the CusA protomers are shown in Fig. S2.

FIG 1 Cryo-EM structure of the CusA trimer in the EEB state. (A) Ribbon diagram of the CusA trimer with two closed cleft "extrusion" state protomers and one open cleft "binding" state protomer viewed from the membrane plane with the "extrusion" and "binding" channels shown in yellow. (B) Ribbon diagram of the EEB CusA trimer viewed from the top of the periplasmic domain illustrating the channels (colored yellow) formed in the "binding" and "extrusion" states of the CusA protomers. (C) A cartoon displaying the conformation of the CusA trimer in the EEB form viewed from the top of the periplasmic domain. In panels A, B, and C, the two "extrusion" protomers and one "binding" protomer of CusA are colored pink, green, and blue, respectively. The bound Cu(I) ion in the "binding" protomer is represented as a dark orange circle. (D) The binding site of the Cu(I) ion (dark orange sphere) within the open periplasmic cleft of the "binding" protomer (blue). The bound Cu(I) ion is coordinated by residues M573, M623, E625, and M672 (gold sticks). The density of bound Cu(I) is shown in transparent light green.
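Superpositions like these report an r.m.s.d. over Cα atoms after optimal rigid-body alignment. A minimal sketch of that calculation using the Kabsch algorithm (the coordinates below are synthetic, not taken from the CusA models):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two N x 3 coordinate sets after optimal superposition.

    Centers both sets, finds the optimal rotation by SVD (Kabsch),
    applies it to Q, and returns the root mean square deviation.
    """
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))   # guard against improper rotations
    U = V @ np.diag([1.0, 1.0, d]) @ Wt  # optimal rotation matrix
    Q_rot = Q @ U.T
    return float(np.sqrt(((P - Q_rot) ** 2).sum() / len(P)))

# A rigid rotation of the same coordinates should give an RMSD of ~0:
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
rmsd = kabsch_rmsd(P, P @ R.T)
```

In practice such comparisons are run over matched Cα atoms of the two protomer models (1,029 atoms here); the synthetic four-point example simply verifies that a pure rotation yields a near-zero RMSD.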
In comparison with the "extrusion" conformers, the C-terminal end of the horizontal helix inside the periplasmic cleft of the "binding" protomer is found to tilt upward by ∼20°, which allows residue M672 to shift toward M573 and M623 to form the three-methionine metal binding site. Interestingly, an extra sphere-shaped density corresponding to the bound Cu(I) ion is found to coordinate the three methionine residues M573, M623, and M672 (Fig. 1D). The nearby conserved acidic residue E625 is also involved in binding Cu(I), stabilizing this ion via charge-charge interaction. With a combination of two "extrusion" protomers and one "binding" protomer, we designate the conformational state of this CusA trimer as the EEB form.
Structure of trimeric CusA with one closed and two open periplasmic clefts. The second most abundant conformation we identified, with 43,395 single-particle counts in our cryo-EM data set, was the trimeric CusA conformation with one periplasmic cleft closed and two clefts open. We solved the cryo-EM structure of this CusA trimer to a resolution of 3.02 Å (Fig. 2, Fig. S1, and Table S1). For the CusA protomer with its periplasmic cleft closed, an "extrusion" channel extends perpendicular to the surface of the inner membrane and through the periplasmic domain. Again, this protomer depicts the "extrusion" state of the membrane protein. The conformations of the two protomers with the periplasmic cleft open are identical to each other, where a channel within the periplasmic cleft is found in each protomer (Fig. 2A to C). Each channel extends in parallel to the surface of the inner membrane, allowing the interior surface of the cleft in each protomer to be solvent exposed. These two protomers are in the "binding" conformation, which is nearly equivalent to the "binding" protomer described above. Similar to the "binding" protomer of the EEB trimer, the C-terminal end of the horizontal helix inside the periplasmic cleft is observed to tilt upward by ∼20°, leading to the formation of the M573-M623-M672 methionine binding site in these two protomers. A Cu(I) ion anchors at the center of the three-methionine binding site of each "binding" protomer, utilizing residues M573, M623, M672, and E625 to secure Cu(I) binding (Fig. 2D). As the structure of this trimeric CusA pump is characterized by one "extrusion" and two "binding" protomers, we label the conformational state of this trimer as the EBB form.
Structure of trimeric CusA with three closed periplasmic clefts. The third most populated conformation of trimeric CusA in our cryo-EM sample has all three periplasmic clefts closed. We obtained 21,108 single-particle images for this conformation in our cryo-EM data set. The cryo-EM structure of this CusA trimer was solved to a resolution of 3.20 Å (Fig. 3, Fig. S1, and Table S1). Interestingly, the conformations of these three protomers are identical, presenting a symmetric, trimeric structure. The conformation of these three protomers is very similar to the conformation of the "extrusion" protomers of the EEB and EBB structures, with an "extrusion" channel observed to extend vertically through the periplasmic domain of each protomer with respect to the membrane surface ( Fig. 3A and B). We, therefore, classify the conformational state of this symmetric trimer as the EEE form of CusA.
Structure of trimeric CusA with three open periplasmic clefts. The least abundant conformation of trimeric CusA has three protomer clefts open. We obtained 13,304 single-particle images for this conformation in our cryo-EM sample data set. We solved the cryo-EM structure of this CusA trimer to a resolution of 3.40 Å (Fig. 4, Fig. S1, and Table S1). In this trimeric CusA structure, the conformations of the three protomers are mostly equivalent, also presenting a symmetric, trimeric configuration. A "binding" channel is found in the periplasmic cleft of each protomer, indicating that the three protomers are in their "binding" conformation (Fig. 4A to C). In addition, it is observed that a bound Cu(I) is anchored at the three-methionine binding site of each protomer, anchored by residues M573, M623, M672, and E625 (Fig. 4D). As the three CusA protomers are in their "binding" form, we label this trimer as the BBB conformation of the CusA pump.
The proton relay network. It has been well established that the proton motive force (PMF) of the cell powers RND efflux pumps to extrude drugs from the periplasmic domain. In the transmembrane domain of CusA, the conserved residues D405, E939, and K984 of the HAE-RND efflux pump family are necessary for the efflux of metal ions (13). The density of our cryo-EM maps unambiguously depicts the side chain positions of these conserved amino acids, allowing us to elucidate the mechanism of proton transfer within this proton relay network. Data have shown that a point mutation at D405, E939, or K984 of CusA renders this pump unable to transport metal ions across the membrane, suggesting that the proton relay network may be necessary for the transition between different conformational states (13). The cryo-EM structures of CusA clearly identify two distinct protomers, labeled as either "binding" or "extrusion" conformers, with distinct structural differences between the two states. When compared to the "binding" protomer, the "extrusion" protomer displays a rearrangement of side chains of the proton relay residues as well as side chain residue shifts within the transmembrane helices TM4, TM7, TM8, TM9, TM10, TM11, and TM12 (Fig. 5A). Coupled with the movement of closing the periplasmic cleft in the "extrusion" state, TM8 is found to shift toward the core by as much as 10 Å. Other transmembrane helices, TM7, TM9, and TM10, also shift horizontally by 4 Å, mimicking the motion of TM8. Interestingly, TM1 and TM4 undergo a 3-Å upward shift with respect to the inner membrane surface. The net result is that all of these transmembrane helices rearrange in a twisting motion constricting the transmembrane domain.
The movements of these transmembrane helices are accompanied by the reorientation of the side chain of the proton sweeper K984 toward the acidic residue D405 (Fig. 5A). In each "binding" conformer, D405 appears to form hydrogen bonds with T987, E939, and K984. Interestingly, the structure of the EEB CusA trimer indicates that there is an alternate conformation of the proton sweeper K984 within the "binding" conformer ( Fig. 5A). The side chain of K984 sweeps away from D405 and shifts toward the cytoplasm. This alternate side chain orientation also creates a new hydrogen bond with S1027. In this manner, the proton can be effectively passed from D405 to K984, which then sweeps and passes the proton to S1027, coupling with the opening of the periplasmic cleft to bind Cu(I). To advance the transport cycle, the CusA protomer may shift to the "extrusion" form as observed in the cryo-EM structures. At this state, the transmembrane helices shift in conformation to constrict the transmembrane domain. The side chain nitrogen of K984 is found to move away from S1027, interacting with M944 to form a dipole-dipole interaction and potentially stabilize this transient state. Additionally, in the "extrusion" state, E939 forms a hydrogen bond with D405 that may be necessary to reset the side chains of the proton relay network residues for the subsequent cycle (Fig. 5A). Taken together, these cryo-EM structures illuminate the role of the proton sweeper K984 in the proton relay network. It appears that K984 plays a major role in transferring protons from D405 to S1027 to advance this transfer process.
The transmembrane methionine relay channel. In the transmembrane region of each CusA protomer, six methionines align to form three pairs. These three methionine pairs are M410-M501, M403-M486, and M391-M1009, which line up with the methionine triad M573-M623-M672 and another methionine pair, M271-M755, within the periplasmic domain, to assemble the methionine relay network (13). Previously, it has been discovered that CusA is capable of transporting metal ions from the cytosol via these methionines. A single point mutation within this methionine network is able to completely abolish metal transport. The ability of RND transporters to pick up substrates from the cytoplasm has also been seen in the ZneA (14), CzcA (19), and AcrD (20) pumps. It appears that the three methionine pairs located at the transmembrane domain are made up of residues from TM4, TM6, and TM12. Therefore, it is likely that these transmembrane helices help shuttle metal ions across the transmembrane region.
The three methionine pairs located within the transmembrane regions of CusA display significantly different conformations between the cryo-EM structures of the "extrusion" and "binding" states of each protomer. For example, the distances between the sulfur atoms of M410-M501, M403-M486, and M391-M1009 are 5.4 Å, 5.9 Å, and 5.7 Å in the "extrusion" state of CusA, respectively. These distances become 12.1 Å, 4.4 Å, and 11.4 Å in the "binding" state (Fig. 5B). As TM4 residues participate in creating both the proton relay network (e.g., D405) and the methionine relay network (e.g., M391, M403, and M410), this transmembrane helix is likely critical for coupling the processes of proton import and metal ion export. It appears that proton transfer via D405 may trigger a conformation change of TM4, which in turn initiates the metal ion transport process.
In each "binding" conformer of CusA, a channel spanning the entire transmembrane region up to the periplasmic domain is observed. This channel passes through the three methionine pairs in the transmembrane domain, presumably relaying the metal ion to shuttle across the membrane. However, this channel is absent in each "extrusion" conformer. Based on the cryo-EM structures, the metal ion from the cytosol would first encounter the M410-M501 pair that extends into the cytosol, forming the entrance of the methionine relay channel formed between TM3, TM4, and TM6 at the transmembrane-cytosolic interface. In the "binding" state, the S-methyl thioether side chains of M410 and M501 appear to face away from one another with the M501 side chain exposed to the cytosol, seemingly allowing the metal ion to enter this channel (Fig. 5B). In the "extrusion" state, this entrance is closed, largely due to the shift in locations of TM5 and TM4, which reorient the side chains of M410 and M501 toward one another, constricting this opening. The next methionine pair in the channel is M403-M486, located in the middle of the transmembrane domain between TM4 and TM6. At this area, Cu(I) or Ag(I) ions could pass between TM4 and TM6 and up to the third methionine pair M391-M1009 located near the outer leaflet surface of the cytoplasmic membrane. Interestingly, the S-methyl thioether side chains of M391 and M1009 are oriented away from each other in each "binding" conformer (Fig. 5B). This conformation allows this transmembrane channel to completely open to the periplasm. When compared with the "binding" protomers, the conformation of each "extrusion" protomer depicts that the transmembrane helices TM4, TM6, TM10, and TM12 move toward one another, constricting the channel and reorienting the side chains of M391 and M1009 to effectually shut the opening of the channel facing the periplasm.
Molecular simulations of CusA cryo-EM structures in both trimeric and monomeric states. We assessed the stability of the cryo-EM structures using molecular dynamics (MD) simulations (Table S2). Here, we calculated the stability of the protein using the r.m.s.d. of all structures obtained with cryo-EM over the first 200 ns of the simulation. Our data indicate that the secondary structures obtained are stable (Cα r.m.s.d. < 3.0 Å) throughout the simulation (Fig. S3A and B). The root mean square fluctuation (r.m.s.f.) of residues is similar in both the "extrusion" and "binding" states. Comparing the trimeric simulation to the monomeric simulation, the r.m.s.f. value differs only between residues 220 to 230, which are located at the trimer interface. The r.m.s.f. values of the "binding" and "extrusion" conformations are likewise similar (Fig. S3B and C). This suggests that each monomer works independently and validates the quality of the structures obtained, which provides a platform for the subsequent studies in this paper.
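The Cα r.m.s.d. stability metric used in these simulations can be sketched in a few lines of Python. The coordinates below are a toy stand-in for trajectory frames, not data from this study, and a real analysis tool would first least-squares superpose each frame onto the reference before reporting r.m.s.d.

```python
import numpy as np

def rmsd(coords, ref):
    """Root-mean-square deviation between two (N, 3) coordinate arrays.

    Assumes the frames are already superposed, as an MD analysis tool
    would do before reporting Calpha r.m.s.d.
    """
    diff = coords - ref
    return np.sqrt((diff * diff).sum() / len(ref))

# Toy example: a reference frame and a frame displaced by 0.5 A in x.
ref = np.zeros((100, 3))
frame = ref.copy()
frame[:, 0] += 0.5
print(rmsd(frame, ref))  # 0.5
```

A trajectory is considered stable here when this value stays below a chosen threshold (3.0 Å in the text) for every frame.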
Cu(I)-dependent conformational change. To assess the role of Cu(I) in the conformational transitions of the periplasmic domain, we performed simulations of the "extrusion" and "binding" protomers of CusA, with and without Cu(I) added. Based on the major differences between the two states, the distance between the Cα atoms of N651 and R705 was used as a metric of conformational change. Simulations of the "extrusion" state [−Cu(I)] and the "binding" state [+Cu(I)] of CusA resulted in a consistent Cα distance between these two residues measured throughout the simulations (Fig. 6A and B). In contrast, removal of Cu(I) from the "binding" state resulted in CusA transitioning toward the "extrusion" state in one of the three simulations performed (Fig. 6A and B). At the end state of this repeat, the Cα distance between N651 and R705 was equivalent to that of the structure of the "extrusion" state (Fig. 6A and B and Fig. S4A).
Evaluation of this simulation suggests that the absence of Cu(I) leads to instability in the coordinating residues D671 and M672. D671 establishes a salt bridge with K678, and this interaction rigidly pulls the PC2 subdomain toward PC1. This transition is further supported by a hydrophobic collapse of the adjacent residues, including the helix that connects PC1 to PC2 (see Movie S1 in the supplemental material).
Conversely, the addition of Cu(I) to the "extrusion" conformation leads to a separation of R705 and N651 in one of the three simulation repeats, as the structure transitions toward a state equivalent to that of the bound structure ( Fig. 6C and D). The dynamics of the motions are similar to a reverse of what is described for the removal of Cu(I) from the "binding" state (Movie S2). These simulations suggest that the conformational changes observed for CusA are driven by Cu(I).
The role of a trimeric structure in a Cu(I)-dependent conformational change. To observe whether the transition in a monomer exists in a trimer, we conducted three repeats of 450-ns simulations on both the trimeric "extrusion" state (EEE) and "binding" state (BBB) with and without Cu(I). For both the EEE state and the BBB state without Cu(I) obtained from the cryo-EM structure, the simulations did not deviate from the starting structures (Fig. S4B). Unlike the monomeric simulations, removal of Cu(I) from the bound state did not result in any structural rearrangement of the PC1 and PC2 domains (Fig. S4B), indicating that the trimeric oligomerization is capable of stabilizing the structures of the BBB form both in the absence and presence of Cu(I).
To test the dynamics of the "extrusion" state of CusA, we added Cu(I) to the binding site in all three subunits of the EEE trimer. In these simulations, we observed transitions from the "extrusion" state toward the "binding" state (Fig. S4B). This transition is similar to that observed with the monomer simulation and occurs independent of the adjacent subunits (Fig. S4C). CusA functions as a trimer, and this trimeric oligomerization is probably capable of stabilizing each conformational state of the pump.
Water wires mediate proton transport through the transmembrane domain of CusA. In addition to evaluating conformational changes of the periplasmic domain, we also investigated the solvent accessibility of the CusA structures in lipid membranes, with the aim of proposing a putative proton transfer pathway through CusA that would be used to drive copper transport. In both the monomeric (E and B states) and trimeric (EEE and BBB states) structures, we observe a membrane-spanning water wire within the transmembrane (TM) domains, where the water wire directly connects the proton relay residues D405, E939, and K984 to the conserved lysine K482 (Fig. 7A and B and Movie S3). To evaluate the path of the water wire, and therefore, by inference, proton transfer, we calculated the pKa values of all acidic and basic residues present in the transmembrane region from the last 100 ns of three 450-ns trajectories of the trimeric structures. The analysis indicates that the pKa values of two pore-lining side chains, K482 and E939, are nearly 7, suggesting that both residues are proton labile (Fig. 7C). As both residues are at either end of the permeation pathway, we calculated the number of water molecules that were found to bridge the two side chains. The most likely number of bridging waters between K482 and E939 was four (Fig. 7D). On the basis of this finding, we suggest that a proton could start on the periplasmic side of the membrane and hop from K482 via the bound water to the proton relay triad D405-E939-K984, and eventually be delivered to the cytoplasmic side through the flipping in conformation of the side chain of the proton sweeper K984.
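Counting the bridging waters between two side chains, as reported above, can be illustrated with a simple shortest-chain search over water oxygens. The 3.5-Å contact cutoff and the toy coordinates below are illustrative assumptions, not the exact analysis protocol used in the study.

```python
import numpy as np
from collections import deque

def bridging_waters(start, end, waters, cutoff=3.5):
    """Minimum number of water oxygens forming a chain (consecutive
    members within `cutoff` angstroms) linking two side-chain atoms;
    returns None if no chain exists. Breadth-first search over the
    contact graph, so the hop count found is minimal."""
    nodes = [start] + list(waters) + [end]
    n = len(nodes)
    pts = np.asarray(nodes)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = dist <= cutoff
    seen, queue = {0: 0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in np.nonzero(adj[i])[0]:
            if j not in seen:
                seen[j] = seen[i] + 1
                queue.append(int(j))
    hops = seen.get(n - 1)                     # hops from start to end
    return None if hops is None else hops - 1  # waters in between

# Toy chain: K482 and E939 stand-ins 12 A apart, four waters ~2.4 A apart.
k482 = np.array([0.0, 0.0, 0.0])
e939 = np.array([12.0, 0.0, 0.0])
waters = [np.array([x, 0.0, 0.0]) for x in (2.4, 4.8, 7.2, 9.6)]
print(bridging_waters(k482, e939, waters))  # 4
```

Running such a count over trajectory frames and histogramming the result gives the kind of "most likely number of bridging waters" statistic quoted in the text.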
DISCUSSION
Here we have defined cryo-EM structures of the CusA metal ion efflux pump in the presence of Cu(I). These structural data allow us to depict four distinct structures of the CusA trimer within a single cryo-EM sample. We revealed that three CusA molecules can assemble as symmetric trimers as indicated by the EEE and BBB trimeric structures, where all three protomers are in identical "extrusion" (E) or "binding" (B) conformations within the trimer. In addition, we detected that the three CusA molecules can assemble as asymmetric trimers as suggested by the EEB and EBB structures. The EEB trimer delineates the assembly of two "extrusion" protomers and one "binding" protomer, whereas the EBB trimer contains one "extrusion" and two "binding" protomers within the CusA trimer.
We clearly observed from the cryo-EM structures that different CusA protomers within the trimer are able to simultaneously accommodate Cu(I). Each "binding" protomer of CusA from the EEB, EBB, and BBB conformational assemblies is occupied by a Cu(I) ion, where the metal ion is coordinated and secured by the familiar three-methionine binding site composed of M573, M623, and M672. The conserved negatively charged residue E625 also contributes to neutralize the formal positive charge of the bound Cu(I) ion. This observation highlights a phenomenon that individual protomers within the CusA trimer are capable of independently binding and exporting metal ions. Our experimental data are in line with results from MD simulations that each CusA protomer works independently. In addition, our cryo-EM and computational findings are in good agreement with the recent study of the C. jejuni CmeB HAE-RND-type multidrug efflux pump, where single-molecule FRET imaging indicated that the three CmeB protomers within the trimer can simultaneously bind and export substrates (10). Each CmeB subunit can undergo uncoordinated conformational transitions and function independently.
FIG 7 (partial legend) ...with protonatable residues highlighted in red for acidic residues (D405 and E939) and blue for basic residues (K482 and K984). Pore-lining residues are highlighted in green for polar residues and brown for hydrophobic residues. Different subdomains are colored in purple (PC2), blue (PC1), orange (DC), yellow (DN), green (PN1), and red (PN2). (B) A path of proton permeation across the membrane via a water wire from basic residues (blue) to acidic residues (yellow) based on the "extrusion" state of a protomer from the EEE trimer. The structure displays a solvated pore after 20-ns equilibration with Cα restrained. Putative hydrogen bonds are shown as black dashes with distances in angstroms. (C) Calculated pKa values of the four acidic and basic residues in the transmembrane region of the protein from the last 100 ns of the 450-ns simulation in the trimeric CusA pump with and without Cu(I). Data are shown for three repeats of the three subunits. Error bars display the standard errors of the means (n = 9). (D) Calculated number of water molecules involved in the water wire from the last 100 ns of the 450-ns simulation of trimeric CusA with (blue) and without (yellow) Cu(I). Data are shown for three repeats of the three subunits only in the subunit when water molecules are present in the pore. Error bars display the standard errors of the means (n = 8).
We believe that CusA is capable of picking up metal ions from both the periplasm (via the periplasmic cleft) and cytoplasm (via the three methionine pairs at the transmembrane region). As soon as the metal ion arrives at the three-methionine binding site (M573-M623-M672) deep inside the periplasmic cleft of CusA, this bound metal ion could then be released to the nearest methionine pair (M271-M755) directly above the three-methionine binding site. Subsequently, the ion could exit the CusA pump via the extrusion channel and eventually reach the CusB and CusC channels for final extrusion from the bacterial cell.
Single-molecule FRET imaging of CmeB efflux revealed that there are at least four distinct conformational states of the CmeB protomer transitioning within the substrate transport cycle (10). Previously, X-ray crystallography discovered that the CusA protomers take a "resting" state in the absence of Cu(I) or Ag(I) ions, where the three periplasmic clefts are closed. However, these protomers acquire a "binding" conformation with an open periplasmic cleft in the presence of Cu(I) or Ag(I) (13). In the present work, our cryo-EM study allowed us to observe that the CusA protomers are capable of forming an "extrusion" state with the periplasmic cleft closed and a "binding" state with the periplasmic cleft open in the presence of Cu(I). Based on our findings, it is likely that the CusA pump, belonging to the HME-RND family, may go through a simpler transport cycle when compared to that of the CmeB HAE-RND pump. This may be because only one metal ion binding site has been observed within the periplasmic cleft of CusA, whereas multiple binding sites, such as at the entrance, proximal, and distal drug binding sites, have been seen with the HAE-RND efflux pumps (3,5,21). Indeed, the secondary structural elements within the periplasmic clefts of HME-RND and HAE-RND efflux pumps are very distinct from each other. For example, deep inside the clefts of AcrB (21), CmeB (10), MtrD (9), and AdeB (11), there are two conserved flexible loops which are functionally important for these pumps. In each of these multidrug efflux pumps, the F-loop forms part of the proximal drug binding site and connects this proximal site to the cleft entrance, whereas the G-loop compartmentalizes the proximal and distal drug binding sites. In CusA, there is no G-loop in the structure. The space that is normally occupied by the G-loop of a HAE-RND pump becomes a free cavity in CusA. 
In addition, residues corresponding to the F-loop of a HAE-RND pump assemble to form the horizontal helix, which is critical for Cu(I) binding in CusA.
In the absence of Cu(I), the CusA protomers prefer the "resting" conformation, where the three periplasmic clefts are closed within the trimer. In the presence of Cu(I), we observed only the "binding" state with the periplasmic cleft open and the "extrusion" state with the periplasmic cleft closed. It is possible that the CusA pump can easily continue to advance the transport cycle from the "binding" to "extrusion" conformations by coupling with proton transfer via the proton relay network and the proton wire. Our data allow us to propose a simple model for the transport mechanism of the CusA HME-RND pump (Fig. 8), where the CusA protomers can independently and uncoordinatedly function to export metal ions by progressing from the "binding" state to the "extrusion" state within the transport cycle.
It should be noted that CusA works with CusB and CusC to form the CusA₃-CusB₆-CusC₃ tripartite efflux assemblage (15) to export Cu(I) and Ag(I) ions, and the contacts between the CusA pump and CusB adaptor could influence the states of the CusA monomers, presumably tuning the efflux system to become more efficient. Indeed, a stopped-flow assay suggested that the reconstituted CusA-CusB proteoliposomes are two times more active for metal transport than proteoliposomes containing CusA only (22).
MATERIALS AND METHODS
Expression and purification of CusA. The cusA gene, encoding the CusA heavy-metal efflux pump, from E. coli was cloned into the pET15b expression vector in frame with a 6×His tag at the C terminus to generate the pET15bXcusA plasmid. The CusA protein was overexpressed in E. coli BL21(DE3)ΔacrB/pET15bXcusA cells, which harbor a deletion in the chromosomal acrB gene. Cells were grown in 6 liters of LB medium with 100 µg/ml ampicillin at 37°C. When the optical density at 600 nm (OD600) reached 0.4, the culture was treated with 0.2 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) to induce CusA expression. Cells were then harvested within 3 h of induction. The collected bacteria were resuspended in low-salt buffer containing 20 mM HEPES-NaOH (pH 7.0), 10% glycerol, and 1 mM phenylmethanesulfonyl fluoride (PMSF) and then disrupted with a French pressure cell. The membrane fraction was collected and washed twice with 20 mM HEPES-NaOH buffer (pH 7.0) containing 1 mM PMSF. The membrane protein was then solubilized in 1% (wt/vol) Cymal-6. Insoluble material was removed by ultracentrifugation at 100,000 × g. The extracted protein was then purified with a Ni²⁺-affinity column. The purity of the CusA protein (~95%) was judged using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) stained with Coomassie brilliant blue. The purified protein was dialyzed against 20 mM HEPES-NaOH (pH 7.0) and concentrated to 12.6 mg/ml in a buffer containing 20 mM HEPES-NaOH (pH 7.0) and 0.05% Cymal-6.
Nanodisc preparation. To assemble CusA into nanodiscs, a mixture containing 20 µM CusA, 45 µM membrane scaffold protein (MSP1E3D1), and 930 µM E. coli total extract lipid was incubated for 15 min at room temperature. After the incubation, 1 mg/ml prewashed Bio-Beads (Bio-Rad) were added. The resultant mixture was incubated for 1 h on ice, followed by overnight incubation at 4°C. The protein-nanodisc solution was filtered through 0.22-µm nitrocellulose filter tubes to remove the Bio-Beads. To separate free nanodiscs from CusA-loaded nanodiscs, the filtered protein-nanodisc solution was purified using a Superose 6 column (GE Healthcare) equilibrated with 20 mM HEPES-NaOH (pH 7.0), and fractions corresponding to the size of the trimeric CusA-nanodisc complex were collected for cryo-EM analysis.
Cryo-EM sample preparation and data collection. The trimeric CusA nanodisc sample was concentrated to 0.7 mg/ml (2 µM) and incubated with 20 µM Cu(I). The sample was then applied to glow-discharged holey carbon grids (Quantifoil Cu R1.2/1.3, 300 mesh), blotted for 2 s, and then plunge-frozen in liquid ethane using a Vitrobot (Thermo Fisher). The grids were then transferred into cartridges. The images were recorded at 1- to 2.5-µm defocus on a K2 Summit direct electron detector (Gatan) with superresolution mode at nominal 81,000 (81K) magnification, corresponding to a sampling interval of 1.08 Å/pixel (superresolution, 0.55 Å/pixel). Each micrograph was exposed for 7.7 s with a 5.40 e⁻/s/physical pixel dose rate (total specimen dose, 50 e⁻/Å²), and 40 frames were captured per specimen area using Latitude.
Cryo-EM data processing. The image stacks in the superresolution model were aligned using cryoSPARC (1). The contrast transfer function (CTF) parameters of the micrographs were determined using Gctf (2). After manual inspection and sorting to discard poor images, ~1,000 particles were manually picked to generate templates for automatic picking. Initially, 1,769,806 particles were selected after autopicking in cryoSPARC (1). Several iterative rounds of two-dimensional (2D) classification were performed to remove false picks and classes with unclear features, ice contamination, or carbon. The resulting 549,454 particles were further processed with local motion correction using cryoSPARC and local CTF reestimation by Gctf and used to generate a reference-free ab initio three-dimensional (3D) reconstruction. We applied a mask around the CusA complex using standard automasking in RELION and generated a mask around the periplasmic cleft of a protomer to use for focused 3D classification. The resulting 3D classes were subjected to 3D reconstruction using an in-house script. For trimeric CusA with one "binding" and two "extrusion" protomers, 75,703 particles were chosen for nonuniform refinement followed by local focused refinement using cryoSPARC, resulting in a 2.82-Å global resolution map. For trimeric CusA with two "binding" protomers and one "extrusion" protomer, 43,395 particles were chosen for nonuniform refinement followed by local focused refinement using cryoSPARC, resulting in a 3.02-Å global resolution map. For the trimeric pump with three "binding" protomers, 13,304 particles were chosen for nonuniform refinement followed by local focused refinement using cryoSPARC, resulting in a 3.40-Å global resolution map.
FIG 8 Proposed model of heavy-metal efflux mechanism. During heavy-metal export, each protomer of the trimeric CusA pump autonomously undergoes a sequence of conformational transitions. This schematic diagram indicates that each protomer within the CusA trimer is able to independently go through conformational transitions, leading to the extrusion of metal ions (B, "binding" state; E, "extrusion" state).
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. MOVIE S1, MP4 file, 7.4 MB.
Investigation of Voltage Stability in Different Operating Conditions
Received/Geliş: 06.08.2019 Accepted/Kabul: 10.10.2019 Abstract: In our developing world, the need for electric energy is increasing. To meet this demand, new power transmission systems as well as new sources are needed. However, the cost of building new power systems makes efficient, stable, and reliable operation of the existing system essential. Therefore, how existing power systems behave under extraordinary conditions needs to be examined and understood. In this study, the IEEE 6-bus power system is considered. Stability analysis of the system was performed by creating various operating conditions in this power system. The simulation work was carried out with the Power Systems Analysis Program (PSAT).
Introduction
One of the most important problems experienced in electric power systems is providing reliable and continuous energy to consumers. With the development of technology, the need for electrical energy increases every day. Because production and consumption centers are far from each other, energy must travel along long transmission lines. This has brought certain obligations and problems to power systems. Problems caused by losses along transmission lines and by the propagation of faults are also among the subjects studied by researchers. One of the important problems of long-distance energy transmission is voltage stability [1]. The instability caused by the distance between energy production centers and consumption points is expressed as voltage instability. This voltage instability is directly related to the maximum power-carrying capacities of energy transmission lines [2]. The graphs in which voltage stability is most easily observed are the P-V curves obtained from the load bus. The power drawn from the load bus increases the losses on the line and the voltage drop along the line. The voltage at the load bus must not fall below a certain value for the consumer. This value is expressed as the critical bus voltage value, and the active power consumed at this point is expressed as the critical power value. As the voltage value decreases, the operation of the system becomes more difficult. In this case, it is understood that voltage stability is one of the main problems of power systems [3]. Decreasing the voltage values below the critical values disturbs voltage stability. As a result, transmission lines, generators, and loads can be disabled [4]. Voltage instability, or in its more advanced form voltage collapse, is considered to be a dynamic phenomenon [5]. Despite the dynamic nature of voltage stability, many of its analyses are made using static analysis methods.
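The P-V curve behaviour described above can be reproduced for the simplest possible case: a two-bus system in which a source E behind a series reactance X feeds a load P + jQ. The receiving-end voltage satisfies V⁴ + (2QX − E²)V² + X²(P² + Q²) = 0, and the point where the discriminant vanishes is the nose (critical) point. The per-unit values below are illustrative assumptions, not the IEEE 6-bus data of this study.

```python
import numpy as np

def pv_curve(E=1.0, X=0.5, q=0.0, points=50):
    """Upper branch of the P-V nose curve for a two-bus system: source E
    behind reactance X (per unit) feeding load P + jQ. Solves
    V^4 + (2*q*X - E**2)*V**2 + X**2*(P**2 + q**2) = 0 for V**2 and keeps
    the high-voltage root; a negative discriminant means the load exceeds
    the maximum loadability (the nose of the curve)."""
    curve = []
    for p in np.linspace(0.0, 2.0, points):
        b = 2.0 * q * X - E**2
        disc = b * b - 4.0 * X**2 * (p**2 + q**2)
        if disc < 0:            # past the nose: no real solution
            break
        v2 = (-b + np.sqrt(disc)) / 2.0
        curve.append((p, np.sqrt(v2)))
    return curve

curve = pv_curve()
p_crit, v_crit = curve[-1]   # last solvable point approximates the nose
print(round(p_crit, 2), round(v_crit, 2))  # 0.98 0.78
```

For this unity-power-factor case the theoretical maximum is P_max = E²/(2X) = 1.0 p.u. at V = E/√2 ≈ 0.707 p.u., which the sampled curve approaches from below.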
The problem of voltage stability occurs when power systems are overloaded, experience faults, or lack adequate reactive power. This stability can be analyzed through the generation, transmission, and consumption of reactive power. Although the problem may arise in one region, keeping the voltage at a certain value concerns the whole power system [6].
This study has been applied to the IEEE 6-bus test system, and various results have been obtained. In the analyses, the power coefficients of the load, the length of the transmission line, the variation of the supply voltages, and the effects of shunt compensation on the system were investigated. The simulations were carried out with the Power Systems Analysis Program (PSAT). In the study, power systems were created under different operating conditions. For each case, the critical values that determine the operating limits of the system were obtained by performing the load flow and the continuous load flow.
Load Flow and Continuous Load Flow
Load flow analysis provides information about the current operating state of the power system. As a result of the load flow, it is possible to determine the voltage amplitude and angle values of all the buses, the active and reactive powers flowing on the transmission lines, and the losses on the lines. The power expressions used in power flow studies are nonlinear equations. Therefore, two different approaches are used to solve these equations: the Gauss-Seidel and the Newton-Raphson algorithms [7]. In the Newton-Raphson algorithm used for load flow analysis, a set of nonlinear mathematical equations f(x) = 0 is expressed by Eq. (1). The solution for the variable vector x of this equation set is sought, and the Newton-Raphson iteration is used to obtain it.
In the Newton-Raphson update given in Eq. (2), x_(k+1) = x_k − J⁻¹ f(x_k), J is the Jacobian matrix and J⁻¹ its inverse. When this form is applied to the load flow, the x vector consists of the bus voltage angles and magnitudes, and f(x) of the active and reactive power mismatches.
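The update in Eq. (2) can be demonstrated on a minimal example. The following is an illustration only, not the PSAT implementation: a slack bus at 1.0 pu feeds a single load bus over a lossless line, and the reactance and load values are assumed toy data.

```python
import numpy as np

def mismatches(x, X=0.1, P_load=1.0, Q_load=0.5):
    """Active/reactive power mismatches at the load bus.

    x = [theta, V]: angle (rad) and magnitude (pu) of the load bus;
    the slack bus is fixed at 1.0 pu, 0 rad; lossless line of reactance X.
    """
    theta, V = x
    p_inj = (V * 1.0 / X) * np.sin(theta)            # P injected into the line
    q_inj = (V * V - V * 1.0 * np.cos(theta)) / X    # Q injected into the line
    # A load drawing (P_load, Q_load) corresponds to injections of -P, -Q:
    return np.array([p_inj + P_load, q_inj + Q_load])

def newton_raphson(f, x0, tol=1e-10, max_iter=30):
    """Generic NR solver, x_{k+1} = x_k - J^{-1} f(x_k), finite-difference J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            return x, True
        J = np.empty((x.size, x.size))
        h = 1e-7
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h       # numerical Jacobian column
        x = x - np.linalg.solve(J, fx)       # the Eq. (2) update
    return x, False

solution, converged = newton_raphson(mismatches, [0.0, 1.0])  # flat start
theta, V = solution
```

From the flat start the iteration settles on the high-voltage solution of the load bus, with both power mismatches driven to zero.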
These load-flow operations are repeated with incremental load increases until the Newton-Raphson algorithm diverges. The reason for the divergence is that the solution is approaching the critical voltage point [8]. The point at which divergence occurs gives the critical values for the system or for that bus: the voltage is expressed as the critical bus voltage, the angle as the critical bus angle, and the power as the critical bus power. In other words, the critical values for the voltage stability of the bus are determined.
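For a toy radial system the load flow has a closed-form solution, so the point where Newton-Raphson would start to diverge corresponds to the discriminant of that solution turning negative. The following sketch (assumed per-unit data, not the test-system values) steps the load up to the critical point exactly as described above:

```python
import math

def load_bus_voltage(lam, P0=1.0, Q0=0.5, X=0.1):
    """Upper load-flow solution of a slack -> load bus lossless line.

    The load is scaled as lam*(P0 + jQ0). Returns V in pu, or None when
    no real solution exists -- the point where Newton-Raphson diverges.
    """
    b = 2.0 * lam * Q0 * X - 1.0
    c = (lam * X) ** 2 * (P0 ** 2 + Q0 ** 2)
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # past the nose of the P-V curve
    u = (-b + math.sqrt(disc)) / 2.0     # upper (stable) root of V^2
    return math.sqrt(u)

# Increase the loading parameter step by step until the load flow stops
# being solvable; the last solvable point approximates the critical values.
lam, step, curve = 0.0, 0.001, []
while load_bus_voltage(lam + step) is not None:
    lam += step
    curve.append((lam, load_bus_voltage(lam)))

lam_crit, V_crit = curve[-1]
```

The recorded `(lam, V)` pairs trace the upper branch of the P-V curve down to its nose, where the critical bus voltage and critical loading are read off.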
Continuous load flow analysis is an algorithm that iteratively applies predictor and corrector functions. The basic principle behind the continuous load flow technique is the predictor-corrector step. As shown in Fig. 1, the predictor step is taken along the tangential direction at the current operating point, and a plane perpendicular to that tangential direction is used for the corrector [9]. The load increment (step length) is assumed constant while these operations are performed. The method also has the ability to remain well conditioned under the adverse conditions that would result from the singularity of the system equations near the critical point.
Figure 1. Predictor and corrector steps on the P-V curve
The graphs in which voltage stability is most easily observed are the P-V curves obtained at the load bus. The voltage at the nose of the curve is expressed as the critical bus voltage value, and the active power at that moment as the critical power value. As the voltage value decreases, the system becomes more difficult to operate [10]. Voltage instability, and the events resulting from it, form a dynamic process; nevertheless, voltage stability is investigated with static analysis methods as well as through dynamic events [11]. The relationship between the voltage, the loading parameter (V-λ), and the active and reactive power values of the bus is expressed in Eq. (4).
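The scaling in Eq. (4) is assumed here to take the common continuation-power-flow form, consistent with the description below of P_L0 and Q_L0 as initial values scaled by the loading parameter:

```latex
P_{L} = P_{L0}\,(1 + \lambda), \qquad
Q_{L} = Q_{L0}\,(1 + \lambda)
```

At λ = 0 the base case is recovered, and the largest λ for which the load flow remains solvable gives the maximum loading.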
The P_L0 and Q_L0 values in the equations are the initial active and reactive power values, and P_L and Q_L are the active and reactive power values of the load. λ is the loading parameter, whose largest value is the maximum load parameter. In order to establish the relationship between the voltage and the maximum load parameter (V-λ), a continuous load flow must be performed in the power system.
Simulation Study
This study has been applied to the 6-bus test system in Fig. 2. The total load of the system is 280 MW and 190 MVAr, and these powers are supplied by three generators. There are also three load buses in the system. In the simulation, inductive, ohmic, and capacitive loads are connected to the load buses separately. In another case, the line length of the entire system is changed. In yet another operating state, the line input voltages are changed. Finally, shunt compensation is connected to the load buses. The critical values of the power system were investigated by performing the load flow and the continuous load flow in each operating state of the system. The simulation was carried out with the Power System Analysis Toolbox (PSAT) [12].
Effect on Critical Values in Power System
When determining the stability limits or critical values of a power system, the effects of certain magnitudes in the power equations can be seen clearly. These effects can be gathered into the main groups of power factor, line length, line input voltage, and shunt compensation [1], and they should be examined in detail in terms of voltage stability. In this study, the effects of these different cases on the critical values of the power system were investigated through P-V curves obtained with the continuous load flow.
-Effect of Power Factor
In the 6-bus system, the normal and critical values of the power system are obtained with inductive, ohmic, and capacitive loads connected. The voltage amplitude values of the load buses are found by gradually increasing the power of the load bus while the phase angle of the load is held constant at φ = 34°, φ = 0°, or φ = −34°, i.e., the power factor is kept constant at 0.83 lagging, 1.0, or 0.83 leading. The P-V curves are obtained as these values are plotted. The values obtained from the load flow are given in Table 1, and the values obtained from the continuous load flow in Table 2.
It is seen that the voltage values of the load buses obtained from the load flow are within the limit values. The active and reactive losses are similar; however, the negative values of the reactive losses show that the capacitive effect of the line is greater than its inductive losses. Performing the continuous load flow with the inductive, ohmic, and capacitive loads connected to the power system yields the critical values. When these values are plotted on P-V curves, the curves in Figure 3 are obtained.
Figure 3. P-V Curves Obtained at Different Loads
In the P-V curves obtained from the continuous load flow, the critical voltage values of all load types were found to be similar, but the maximum load capacity with the ohmic load was observed to be higher than with the other loads. In terms of critical voltage, for all load types the 4th bus is the most stable bus at 0.5 pu, and the 6th bus the most unstable at 0.8 pu.
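For an idealized lossless radial line (source voltage E, series reactance X), the nose-point values have a closed form, P_crit = E² cos φ / (2X(1+sin φ)) and V_crit = E / √(2(1+sin φ)). The sketch below uses assumed toy values, not the 6-bus data; note that in this idealized model a leading power factor raises the loadability limit, whereas in a meshed system such as the 6-bus network the ranking can differ, as the results above show.

```python
import math

E, X = 1.0, 0.1   # source voltage (pu) and line reactance (pu): assumed values

def critical_point(phi_deg):
    """Nose-point power and voltage of a lossless radial line feeding a
    constant-power-factor load; phi > 0 lagging, phi < 0 leading."""
    phi = math.radians(phi_deg)
    p_crit = E * E * math.cos(phi) / (2.0 * X * (1.0 + math.sin(phi)))
    v_crit = E / math.sqrt(2.0 * (1.0 + math.sin(phi)))
    return p_crit, v_crit

p_ind, v_ind = critical_point(+34.0)   # inductive load, pf 0.83 lagging
p_ohm, v_ohm = critical_point(0.0)     # ohmic load, unity pf
p_cap, v_cap = critical_point(-34.0)   # capacitive load, pf 0.83 leading
```

At unity power factor the familiar values P_crit = E²/(2X) and V_crit = E/√2 are recovered.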
-Effect of Line Length
It is not possible to change the critical values of the system later by changing the line length. For this reason, the effect of the line length on the critical values must be examined at the design stage of the power system, especially during route selection. Since the ohmic resistance and inductive reactance of the transmission line are given in ohm/km and its susceptance in S/km (the conductance being neglected), changing the length of the transmission line causes the parameters of the line to change. Depending on the change of these parameters, the P-V curve, and therefore the critical values of the power system, also change. The voltage, angle, loss, and maximum load values obtained from the load flow and the continuous load flow for various line lengths are given in Tables 3 and 4. While the voltage values of the short and medium-length lines remain within normal limits as a result of the load flow, voltages of 0.8-0.7 pu occur on the long transmission line because of its active losses. The losses increase in proportion to the length. In the continuous load flow, the short line can be loaded the most, while the long transmission line can be loaded the least. Here, the system operates at a constant power factor of cos φ = 0.83 (inductive), and the P-V curves of the three lines of different length are shown in Figure 4.
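Since the series reactance grows with length, the idealized loadability limit of a radial line falls roughly as 1/length. A sketch with made-up per-km data (charging susceptance and resistance neglected, as for the conductance above):

```python
X_PER_KM = 0.4e-3   # series reactance per km in pu (made-up value)
E = 1.0             # line input voltage in pu

def p_max_unity_pf(length_km):
    """Idealized loadability limit E^2/(2X) of a lossless radial line
    whose series reactance grows linearly with length."""
    X = X_PER_KM * length_km
    return E * E / (2.0 * X)

p_short = p_max_unity_pf(80.0)
p_medium = p_max_unity_pf(250.0)
p_long = p_max_unity_pf(500.0)
```

Doubling the length halves the transferable power in this model, which matches the qualitative finding that the short line can be loaded the most.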
As seen from the results obtained, the P-V curves reveal an unfavorable situation for voltage stability as the line length increases: the amount of power that can be transported decreases as the transmission line is extended. Shorter energy transmission lines are therefore more stable in terms of voltage stability.

-Effect of Line Input Voltage

The values obtained for different line input voltages are given in Tables 5 and 6. If, as a result of voltage instability, there is a drop in voltage in the system for any reason, the magnitude of the line input voltage can be increased by raising the line-end voltage. In this case, if it is assumed that the active power of the line is not changed, the voltage of the load bus at the end of the line will be further reduced, as seen from the P-V curve in Figure 5. This leads to the conclusion that an under-load tap-changer transformer, in trying to raise the voltage when it drops, causes the line-end voltage to decrease further. When the power system is exposed to a fast-changing disruptive effect, the slower response of the tap changers is insufficient to maintain voltage stability [13,14]. It is also seen from the P-V curves given in Figure 5 that an increase in the line input voltage causes the critical values to increase.
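The same idealized radial-line model makes the effect of the line input voltage explicit: the critical power scales with E² and the critical voltage with E. A sketch with assumed values:

```python
import math

X = 0.1   # line reactance in pu (assumed value)

def critical_values(E):
    """Nose-point power and voltage at unity power factor: E^2/(2X), E/sqrt(2)."""
    return E * E / (2.0 * X), E / math.sqrt(2.0)

p_low, v_low = critical_values(1.00)
p_high, v_high = critical_values(1.05)
```

A 5% rise in the input voltage raises the critical power by about 10% (1.05²), consistent with the observation that increasing the line input voltage increases the critical values.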
-Effect of Shunt Compensation
Where shunt compensation is not considered in a power system, the voltage at the end of the unloaded transmission line is at its steady-state maximum. If there is not sufficient compensation, the capacitive currents flowing through the line and the system will cause overvoltages in the devices connected to the system. Therefore, shunt reactors should be installed at appropriate locations. These reactors are usually connected directly between phase and neutral at the end of the transmission line [15]. The shunt reactors reduce the shunt admittance constant y of the transmission line by the shunt compensation ratio.
The shunt compensation ratio (K_d) is expressed as a percentage. The variation of the P-V curves for different shunt compensation percentages is shown in Tables 7 and 8 and in Figure 6. According to these results, although shunt compensation on the transmission line prevents the voltage rise at the end of the line, it is observed that the voltage stability limits are reduced. It is also determined that the power that can be drawn from the system increases when shunt compensation is performed.
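For a nominal-π lossless line, the open-circuit (Ferranti) voltage rise, and its reduction when the line's charging susceptance is scaled down by the compensation ratio, can be sketched with assumed per-unit values:

```python
def ferranti_rise(Kd, X=0.2, B=0.8):
    """Open-circuit receiving-end voltage (pu, sending end at 1.0 pu) of a
    nominal-pi lossless line whose shunt susceptance B is reduced by the
    shunt compensation ratio Kd (0 = none, 1 = full). X, B are assumed values.
    """
    B_eff = B * (1.0 - Kd)             # charging reduced by the reactors
    return 1.0 / (1.0 - X * B_eff / 2.0)

v_none = ferranti_rise(0.0)
v_half = ferranti_rise(0.5)
v_full = ferranti_rise(1.0)
```

With full compensation the open-circuit overvoltage disappears entirely, illustrating how the shunt reactors hold down the voltage at the end of the line.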
Results
In this study, the 6-bus test system was run separately under different operating conditions: by varying the power factor of the load, by changing the length of the transmission line, by changing the values of the line input voltage, and by applying shunt compensation to the load buses. For each case, a Newton-Raphson (NR) load flow was performed to determine the normal operating voltage and load values of the system. Then the continuous load flow was carried out to reach the point where the solutions of the power-flow equations become singular, and the critical bus values of the system were obtained.
As a result of the analysis, the voltage and angle values of the load buses were found to be within the stability limits of the system under all conditions during normal operation. Among the different loads, the maximum power can be carried with the ohmic load, the 4th bus is the most stable bus, and the 6th bus is the most unstable bus.
Depending on the line length, maximum power transfer can be performed with the short line and minimum power transfer with the long line. In addition, the short line was the most stable in terms of voltage stability.
In terms of line input voltage, although the amount of power that can be transported increases as the voltage increases, the voltage stability deteriorates. When shunt compensation was examined, it was observed that the voltage amplitude values decreased as the amount of shunt compensation increased.
Hereditary Non-Polyposis Colorectal Cancer (HNPCC): Phenotype–Genotype Correlation Between Patients With and Without Identified Mutation
Affected members of hereditary non-polyposis colorectal cancer (HNPCC) families develop colorectal cancer at an early age (mean 45yr) and frequently get extracolonic cancers particularly in the uterus, urinary tract, and small intestine. They have a high risk of developing more than one primary colorectal cancer if not treated with subtotal colectomy at first operation and have more frequent right-sided colon cancers and less frequent rectum cancers, compared to patients with sporadic colorectal cancer. We have screened 31 families fulfilling the Amsterdam criteria and 54 families with a colorectal cancer clustering but not fulfilling the Amsterdam criteria for mutations in MLH1 and MSH2 by direct sequencing, and detected a mutation in 61% of the Amsterdam positive families but only in 15% of the Amsterdam negative families. Genotype–phenotype correlation was compared between 141 affected individuals with an identified mutation and 78 affected individuals from Amsterdam positive families in which a mutation was not identifiable in MLH1 or MSH2 . In the affected persons with identified mutations, all expected phenotypic traits were represented, whereas affected persons in whom no mutation was detected fell into two clearly distinguishable subgroups. The minor subgroup, in which no mutation was identified, generally had the same characteristics as found in affected persons with identified mutations. The major subgroup differed significantly in clinical features and exhibited phenotypic traits similar to those found in late-onset families, including abundance of rectal cancer, few HNPCC-related cancers, lower frequency of multiple colorectal cancers, and later age at onset. Finally, for six missense mutations and one single codon deletion, the pathogenic potential was evaluated by domain localization, lod score calculation or segregation analysis when possible, and mutation-induced biochemical change. 
The results indicate that the majority of missense mutations are pathogenic, although further characterization by functional assays is necessary before implementation in predictive testing programs. Hum Mutat 20:20–27, 2002. © 2002 Wiley-Liss, Inc.
INTRODUCTION
Colorectal cancer, which is one of the most frequently diagnosed cancers in Western societies [Storm et al., 1993], clusters in some families, indicating a hereditary pathogenesis. In the absence of specific biological manifestations to distinguish the sporadic cases from the hereditary forms the so-called Amsterdam criteria were internationally agreed on: 1) at least three family members with verified colorectal cancer in at least two generations, 2) one of the affected persons has to be a first-degree relative to two other colorectal cancer patients, 3) at least one should have had the diagnosis before the age of 50, and 4) familial adenomatous polyposis (FAP; MIM# 175100) should be excluded as the cause of colorectal cancer [Vasen et al., 1991].
The Amsterdam criteria opened the way for international collaborative studies. Clinical characteristics of the affected individuals have been described as a significant overrepresentation of cancers of the uterus, the urinary tract, and the small intestine. More than one colorectal cancer is often found at first operation, and the risk of developing a new primary tumor is highly elevated in HNPCC (MIM# 114500) patients. Colorectal cancers in hereditary cases also differ from the sporadic forms in being more frequently localized to the right colon and less frequently in the rectum. The mean age at diagnosis in HNPCC families is expected to be relatively low considering the Amsterdam criteria, but the fact that the median age is as low as 40-45 yr, as compared to 70 yr for sporadic cases, indicates that more than one family member is diagnosed at a young age [Voskuil et al., 1997].
When it became evident that clustering of colorectal cancer also occurred in families with a higher mean age at diagnosis, some families were described as late onset HNPCC families if they fulfilled the Amsterdam criteria (see above) except for one person being younger than 50 at first diagnosis [Vasen et al., 1994].
In approximately half the families fulfilling the Amsterdam criteria a mutation is identified in MLH1 (MIM# 120436) or MSH2 (MIM# 120435) [Liu et al., 1996; Wijnen et al., 1997], which are two of several genes involved in DNA mismatch repair [Fishel et al., 1993; Bronner et al., 1994], but a phenotype-genotype correlation remains to be described convincingly. Disease-causing mutations are only rarely identified in any of the other mismatch repair genes [Lynch and de la Chapelle, 1999], and the molecular genetic basis of the remaining cases of familial clustering of colorectal cancer is still unclear.
In the present study we analyze phenotypic differences in affected persons in whom a disease-causing mutation is identified versus patients in whom a mutation is not identified in MLH1 or MSH2, aiming at evaluating the likelihood of different genetic bases of HNPCC. Finally, we report sites of the identified mutations and characterize the pathogenic potential in missense alterations.
MATERIAL AND METHODS

Families
DNA from 31 affected families fulfilling the Amsterdam criteria (AMST+ families) and 54 families not fulfilling these criteria (AMST-families) were screened for mutations in MLH1 and MSH2. The 54 AMST-families comprised a variety of phenotypically different characteristics, so only the families in whom a mutation was identified were included in the phenotype-genotype analysis in the present study.
Patients
Data concerning age at onset, number and types of cancers, and cause of death was extracted from the Danish HNPCC register on 198 affected persons from 31 HNPCC AMST+ families and 21 persons from eight families in which a mutation was identified though the families were AMST-. The clinical data was compared for differences between the 141 affected persons from families with identified mutation and the 78 affected persons in whose families the disease causing mutation was not identifiable in MLH1 or MSH2 by direct sequencing.
Molecular Genetics
DNA isolation. Genomic DNA was extracted from whole blood by use of the PureGene TM DNA isolation kit (Gentra Systems, Minneapolis, MN), according to the manufacturer's instructions.
Mutation analysis. The MLH1 and MSH2 genes were screened for mutations by PCR amplification of individual exons, followed by direct sequencing in both orientations. M13-tailed primers covering the exon-intron boundaries were employed, as described in previous studies [Kolodner et al., 1995]. Direct sequencing was performed on an ABI 377 or 310 DNA sequencer (Applied Biosystems, Foster City, CA) by use of dye-terminator or dye-primer cycle sequencing. All mutations were confirmed by analysis of two independently collected blood samples. When possible, other affected family members were tested for the same mutation, which was present in all cases.
Microsatellite instability. DNA was isolated from paraffin-embedded colorectal tumor tissue with DNeasy Kit (Qiagen Inc., Valencia, CA) and analyzed with the following markers: BAT25, BAT26, D2S123, D2S2378, D3S1266, D5S82, D5S346, and D17S250. The tumor was evaluated to demonstrate MSI when instability was found in two or more of these markers [Boland et al., 1998].
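The MSI call described above (instability in two or more of the eight markers) can be written down directly; the marker panel is taken from the text:

```python
# Marker panel from the text; a tumor is called MSI when two or more
# markers show instability (Boland et al., 1998).
MARKERS = ("BAT25", "BAT26", "D2S123", "D2S2378",
           "D3S1266", "D5S82", "D5S346", "D17S250")

def msi_status(unstable_markers):
    """Return 'MSI' when >= 2 panel markers are unstable, else 'MSS'."""
    n = sum(1 for m in set(unstable_markers) if m in MARKERS)
    return "MSI" if n >= 2 else "MSS"
```

For example, a tumor unstable only at BAT25 is called MSS, while one unstable at both BAT25 and BAT26 is called MSI.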
Statistics
Differences in age at onset between the groups with and without mutations were analyzed with an independent-samples t-test, and numbers and types of cancers were tested with the two-sided chi-square test. Survival was described both as life expectancy and as crude survival after the first operation by Kaplan-Meier plots. For life expectancy, the starting point was the date of birth and the date of death was the event, while for survival after the first cancer diagnosis the date of surgery was the starting point and the date of death the event. In both cases, persons alive on February 2, 2000, were censored. Association with survival was tested by a log-rank test. The probability of detecting a mutation in MLH1 or MSH2 was calculated as described by Wijnen et al. [1998a], who found that young age at diagnosis, fulfillment of the Amsterdam criteria, and presence of endometrial cancer were independent factors for mutation detection probability. Lod scores were calculated with MegaBase v2.05 [Fenton et al., 1990] and Linkage 5.10 [Lathrop et al., 1984].
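The group comparisons in this study reduce to 2×2 tables. As a sketch (the event counts below are hypothetical; only the group sizes, 141 MUT+ and 78 MUT− affected persons, come from the text), the odds ratio and a Pearson chi-square statistic can be computed with the standard library alone:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]]: rows = groups, cols = event yes/no."""
    return (a * d) / (b * c)

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the same 2x2 table."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n                 # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts: 30 of 141 MUT+ vs 7 of 78 MUT- with >1 colorectal cancer.
OR = odds_ratio(30, 111, 7, 71)
chi2 = chi_square(30, 111, 7, 71)
significant = chi2 > 3.84   # ~ p < 0.05 at 1 degree of freedom
```

With these made-up counts the odds ratio is about 2.7 and the chi-square statistic exceeds the 5% critical value, the same style of result reported in the paper.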
RESULTS

Mutations
A mutation was identified in 19 of 31 (61%) investigated families fulfilling the Amsterdam criteria and in eight of 54 (15%) families not fulfilling these criteria. Eighteen different mutations (seven in MLH1 and 11 in MSH2) were identified in 27 families. Four of the mutations occurred in more than one family. Twelve of the mutations were novel, while the remaining six have previously been described (Table 1). An MLH1 splice site mutation was found in six apparently unrelated families, but haplotyping revealed that at least five of the families have a mutual ancestor, as previously described [Jäger et al., 1997]. This mutation is referred to as the "founder mutation" in the following. The mutation spectrum comprised two nonsense mutations, five deletions, one insertion, four splice site mutations, and six missense mutations (Table 1 and Fig. 1). In a large kindred (264) the missense mutation P648S in MLH1 segregated with the disease, and a maximum lod score of 4.0 was obtained at θ = 0.
Phenotype-Genotype Correlations
The persons with an identified mutation had a 3.0 odds ratio of having more than one colorectal cancer at some point in their lives as compared to persons without a known mutation (p < 0.01). The risk of acquiring HNPCC-related cancers (endometrium, urinary tract, and small intestine) was more than doubled (odds ratio = 2.4) for persons with an identified mutation (p = 0.052). We have previously demonstrated that affected individuals with the founder mutation have a reduced frequency of extracolonic cancers in comparison with individuals with other mutations [Jäger et al., 1997], and after exclusion of these affected persons (27) from this calculation we found that the risk of developing HNPCC-related cancers was significantly higher in families with an identified mutation: odds ratio 2.9, p = 0.02.
Cancer of the rectum was less frequent in families with an identified mutation; put in other words, the odds ratio was 2.5 for the first cancer being in the colon and not in the rectum for individuals with an identified mutation, p = 0.03.
The age of onset for affected family members whose first cancer was colorectal was 43.5 years for persons with an identified mutation (MUT+) and 49.6 years for persons without a known mutation (MUT−), p < 0.01. When calculating the age at onset irrespective of the type of cancer, the mean age for MUT+ was 45.3 as compared to 49.2 for MUT−, p = 0.05 (Table 2).
There was no difference between the groups of persons with and without identified mutation when evaluating the cause of death, crude survival, or life expectancy (data not shown).
Persons with missense mutations and single codon deletions were phenotypically indistinguishable from persons with other kinds of mutations when analyzing for age at onset, number of colorectal cancers, HNPCC-related cancers, other cancers, and distribution of colon and rectum cancers. Likewise, they did not differ significantly when analyzing for cause of death, survival, and life expectancy (data not shown).
Wijnen et al. [1998a] have described a method to calculate the probability of identifying a mutation in HNPCC families, according to which a low average age at onset, the presence of endometrial cancer, and fulfillment of the Amsterdam criteria increase the probability of mutation identification. In agreement with their results, we found that our families with identified mutations scored significantly higher probabilities than families in which the mutation was not identified (p < 0.01). If screening for mutations had been performed only in families with a mutation identification probability higher than 0.25, it would have resulted in a mutation detection rate of 84% (21 of 25 families), but mutations in six families would have been overlooked by omitting screening of the 14 (36%) families with a mutation detection probability < 0.25 (Table 3). It is noteworthy that in four of these six families with a low probability of mutation identification (below 0.25), only one person from each family had a cancer verified by histology, owing to difficulties in obtaining either valid medical information or extended family information (Fig. 2). For 32 affected persons from the four MUT− families with a high mutation identification probability (higher than 0.25; Table 3), the clinical features were indistinguishable from those described in the 141 affected persons from MUT+ families, while the differences in clinical features were highly significant when comparing with the 46 affected persons from the eight families that had scored < 0.25 in mutation detection probability (Table 2).
Microsatellite instability (MSI) was investigated in the 12 families in which a pathogenic mutation was not identified and could be determined in seven of these. In the remaining five families, consent to investigate tumor material could not be obtained (two families) or the investigation failed, most likely because of degraded DNA retrieved from paraffin-embedded tissue. Still, it was noteworthy that all four investigated tumors from the eight families with a detection probability < 0.25 demonstrated microsatellite stability, while this was the case in only one of three investigated tumors from individuals from families scoring > 0.25.
DISCUSSION
Several phenotypic characteristics in Amsterdam positive families differed when comparing affected persons with and without identified mutations. The families without an identified mutation had a significantly lower frequency of multiple colorectal cancers, as well as a lower frequency of HNPCC-related cancers (uterine, urinary tract, and small intestine). Furthermore, rectum cancer was significantly more often diagnosed, and the age at onset tended to be higher than for persons in whom a mutation was identified (Table 2: all cases).

FIGURE 1. A: A list of six missense mutations and one amino acid deletion identified in HNPCC families. ND: could not be determined, since only DNA from the proband was available. B: Sequence alignment of regions of MSH2 and MLH1 containing amino acids mutated in HNPCC families. "..." designates that no sequence has been aligned. Highlighted residues indicate positions of mutated amino acids. Codon numbers for the human sequences are shown above the sequence alignment. Alignment was performed with ClustalW 1.8 [Higgins et al., 1996].

Eight of the 12 families in whom a mutation was not found are likely, phenotypically though not technically, to belong to the entity of "late-onset families," that is, families fulfilling the international criteria except for the criterion of one affected person being diagnosed before the age of 50 (Table 2: cases with W < 0.25 and MSS). In fact, the average age at onset was 50 or more in these eight families in whom a mutation was not identified, but as it takes just one affected person to be diagnosed before the age of 50, they technically belong to the group of Amsterdam positive families. Previously described characteristics of late-onset families are in concordance with our findings in these families, i.e., many rectum cancers, few HNPCC-associated cancers, and of course absence of mutations in MLH1 or MSH2 [Vasen et al., 1994]. It is highly unlikely that a mismatch repair gene is involved in the tumor genesis in at least four of these families, as MSI could not be demonstrated in tumors from affected members.
The remaining four families in which a mutation was not identified were phenotypically indistinguishable from the families in whom a mutation was identified (Table 2: cases with W > 0.25), as reflected by most tumors demonstrating MSI and a high score in the mutation probability calculations (Table 3). It has been estimated that larger rearrangements in MSH2 cause HNPCC in approximately 5% of families [Wijnen et al., 1998b], and as these mutations often escape detection by direct sequencing, future analysis by Southern blot will reveal the proportion of larger rearrangements in our families. The majority of the identified mutations result in a premature stop codon or exon deletion and can be implemented for predictive testing without further analysis, but this is not the case when dealing with missense mutations and single codon deletions. A lod score of 4.0 was found in one family, a result that does not prove the missense mutation to be pathogenic but facilitates its use in predictive testing in this family, as the substitution can at least be classified as an informative marker, either an intragenic marker in MLH1 or a marker closely linked to a gene close to MLH1. In Figure 1 we describe the characteristics of the single codon deletion and the six missense mutations in order to evaluate their capacity as pathogenic mutations. The A45V substitution is neither situated in a conserved domain nor causing a considerable biochemical change, and although the same mutation was not found in 50 normal alleles, it should be noted that the person in whom it was found was of Inuit origin and the alteration might represent a specific ethnic polymorphism for which we do not have control material. For the remaining mutations with unproven pathogenic potential, it is noticeable that they are all localized in conserved domains and result in major biochemical changes. Further, two of them have been described frequently in the literature (www.nfdht.nl) (Table 2).
Unfortunately, it was not possible to analyze tumors from these patients for microsatellite instability and allelic loss, which would have improved the quality of the evaluation of their pathogenic nature. Future application of in vitro assays might establish these mutations as pathogenic and make their use in predictive testing possible.
Evaluation of the clinical data in the families forms the basis for selecting families in whom screening for a mutation should be performed, and the mutation detection probability equation described by Wijnen et al. [1998a] has not proved to be the ultimate tool in this respect. It is necessary that sufficient data on the families are available, which is not always the case.

Families in which a mutation was identified, although the probability of identifying a mutation in MLH1 or MSH2 was calculated to be lower than 25%: Fam. 182 had a C deletion resulting in a premature stop in codon 16; colorectal cancer was only verified in one person, but skipped generations appear in the family history. It cannot be excluded that families 141, 96, and 221 would be Amsterdam positive if the family history could be traced further back and verified; all three families had missense mutations. Fam. 240 scores low solely because of its high mean age at diagnosis; it is Amsterdam positive, endometrial cancer is present in one person, and a missense mutation was identified. Fam. 339 is not Amsterdam positive, as the central person died from pancreas cancer at age 45; a deletion resulting in a premature stop at codon 662 was identified.
In four of six families with a found mutation and a mutation detection probability below 25%, histological verification was only possible in the proband. This indicates that if extended verification had been possible, it is likely that at least some of them would have proved Amsterdam positive and reached a higher probability score. MSS and MSI status in tumors from affected individuals has previously proved to be a valuable tool for selecting probands for mutation detection in mismatch repair genes [Aaltonen et al., 1998], but tumor tissue is often not available from families referred to the oncogenetic clinics. Two families in which a mutation was found were certain to be Amsterdam negative, at least in the generations from which we could obtain data. In one family (339) the central person died from pancreas cancer and, given time, she might have developed colorectal cancer. In the other family (182) both parents lived to be old without developing colorectal cancer or any other cancer, and this family presents the first skipped generation in our material (Fig. 2). The Amsterdam positive families in whom a mutation was not identified segregate into two well-defined groups: one with a detection probability below 25%, the majority of which are likely to be misclassified late-onset families, and one group with more than 25% (actually more than 40%) probability of mutation detection (Table 3), with clinical features indistinguishable from those of families with a mutation identified in either MLH1 or MSH2. In a minor part of the mutation-negative families, larger rearrangements might be identified in the already analyzed genes or mutations might be found in other mismatch repair genes, but for the remaining majority of families it is likely that genes other than mismatch repair genes, or other mechanisms, are involved.
|
2018-04-03T05:52:46.973Z
|
2002-07-01T00:00:00.000
|
{
"year": 2002,
"sha1": "419a6e52867553a5646b1a0e12d7802c03b73248",
"oa_license": null,
"oa_url": "https://doi.org/10.1002/humu.10083",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "d45557e42f269cb9e4d683326d3b31085de071d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
251351061
|
pes2o/s2orc
|
v3-fos-license
|
Joint effects of polycyclic aromatic hydrocarbons, smoking, and XPC polymorphisms on damage in exon 2 of KRAS gene among young coke oven workers
Genetic polymorphisms may contribute to individual susceptibility to DNA damage induced by environmental exposure. In this study, we evaluated the effects of co-exposure to PAHs, smoking, and XPC polymorphisms, alone or combined, on damage in exons. A total of 288 healthy male coke oven workers were enrolled in this study, and urinary 1-hydroxypyrene (1-OH-Pyr) was measured. Base modifications in exons of the KRAS and BRAF genes, and polymorphisms of XPC, were determined in plasma by real-time PCR. We observed that 1-OH-Pyr was positively related to damage in exon 2 of KRAS (KRAS-2) and in exon 15 of BRAF (BRAF-15). A stratified analysis found that 1-OH-Pyr was significantly associated with KRAS-2 in both smokers and non-smokers, while 1-OH-Pyr was significantly associated with BRAF-15 only in smokers. Additionally, individuals carrying both the rs2228001 G-allele (GG+GT) and the rs3731055 GG homozygote genotype appeared to show a more pronounced effect on KRAS-2. High levels of 1-OH-Pyr were associated with KRAS-2 only in rs2228001 GG+GT genotype carriers and only in rs3731055 GG genotype carriers, and the most severe KRAS-2 was observed among subjects carrying all four of the above risk factors. Our findings indicate that the co-exposure effect of PAHs and smoking could increase the risk of KRAS-2 by a mechanism partly involving XPC polymorphisms.
Introduction
Polycyclic aromatic hydrocarbons (PAHs), produced by sources in the living environment including smoking, vehicle exhaust emissions, and fuel combustion, are a group of important components of air pollution and attract widespread concern in China (1,2). PAHs present in some occupational environments, such as coke production, are also measured and assessed for early impacts on health risks (3)(4)(5). PAHs are a well-known complex mixture of carcinogens with toxicity and mutagenicity. Epidemiological evidence illustrates that urinary 1-hydroxypyrene (1-OH-Pyr) is highly associated with the total concentration of PAH metabolites in both smokers and non-smokers (6), and several lines of evidence suggest that urinary 1-OH-Pyr, used as a measure of total absorbed dose, can be a comprehensive biomarker of exposure to PAHs (7,8). Therefore, urinary 1-OH-Pyr is considered a suitable indicator for evaluating the degree of PAH exposure due to its convenience, accessibility and effectiveness.
It has been demonstrated that PAH exposure can lead to early deleterious alterations of DNA, including oxidative DNA damage (9), double-strand DNA breaks (10), reactive oxygen species generation and oxidative stress (11), and genetic exon damage (12), which may accumulate genotoxic damage, change cell functions, and make people susceptible to mutagenesis and carcinogenic processes. Many studies have also revealed that excessive exposure to PAHs may increase the risk of lung cancer (13,14). In particular, an investigation analyzing the association between lung cancer somatic mutations and occupational exposure in never-smokers showed that patients exposed to PAHs were mostly diagnosed with mutations of the gene v-raf murine sarcoma viral oncogene homolog B1 (BRAF) and the gene kirsten rat sarcoma viral oncogene homolog (KRAS) (15). KRAS is one of the RAS family members and the most frequently mutated of them (16). More importantly, KRAS is the most frequently mutated oncogene in non-small cell lung cancer, and lung cancers activated by KRAS mutation have serious outcomes in both early-stage and advanced metastatic settings (17,18). Further evidence has revealed that KRAS gene mutations in lung tumor patients are related to PAH exposure from smoking and coal combustion (19,20). Another study showed that particulate matter 2.5 aggravates DNA damage and apoptosis, involving upregulation of p-BRAF/BRAF expression (21). However, the association between co-exposure to PAHs and smoking and exon damage in the KRAS or BRAF gene still requires further proof.
Epidemiologic evidence indicates that gene damage can also be regulated by genetic factors. Studies support that single nucleotide polymorphisms (SNPs) of genes can modulate diseases and gene mutation (22,23). SNPs of the xeroderma pigmentosum group C (XPC) gene, which is responsible for global nucleotide excision repair, an important human DNA repair system, have been shown to modulate the level of DNA damage following exposure to PAHs (24). A study revealed the important role of the XPC protein, which may affect inflammation, oxidative stress, and the DNA damage process, in protection against the carcinogenic potential of urban air pollution (25). It has also been reported that genetic polymorphisms of XPC may predict inter-individual variation in DNA damage levels due to exposure (26). In addition, XPC is considered to help repair DNA damage induced by KRAS (27). However, the effect of XPC gene polymorphisms on KRAS and BRAF gene damage is still unknown.
We have previously demonstrated that, after controlling for various confounders, individuals with the FEN1 rs174538 GA+AA genotype show greater effects of urinary 1-OH-Pyr on exon damage in the EGFR gene than those with the rs174538 GG genotype, and a statistically significant interactive effect between rs174538 genotype and urinary 1-OH-Pyr on EGFR exon damage was observed (12). Nevertheless, neither the co-exposure effects of PAHs and smoking on exon damage in KRAS and BRAF nor the modification of these effects by XPC genetic polymorphisms had been investigated. The present analysis of whether the co-exposure effect of PAHs and smoking is involved in increasing the risk of exon damage in the KRAS and BRAF genes, and whether these effects are modulated to some extent by XPC genetic polymorphisms, is complementary to our previously published data.
Study Subjects
As described in our previous study (12), a total of 295 male coke oven workers, aged between 19 and 35 years, from a coking plant in the southern region of China were included in our investigation. Subjects were excluded from the study as follows: a) if they had a prior history of major diseases such as cancer; b) if they had been treated with radiotherapy or chemotherapy, classical DNA-damaging agents, within the past 6 months; and c) if they had less than 3 months of continuous exposure in the workshop. Each participant signed informed consent and filled in an occupational health questionnaire concerning demographic information, occupational history, medical history, and lifestyle, including working years, smoking status and other data. Those who smoked less than 1 cigarette a day were considered non-smokers; otherwise, subjects were considered smokers. Following the face-to-face questionnaire interviews, all participants provided 20 ml of spot urine in 50 ml polyethylene tubes at the end of a work shift and 5 ml of venous blood in disposable ethylenediaminetetraacetic acid anticoagulant tubes, and all samples were stored at −80 °C until laboratory examination. After excluding 2 participants with no available urinary samples and 5 with inadequate plasma sample volume, the remaining 288 male coke oven workers were included in the final analysis.
Measurement of urinary 1-OH-Pyr and urinary creatinine
The concentration of urinary 1-OH-Pyr was detected by gas chromatography-mass spectrometry (GC/MS) as previously described (12). A 3 ml aliquot of urine was mixed with 20 µl of 1-OHP-d9 solution, 1 ml of acetate acid buffer (0.5 M, pH 5.0), and 20 µl of β-glucuronidase/sulfatase (Sigma-Aldrich, Munich, Germany) overnight at 37 °C. 1.5 mg of MgSO4·7H2O was added to saturate the hydrolyzed urinary samples. After extracting twice with 1.5 ml of n-hexane and centrifuging at 300 g for 10 min, the organic extracts were dried under nitrogen, and the residue was mixed with 100 µl of BSTFA and incubated at 90 °C for 45 min. Finally, 1 µl was injected into the GC/MS system (Agilent, Santa Clara, CA). Considering inter-individual variation in urine dilution, we measured the urinary creatinine concentration with an automated clinical chemistry analyzer according to Jaffe's colorimetric method to calibrate urinary 1-OH-Pyr, expressed as micromoles per millimole of creatinine.
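The creatinine calibration described above is a simple ratio; a minimal sketch follows (units assumed here: 1-OH-Pyr in µmol/L, creatinine in mmol/L; the function name is our own).

```python
def creatinine_adjusted(ohp_umol_per_l, creatinine_mmol_per_l):
    """Express urinary 1-OH-Pyr as micromoles per millimole of
    creatinine to correct for inter-individual urine dilution."""
    if creatinine_mmol_per_l <= 0:
        raise ValueError("creatinine concentration must be positive")
    return ohp_umol_per_l / creatinine_mmol_per_l
```

Dividing by creatinine makes samples comparable regardless of how concentrated or dilute each spot urine happens to be.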
Determination of damage index of exon 2 of KRAS and damage index of exon 15 of BRAF
The DNA extraction procedures and the amplification process have been described previously (12). We designed the primers for KRAS-2 and BRAF-15 and used β-actin as the internal reference; the primers are listed as follows: β-actin (forward: CGGGAAATCGTGCGTGACAT; reverse: GAAGGAAGGCTGGAAGAGTG); exon 2 of the KRAS gene (forward: GGCCTGCTGAAAATGACTGAATATAA; reverse: AAAGAATGGTCCTGCACCAGTA); exon 15 of the BRAF gene (forward: TCATGAAGACCTCACAGTAAAAATAGG; reverse: AGCAGCATCTCAGGGCCAAA). We added 10 µl of 2× UltraSYBR mixture, 0.8 µl of primer mixture (containing 0.4 µl of 10-µM forward primer and 0.4 µl of 10-µM reverse primer), and 1 µl of DNA sample into the 96-well plate, and added 8.2 µl of H2O to reach a 20-µl final volume. The amplification was run on a real-time fluorescence quantitative PCR instrument (Roche LightCycler 96), beginning with 95 °C for 600 s and followed by 45 cycles of 95 °C for 10 s, 60 °C for 10 s, and 72 °C for 15 s. After the amplification process, the Ct value was used to express the degree of damage of the gene, as mentioned above. According to the method of Sikorsky et al., the mean modified efficiency of PCR is positively correlated with 2^(Ct1−Ct0), where Ct1 is the Ct of the target gene and Ct0 that of the internal reference gene (28). In this study, we used 2^(Ct1−Ct0) to represent the damage index of the gene.
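The damage index defined above can be computed directly from the paired Ct values; a minimal sketch (function name is our own):

```python
def damage_index(ct_target, ct_reference):
    """Relative damage index 2**(Ct1 - Ct0).

    Lesions in the target amplicon (e.g. KRAS exon 2) delay its
    amplification, raising its Ct relative to the beta-actin
    reference, so the index grows with damage."""
    return 2.0 ** (ct_target - ct_reference)
```

For example, a target amplicon that crosses threshold one cycle later than the reference yields an index of 2, while equal Ct values yield 1 (no relative delay).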
Statistics
We used the Kolmogorov-Smirnov test to examine the normality of continuous variables. The concentrations of creatinine-adjusted urinary 1-OH-Pyr and the values of KRAS-2 and BRAF-15 were natural logarithm (ln) transformed because of their right-skewed distributions. Normally distributed continuous variables are described as mean ± standard deviation (SD), non-normally distributed variables as medians with interquartile range (IQR), and categorical variables as number (percentage). KRAS-2 and BRAF-15 were used as the dependent variables (y) in multiple linear regression models with adjustment for working years (continuous), workplace (low exposure/high exposure), and smoking status (smokers/non-smokers) to estimate the association coefficients (β's) and their 95% confidence intervals (95% CIs) per increment of creatinine-adjusted urinary 1-OH-Pyr. Age was excluded from the multiple linear regression models due to its high correlation with working years (Pearson coefficient r = 0.831; p < 0.001). Additionally, a restricted cubic spline model was employed to estimate the linear and non-linear shape of the associations of KRAS-2 and BRAF-15 with 1-OH-Pyr. Multiple linear regression models with adjustment for working years, smoking status, and workplace were used to evaluate the effects of XPC genotype on DNA damage, with the relative β's and 95% CIs for individuals carrying the rs2228001 GG/GT genotype in combination with the rs3731055 GG genotype, or carrying either the rs2228001 GG/GT genotype or the rs3731055 GG genotype, against individuals carrying the rs2228001 TT genotype in combination with the rs3731055 GA/AA genotype.
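The adjusted-β estimation described above can be sketched with synthetic data using ordinary least squares. This is an illustration only: the data and coefficients below are invented for the example, not the study's data or its SPSS procedure.

```python
import numpy as np

# Synthetic illustration of the adjusted regression: ln(KRAS-2) on
# ln(1-OH-Pyr), with working years, workplace and smoking as covariates.
rng = np.random.default_rng(0)
n = 288
ln_ohp = rng.normal(1.0, 0.5, n)        # ln creatinine-adjusted 1-OH-Pyr
working_years = rng.uniform(1, 15, n)
workplace = rng.integers(0, 2, n)       # 0 = low, 1 = high exposure
smoker = rng.integers(0, 2, n)          # 0 = non-smoker, 1 = smoker
noise = rng.normal(0, 0.1, n)
ln_kras2 = 1.0 + 0.10 * ln_ohp + 0.01 * working_years + noise

# Design matrix: intercept, exposure of interest, then the adjustment set.
X = np.column_stack([np.ones(n), ln_ohp, working_years, workplace, smoker])
beta, *_ = np.linalg.lstsq(X, ln_kras2, rcond=None)
# beta[1] is the change in ln(KRAS-2) per unit increase in ln(1-OH-Pyr),
# adjusted for the covariates (about 0.10 here by construction).
```

Including the covariates as extra columns of the design matrix is what "adjustment" means in the regression models above: beta[1] is estimated while holding the covariates fixed.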
After 288 participants were further classified into three subgroups (T1, T2, and T3 subgroups) by the tertiles of 1-OH-Pyr, we employed the multiple linear regression models with adjustment of working years, workplace, and smoking status to calculate the p-trend values, with the relative β's and 95% CIs in T2 and T3 against T1 as the reference. Additionally, we also categorized all the study subjects into low (less than the 50th percentile of creatinine-adjusted 1-OH-Pyr) and high (above the 50th percentile of 1-OH-Pyr) 1-OH-Pyr subgroups. Hardy-Weinberg equilibrium (HWE) for the two SNPs was tested by a goodness-of-fit χ 2 -test before the analysis. Moreover, the joint effects of dichotomous 1-OH-Pyr (low and high exposure) with smoking status (smokers and non-smokers), XPC rs2228001 (TT, GG+GT) and XPC rs3731055 (GA+AA, GG) on KRAS-2 were further estimated using the multiple linear regression models with adjusting for working years and workplaces.
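The goodness-of-fit χ²-test for Hardy-Weinberg equilibrium mentioned above can be sketched as follows; the statistic (1 degree of freedom for a biallelic SNP) is then compared against a critical value such as 3.84 at α = 0.05.

```python
def hwe_chi2(n_AA, n_Aa, n_aa):
    """Goodness-of-fit chi-square statistic for Hardy-Weinberg
    equilibrium from observed genotype counts (assumes all three
    expected counts are positive)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)      # frequency of allele A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Counts exactly matching the expected p², 2pq, q² proportions give a statistic of 0; an excess of homozygotes inflates it.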
We conducted the restricted cubic spline model using R software (version 3.4.1). The other data analyses were performed with SPSS 18.0 (SPSS Inc., Chicago, IL, USA). A Bonferroni-type correction was used for the multiple comparisons, and p < 0.025 (after Bonferroni correction for two comparisons) was defined as statistically significant. A two-sided p < 0.05 was considered statistically significant for all other analyses.
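The Bonferroni-type correction used here simply divides the significance level by the number of comparisons; a one-line sketch:

```python
def bonferroni_threshold(alpha, m):
    """Per-comparison significance threshold after a Bonferroni-type
    correction for m comparisons."""
    return alpha / m

# Two exon-damage outcomes (KRAS-2 and BRAF-15) were tested, so the
# per-comparison threshold is 0.05 / 2 = 0.025, as used in the text.
threshold = bonferroni_threshold(0.05, 2)
```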
Subject characteristics
The general characteristics and the values of KRAS-2 and BRAF-15 for workers at the different internal exposure levels are shown in Table 1. No differences were observed in the distributions of age, working years, and smoking status among the three groups (all p > 0.05). In addition, after adjustment for smoking status, working years, and workplace, we observed that the high urinary 1-OH-Pyr group (median: 11.63) had significantly higher values of KRAS-2 and BRAF-15 than the intermediate 1-OH-Pyr group (median: 4.60) and the low 1-OH-Pyr group (median: 2.42), respectively (both p < 0.001, Table 1).
Associations of urinary 1-OH-Pyr with damage index of exon of KRAS and damage index of exon of BRAF
As shown in Table 2, the medians of KRAS-2 in the first, second, and third tertiles of urinary 1-OH-Pyr were 2.62, 2.84, and 2.97, and those of BRAF-15 were 3.88, 4.10, and 4.59, respectively. Multiple linear regression models were used to estimate the associations of urinary 1-OH-Pyr with KRAS-2 and BRAF-15, and we observed that both KRAS-2 and BRAF-15 increased significantly and gradually in subjects in the middle and upper tertiles of urinary 1-OH-Pyr compared to subjects in the lower tertile, after adjustment for smoking status (model 1) and for smoking status, working years, and workplace (model 2) (all p trend < 0.001). In addition, the adjusted β coefficients (95% CI) for ln-transformed KRAS-2 per increment of ln-transformed urinary 1-OH-Pyr were 0.103 (0.066-0.140) and 0.101 (0.061-0.141), and those for ln-transformed BRAF-15 were 0.088 (0.059-0.117) and 0.087 (0.056-0.119), in adjusted models 1 and 2, respectively (all p < 0.001). Furthermore, multivariable-adjusted restricted cubic spline curve analyses showed the associations of urinary 1-OH-Pyr with KRAS-2 and BRAF-15 (p for non-linearity < 0.001 and p = 0.006, respectively), which confirmed positive non-linear relationships (Figure 1). In stratified analyses, after adjustment for working years and workplaces, we found that the values of KRAS-2 and BRAF-15 were positively associated with the concentrations of urinary 1-OH-Pyr only in smokers (both p < 0.001, Figure 2).
Effects of XPC SNPs on KRAS-2 and BRAF-15
The genotype distributions of the two polymorphisms, rs2228001 and rs3731055, were in Hardy-Weinberg equilibrium (both p > 0.05). The associations of KRAS-2 and BRAF-15 with XPC genotypes in the 288 coke oven workers are listed in Table 3. We found that the rs2228001 minor allele (G-allele) was strongly associated with increased KRAS-2 (p trend < 0.001); the medians of KRAS-2 in the TT, TG, and GG genotypes were 2.65, 2.89, and 2.99, respectively. Additionally, the rs3731055 GG homozygote genotype was associated with higher KRAS-2 (p trend = 0.006); the medians of KRAS-2 in the GG, GA, and AA genotypes were 2.90, 2.63, and 2.76, respectively. We failed to observe significant associations of BRAF-15 with the rs2228001 and rs3731055 genotypes. In addition, we further assessed the effect of XPC genotypes on exon damage by subgrouping the 288 participants into carriers of the rs2228001 G-allele in combination with the rs3731055 GG homozygote genotype (risk score = 2), carriers of either the rs2228001 G-allele or the rs3731055 GG homozygote genotype (risk score = 1), and carriers of the rs2228001 TT homozygote genotype in combination with the rs3731055 A-allele (reference). Compared to the reference, 88 participants (30.6%) showed a significantly increased KRAS-2 (β = 0.106; p = 0.013) and 123 participants (42.7%) showed a significantly increased KRAS-2 (β = 0.150; p < 0.001) (Figure 3A). However, we failed to observe a significant impact of XPC SNPs on BRAF-15 (Figure 3B). Smokers with high urinary 1-OH-Pyr showed significantly higher KRAS-2 than non-smokers with low urinary 1-OH-Pyr (Figure 4A).
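The genotype risk scoring described above (0, 1, or 2 risk genotypes per participant) can be sketched as follows; the genotype strings are a hypothetical encoding of the unphased calls.

```python
def xpc_risk_score(rs2228001, rs3731055):
    """Risk score used above: +1 if rs2228001 carries a G allele
    (GG or GT/TG), +1 if rs3731055 is the GG homozygote.

    Genotypes are two-letter strings, e.g. 'GT', 'TT', 'GG', 'GA'."""
    score = 0
    if "G" in rs2228001:        # G-allele carrier (GG, GT or TG)
        score += 1
    if rs3731055 == "GG":       # GG homozygote only
        score += 1
    return score
```

Participants scoring 0 form the reference group; scores 1 and 2 correspond to the two comparison groups in Figure 3A.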
Considering the small sample size and the strong effect of the rs2228001 G-allele on KRAS-2, we then subgrouped the 288 participants into rs2228001 TT and GG+GT genotype carriers and found that the participants carrying the rs2228001 GG+GT genotype with high urinary 1-OH-Pyr had the highest KRAS-2 among the four subgroups, showing a 0.213 (95% CI: 0.123-0.304) increase in KRAS-2 compared to the rs2228001 TT genotype carriers with low urinary 1-OH-Pyr as reference (median KRAS-2: 3.01 vs. 2.53) (Figure 4B). In addition, given the weak effect of the rs3731055 A-allele compared to the rs3731055 GG homozygote genotype on KRAS-2, as shown in Table 3 above, the 288 participants were further categorized into rs3731055 GA+AA and rs3731055 GG genotype carriers. Compared with the rs3731055 GA+AA genotype carriers with low 1-OH-Pyr, the rs3731055 GG genotype carriers with high urinary 1-OH-Pyr showed the highest level of KRAS-2 (median KRAS-2: 3.06 vs. 2.58), conferring a 0.213 increase in KRAS-2 (95% CI: 0.121-0.305) (Figure 4C). Furthermore, we explored the joint effects of the four aforementioned risk factors: smoking, carrying the rs2228001 GG+GT genotype, carrying the rs3731055 GG genotype, and high urinary 1-OH-Pyr. Compared to participants without any risk factors (as reference), 46 participants (16%) carrying all 4 risk factors showed significantly higher KRAS-2 (median KRAS-2: 3.02 vs. 2.51), with adjusted β coefficients (95% CI) of 0.254 (0.083-0.424) from regression models adjusted for working years and workplaces (Figure 4D).
Discussion
In this study, we found that PAH exposure was positively associated with both KRAS-2 and BRAF-15, following a non-linear dose-response pattern, among 288 coke oven workers. Associations of urinary 1-OH-Pyr with KRAS-2 were observed among both smokers and non-smokers, and the adjusted β coefficients were stronger among smokers than among non-smokers. Nevertheless, significant associations between 1-OH-Pyr and BRAF-15 were found only in smokers. The subsequent analyses indicated that XPC genetic polymorphisms, marked by the SNPs rs2228001 and rs3731055, were significantly associated with KRAS-2, and this association may be modulated by 1-OH-Pyr levels. More importantly, we revealed the joint effects of PAHs, smoking, and XPC genetic polymorphisms on increasing KRAS-2. PAH exposure is a crucial public health concern worldwide because of the genotoxic and carcinogenic properties of PAHs and their association with DNA damage and an increased risk of developing lung cancer. In this study, we used urinary 1-OH-Pyr as a suitable and sensitive biomarker of internal PAH exposure.
The oncogenic KRAS mutation is one of the key driver mutations in NSCLC (29), and approximately 97% of KRAS mutations in NSCLC involve codons 12 or 13 in exon 2 (30). Mutations in BRAF, observed in 2-4% of NSCLCs, mainly occur as a transversion of thymidine to adenosine at nucleotide T1799A in exon 15, with G469A and D594G mutations also existing in BRAF (31). The latest evidence has shown that DNA damage plays an important role in DNA mutational signatures (32). In this study, we observed a dose-response relationship between urinary 1-OH-Pyr and damage in exon 2 of KRAS and in exon 15 of BRAF. Previous evidence has also shown that PAHs in smoky coal emissions can induce genetic mutations in the KRAS gene and that mutation in the KRAS gene can reflect PAH exposure (19). Two investigations showed that KRAS mutations were associated with exposure to smoky coal; based on the mutation spectra in tumor genes, the gene mutations can be attributed to direct DNA damage from mutagenic exposures (19,33). Our result is consistent with a reported study in which point mutation in the KRAS gene reflected PAH exposure in mice (34).
Smoking is a major environmental risk factor contributing to DNA damage. In particular, smoking has been shown to be an independent factor for KRAS mutation in NSCLC (35). One of the characteristics of NSCLC in smokers is the DNA damage effect of tobacco carcinogens, including PAHs. It has been identified that most driver gene alterations in lung adenocarcinoma in never-smokers include EGFR and KRAS mutations, among others (36). In this study, we also explored the associations of urinary 1-OH-Pyr with damage in exon 2 of KRAS and in exon 15 of BRAF in smokers and non-smokers, and found significant positive associations of urinary 1-OH-Pyr with damage in exon 2 of KRAS in both smokers and non-smokers, with a stronger effect in smokers than in non-smokers, suggesting that tobacco smoking is a contributor to damage in exon 2 of KRAS. Additionally, we found a significant positive association between urinary 1-OH-Pyr and damage in exon 15 of BRAF in smokers but not in non-smokers. Unlike EGFR mutation, which is increased in never-smokers, KRAS mutation in NSCLC is oddly decreased among never-smokers (29) and is typically found in tumors from patients who smoke (often heavy smokers) (18). BRAF mutation, another lung cancer driver mutation, is also frequent in smoking patients (37). These previous lines of evidence, along with the results from this study, demonstrate that damage in exon 2 of KRAS and in exon 15 of BRAF could serve as novel biomarkers of DNA damage and possibly as mediators of carcinogenesis induced by PAH exposure and cigarette smoking.
XPC, an important protein in the NER pathway, plays a crucial role in repairing DNA damaged by environmental exposure to maintain genetic integrity (38,39). This study further investigated whether XPC rs2228001 and rs3731055 influence the susceptibility to damage in KRAS and BRAF exons induced by combined exposure to PAHs and smoking, and showed that individuals carrying the XPC rs2228001 G allele were at significantly increased risk of damage in exon 2 of KRAS, and that carriers of the rs3731055 GG homozygote genotype had higher damage in exon 2 of KRAS. Evidence shows that XPC polymorphisms are associated with differing capacity to repair DNA damage and further affect an individual's susceptibility to lung cancer (40). Similar research has shown that carriers of the XPC rs2228001 and rs3731055 variants differ in DNA damage levels among coke oven workers (24). These risk factors, including
cigarette smoking, high urinary 1-OH-Pyr, carrying the rs2228001 G allele and carrying the rs3731055 GG homozygote genotype, were considered simultaneously to explore their joint effect on damage in exon 2 of KRAS. We observed that only a small minority of participants (16.0%), those with all four risk factors, had a significant joint effect on damage in exon 2 of KRAS, indicating that XPC genetic effects on damage in exon 2 of KRAS are stronger in cigarette smokers with higher exposure to PAHs than in non-smokers with lower exposure to PAHs. This provides useful information on the role of co-exposure to PAHs and smoking in inducing damage in exons; XPC genetic polymorphisms may partly confer increased susceptibility to damage in the KRAS exon under combined exposure to PAHs and cigarette smoking, and strategies should be designed to protect the subpopulation with these risk factors.
This study has some major strengths. It has a population-based design with a high participation rate (>97%), and we detected urinary 1-OH-Pyr, a sensitive biomarker of individual PAH exposure levels, together with the levels of damage in exons of the KRAS and BRAF genes; in particular, the KRAS gene is viewed as a critical DNA target for environmental carcinogens. In addition, considering that the XPC gene plays an important role in the initiation of DNA repair, we further investigated whether XPC genetic polymorphisms regulate the effects of PAH exposure on exon damage in individuals with regular exposure to coke oven emissions rich in PAHs for at least 3 months. Our findings on the joint effects of PAH exposure with well-known risk factors such as cigarette smoking, modulated by genetic variation, on exon damage levels are in line with previous findings and could provide scientific evidence for developing protective interventions for susceptible populations. However, this study has a cross-sectional, exploratory design, so our results cannot establish a causal relationship between co-exposure to PAHs and smoking, XPC genetic polymorphisms, and damage in exons. Further functional studies are warranted to elucidate the underlying molecular mechanisms, and we plan to conduct further biochemical and functional studies to clarify the biological plausibility of these findings. Additionally, given the small sample size and the fact that this study was carried out only among an occupational population aged 19-35 years following strict inclusion and exclusion criteria, whether our findings can be extrapolated to the general population remains to be explored in further research with larger sample sizes.
Conclusion
The findings in this study indicate that individuals with XPC genetic variants (marked by the rs2228001 G allele and the rs3731055 GG homozygote genotype) may be predisposed to damage in exon 2 of KRAS induced by PAHs from occupational exposure and cigarette smoking. This lends further insight into the potential joint effects of genetic and environmental factors affecting lung carcinogenesis and makes it possible to provide evidence-based personalized prevention and intervention for the deleterious health effects caused by environmental exposure.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Guangzhou Medical University (KY01-2019-02-09). The patients/participants provided their written informed consent to participate in this study.
|
2022-08-06T13:16:40.776Z
|
2022-08-05T00:00:00.000
|
{
"year": 2022,
"sha1": "a1ac7eb56bd56cce93ee36941450fade188af0e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a1ac7eb56bd56cce93ee36941450fade188af0e4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
251657037
|
pes2o/s2orc
|
v3-fos-license
|
Differences in circulating obesity-related microRNAs in Austrian and Japanese men: A two-country cohort analysis
Background The prevalence of obesity is higher in Western countries than in East Asian countries. It remains unknown whether microRNAs (miRNAs) are involved in the pathogenesis of the ethnic difference in obesity. The purpose of this study was to determine whether expression levels of circulating obesity-associated miRNAs are different in Europeans and Asians. Methods The subjects were middle-aged healthy male Austrians (n = 20, mean age of 49.9 years) and Japanese (n = 20, mean age of 48.7 years). Total miRNAs in serum from each subject were analyzed using the 3D-Gene miRNA Oligo chip. miRNAs that showed significant differences between the Austrian and Japanese groups were uploaded into Ingenuity Pathway Analysis (IPA). Results Among 16 miRNAs that were revealed to be associated with obesity in previous studies and showed expression levels that were high enough for a reasonable comparison, serum levels of 3 miRNAs displayed significant differences between the Austrian and Japanese groups: miR-125b-1-3p was significantly lower with a fold change of −2.94 and miR-20a-5p and miR-486–5p were significantly higher with fold changes of 1.73 and 2.38, respectively, in Austrians than in Japanese. In IPA including all 392 miRNAs that showed significant differences between Austrians and Japanese, three canonical pathways including leptin signaling in obesity, adipogenesis pathway and white adipose tissue browning pathway were identified as enriched pathways. Conclusions miRNAs are thought to be involved in the ethnic difference in the prevalence of obesity, which may in part be caused by different expression levels of miR-125b-1-3p, miR-20a-5p and miR-486–5p.
Introduction
Obesity is a common worldwide health problem and causes an increased risk of various diseases including cardiovascular disease, noninsulin-dependent diabetes mellitus, obstructive pulmonary disease, arthritis and cancer [1]. The basic cause of obesity is imbalanced energy homeostasis, with more calories consumed than expended, and this imbalance is propelled by a variety of factors related to overeating, low energy expenditure and insufficient physical activity. There is a large heterogeneity in the prevalence of obesity, which is in part caused by individual factors including socio-economic and ethnic differences [2].
Heritability is well recognized as an important cause of obesity. The intrapair correlation coefficients of the values for body mass index (BMI) of identical twins reared apart were reported to be 0.70 for men and 0.66 for women [3]. The heritability of BMI and waist circumference was estimated to be 77% for both in a previous study on 5092 identical and non-identical twin pairs aged 8-11 years [4]. By genome-wide association studies (GWAS), a large number of gene variants were identified to be associated with the prevalence of obesity defined as high BMI; however, those gene variants account for only a small percentage of individual variation of obesity [5]. Genetic susceptibility to obesity involves epigenetic modifications including DNA methylation and histone modification, which are influenced by age and environmental factors such as diet and physical activity [6]. microRNAs (miRNAs), non-coding single-strand RNAs consisting of 20-25 bases, also play a role in epigenetic modifications of obesity-associated genes [7,8]. Circulating miRNAs are also promising biomarkers for detection of various diseases, and there has been an accumulation of information on circulating miRNAs that are different in individuals with and without obesity [9][10][11]. Although ethnic differences are related to the prevalence of obesity [12], it is not known whether and how miRNAs are involved in the ethnic difference in obesity.
The purpose of this study was therefore to investigate the ethnic difference in obesity-related miRNAs and to explore the possibility of miRNAs as a cause of the ethnic difference in the prevalence of obesity. As shown in Fig. 1, there is a great difference in the prevalences of obesity, defined as a BMI of 30 kg/m² or higher, between Western countries and East Asian countries [12]. The highest prevalence of obesity is in the U.S., which is a multiethnic country. In Western countries, the cutoffs for overweight and obesity based on BMI are 25 and 30 kg/m², respectively. When this cutoff is used, the prevalence of obesity (BMI of 30 kg/m² or higher) is only 2-4% in East Asian countries. From the viewpoint of prevention of type 2 diabetes and hypertension, lower (stricter) cutoffs such as 23 kg/m² for overweight and 25 kg/m² for obesity are recommended in East Asian countries [13]. According to WHO global estimates, 39% and 13% of adults aged 18 years and over were overweight and obese, respectively, worldwide in 2016 [14]. In the U.S., the prevalences of obesity (BMI of 30 kg/m² or higher) and severe obesity (BMI of 40 kg/m² or higher) from 2017 to March 2020 were 41.9% and 9.2%, respectively [15]. In this study, we compared circulating obesity-related miRNA levels in healthy Austrian and Japanese men as representatives of individuals in Western countries and East Asian countries. Prevalences of obesity among adults (both sexes) were 20.1% in Austria and 4.3% in Japan [12].
Participants
The participants in this study were healthy male Austrians (n = 20) and Japanese (n = 20). All of the Austrian participants were Caucasians, and all of the Japanese participants were originally from Japan. We tried to enroll representative healthy middle-aged Austrian and Japanese men in this study. Individuals receiving any medication and those with histories of known inflammatory, metabolic or cardiovascular disorders or malignancy were excluded from this study.
Smokers were also excluded. The Japanese participants were healthy male volunteers working in different districts of Japan, including the three cities of Nishinomiya, Sasayama and Yamagata. The Austrian participants were recruited at the Medical University of Graz, Graz, Austria. The protocol of this study was approved by the Hyogo College of Medicine Ethics Committee (No. 3036 in 2018) and the Medical University of Graz Ethics Committee (27-166 ex 14/15). Written informed consent was provided by all of the participants. All methods were performed in accordance with the relevant guidelines and regulations.
Blood sample collection
Blood was collected from each participant after overnight fasting, and serum was separated. Serum samples were kept frozen at −80 °C until analyses of miRNAs and measurements of leptin and adiponectin as described below.
Measurements of variables related to obesity
Height, body weight and waist circumference were measured, and BMI and waist-to-height ratio were calculated as weight in kilograms divided by the square of height in meters and waist circumference in cm divided by height in cm, respectively. Serum leptin and adiponectin concentrations were measured by enzyme immunoassays using commercial kits, Leptin (Sandwich EIA) Human (EIA2395R) from DRG and ADIPONECTIN, ELISA, HUMAN (BVL-RD195023100-1) from Bio-Vendor, respectively.
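As a quick illustration of how the two adiposity indices described above are computed (the values in the example are made-up, not study data):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """Waist circumference in cm divided by height in cm (unitless)."""
    return waist_cm / height_cm

# Example with made-up values (not study data):
print(round(bmi(78.0, 1.75), 1))                     # 25.5
print(round(waist_to_height_ratio(92.0, 175.0), 3))  # 0.526
```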
RNA extraction and miRNA expression profiling
RNA was extracted from a serum sample (300 μl) using the 3D-Gene RNA extraction reagent from a liquid sample kit (Toray Industries Inc., Kamakura, Japan) according to the manufacturer's instructions as described previously [16]. miRNA expression was analyzed using the 3D-Gene miRNA Oligo chip (TRT-XR520, Toray) and 3D-Gene miRNA labeling kit (TRT-XE211, Toray) as described previously [17]. Briefly, half volumes of labeled RNAs were hybridized onto a 3D-Gene miRNA Oligo chip (Toray), which was designed to detect sequences of multiple miRNAs. The annotations and oligonucleotide sequences of the probes conformed to those in miRBase release 21, an miRNA database (http://microrna.sanger.ac.uk/sequences/). After stringent washes, fluorescent signals were scanned with a 3D-Gene Scanner (Toray) and analyzed using 3D-Gene Extraction software (Toray). miRNA expression was normalized as follows: the raw data of each spot were corrected using the mean intensity of the background signal, determined from the signal intensities of all blank spots after removing the top and bottom 5% of values (outside the 95% confidence interval). Measurements of spots were considered valid when the signal intensities were greater than 2 standard deviations of the background signal intensity. The signal intensities of the valid spots were compared and the relative expression level of a given miRNA was calculated. Global normalization of the data was performed for each array, such that the median of the signal intensity was adjusted to 25. Intensity levels of 10 or higher were regarded as high enough for comparison between the two country groups. Each miRNA level was compared between the groups after log2 transformation. Fold change in the mean value of each miRNA intensity of Austrian versus Japanese subjects was calculated as the ratio of anti-log2 values of each mean of log2-transformed data.
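A rough sketch of the normalization and fold-change steps described above (function and variable names are mine, not Toray's software; the 5% trimming of blank spots, the >2 SD validity rule, and the rescaling of the median to 25 follow the text):

```python
import numpy as np

def normalize_spots(raw, blanks, target_median=25.0):
    """Hypothetical sketch of the array normalization described in the Methods:
    1. background = mean of blank-spot signals after trimming the top and
       bottom 5% of values;
    2. a spot is valid if its background-corrected signal exceeds
       2 standard deviations of the trimmed blank signals;
    3. valid signals are globally rescaled so that their median equals 25.
    Returns rescaled signals, with NaN for invalid spots."""
    raw = np.asarray(raw, dtype=float)
    blanks = np.sort(np.asarray(blanks, dtype=float))
    k = int(round(len(blanks) * 0.05))
    trimmed = blanks[k:len(blanks) - k] if k else blanks
    background, sd = trimmed.mean(), trimmed.std(ddof=1)
    corrected = raw - background
    valid = corrected > 2.0 * sd
    out = np.full_like(raw, np.nan)
    out[valid] = corrected[valid] * (target_median / np.median(corrected[valid]))
    return out

def fold_change(log2_group_a, log2_group_b):
    """Fold change as the ratio of anti-log2 values of the group means of
    log2-transformed intensities, as described in the Methods."""
    return 2.0 ** np.mean(log2_group_a) / 2.0 ** np.mean(log2_group_b)
```

For example, a group whose mean log2 intensity is 2 units higher than the other group's corresponds to a fold change of 4.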
Selection of obesity-associated miRNAs
A total of 2565 miRNAs in serum were measured using the miRNA Oligo chip. The intensities of 1744 miRNAs were too low for reasonable quantitative comparison. Consequently, 821 miRNAs were available for comparison between the Austrian and Japanese groups. Among these 821 miRNAs, 392 miRNAs showed significant differences between the Austrian and Japanese groups.
Bioinformatics pathway analysis
The resultant serum miRNAs that showed significant differences between the Austrian and Japanese groups were analyzed using Ingenuity Pathway Analysis (IPA) (QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis) [29]. Putative miRNA targets were found using the Ingenuity miRNA target filter for experimentally validated and putative predicted targets (high confidence level) through in silico analysis [30]. After linking the miRNAs with their mRNA target genes, mRNA target genes that were not significantly different between the countries were filtered out. The IPA miRNA-mRNA target link module derives information from TargetScan, a database containing miRNAs and their predicted target genes, along with prediction scores and experimental confirmation from the literature [31].
Statistical analysis
Continuous variables are summarized as means with standard deviations and were compared with the use of Welch's corrected t-test. For each of the tested miRNAs, on the basis of the observed distribution of p values, we estimated the positive false discovery rate (q value) according to the method of Storey et al. [32]. Associations among the miRNAs with significant q values were explored with the use of Pearson's correlation coefficient. The intensity of each miRNA was transformed with the base-2 (binary) logarithm for normal distribution. We performed univariable and multivariable linear regression analyses to examine the relationships among the three miRNAs (miR-125b-1-3p, -20a-5p and -486-5p) that were previously reported to be associated with obesity and were significantly different in Austrian and Japanese men in the present study. Pearson's correlation coefficient and the standardized partial regression coefficient (β) were calculated in the univariable and multivariable analyses, respectively. In the multivariable linear regression analysis of the relationship between any two of miR-125b-1-3p, -20a-5p and -486-5p, adjustment was performed by using the level of the remaining third miRNA as the other explanatory variable. Thus, only two miRNA variables were used as the explanatory variables, and no other variables were used for adjustment. All p values were two-sided, and p values less than a significance level of 0.05 were considered statistically significant. All q values were two-sided, with statistical significance determined by a false discovery rate of less than 0.05. Data were analyzed with the use of SPSS version 25.0 (IBM Corp., Armonk, NY, USA).

Table 2 shows adiposity-related variables of the Austrian and Japanese subjects.
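The two core computations in the statistical analysis above — Welch's t-test on log2-transformed intensities and q-value estimation — can be sketched as follows. The q-value function is a simplified single-λ Storey-style estimate (the cited method [32] smooths the π₀ estimate over a range of λ), so it is a rough stand-in, not a reimplementation:

```python
import numpy as np

def welch_t(a, b):
    """Welch's unequal-variance t statistic and degrees of freedom."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

def storey_q(pvals, lam=0.5):
    """Simplified Storey-style q-values with a single lambda and no pi0
    smoothing -- a rough stand-in for the cited method."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))  # estimated null fraction
    order = np.argsort(p)                           # indices from smallest p
    q = np.empty(m)
    running_min = 1.0
    for rank in range(m, 0, -1):                    # from largest p downward
        idx = order[rank - 1]
        running_min = min(running_min, pi0 * p[idx] * m / rank)
        q[idx] = running_min
    return q
```

With π₀ fixed at 1, `storey_q` reduces to the familiar Benjamini-Hochberg adjustment; Storey's estimate of π₀ makes the q-values less conservative when many tests are non-null.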
Height, weight and waist circumference in the Austrian group were significantly larger than those in the Japanese group. Adiposity indices, BMI and waist-to-height ratio, were significantly higher in the Austrian group than in the Japanese group. Leptin and adiponectin levels were not significantly different in the Austrian and Japanese groups. (Table 2 footnote: values are means with standard deviations of each variable; asterisks denote significant differences from Japanese: **, p < 0.01.)

Table 3 shows the results of a comparison of obesity-related miRNA levels in the Austrian and Japanese groups. There were 16 miRNAs that showed levels high enough for comparison and were reported to be associated with obesity. Among those miRNAs, the levels of miR-103a-3p, -15a-5p, -17-5p, -20a-5p, -320a, -423-5p, -486-5p and -758-5p were significantly higher in the Austrian group than in the Japanese group, while the levels of miR-125b-1-3p and -370-3p were significantly lower in the Austrian group than in the Japanese group. The levels of miR-197-3p, -221-3p, -223-3p, -23a-3p, -23b-3p and -486-3p were not significantly different between the two groups. Relatively high values (>1.5) of fold change were found for miR-125b-1-3p, -15a-5p and -486-5p. Also shown in Table 3 are reported changes (up or down) in miRNA levels of individuals with obesity compared with those without obesity: miR-103, -125b, -15a, -17-5p, -197-3p, -221-3p, -223, -23-3p, -320a, -423-5p and -758 levels were reported to be lower in individuals with obesity than in those without obesity [19,20,[22][23][24][26][27][28], while miR-20a, -370, -486-3p and -486-5p levels were reported to be higher in obese individuals than in non-obese individuals [22,28]. Therefore, miR-125b-1-3p, -20a-5p and -486-5p were miRNAs that were reported to be related to obesity and were significantly different in Austrian and Japanese men in the present study.

Fig. 2 shows scatter plots of the correlations between obesity-related miRNAs. There was a significant positive correlation between miR-20a-5p and miR-486-5p levels, and there was a significant inverse correlation between miR-125b-1-3p and miR-486-5p levels. No significant correlation was found between miR-125b-1-3p and miR-20a-5p levels. These relationships were not confounded by adjustment for each miRNA level other than the paired miRNAs in multivariable linear regression analysis (standardized partial regression coefficient (β): between miR-125b-1-3p and miR-20a-5p levels, 0.250 (p = 0.19); between miR-125b-1-3p and miR-486-5p levels, −0.669 (p < 0.01); between miR-20a-5p and miR-486-5p levels, 0.761 (p < 0.01)).
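The adjusted analysis above corresponds to a standardized partial regression coefficient, which can be reproduced generically by ordinary least squares on z-scored variables — a sketch of the statistical idea, not the exact SPSS procedure (the function name is mine):

```python
import numpy as np

def standardized_beta(y, x, covariates=()):
    """Standardized partial regression coefficient (beta) of x on y,
    adjusting for covariates: OLS on z-scored variables."""
    z = lambda v: (np.asarray(v, dtype=float) - np.mean(v)) / np.std(v, ddof=1)
    cols = [np.ones(len(y)), z(x)] + [z(c) for c in covariates]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return coef[1]  # coefficient on x

# e.g., beta between miR-20a-5p and miR-486-5p adjusted for miR-125b-1-3p
# would be standardized_beta(mir486, mir20a, covariates=[mir125b]),
# with all three as log2-transformed intensities.
```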
Obesity-associated pathways predicted to involve the miRNAs found to be significantly different in Austrian and Japanese men
IPA was performed using all of the miRNAs (n = 392) that showed significant differences in microarray expression analysis. Using a p-value filter for pathway significance of p < 0.05, 366 canonical pathways were identified as enriched pathways in the dataset of the miRNAs showing significant differences in the Austrian and Japanese groups. These canonical pathways include leptin signaling in obesity, the adipogenesis pathway and the white adipose tissue browning pathway. (Table 3 footnotes: intensities are shown for Austrians (upper) and Japanese (lower); ## fold change in each miRNA intensity of Austrians versus Japanese; ### reported change (up or down) in circulating miRNA levels of individuals with obesity compared with those without obesity; asterisks denote significant differences from Japanese: *, q < 0.05; **, q < 0.01.)
Discussion
The expression levels of 16 circulating miRNAs, which were reported as increased or decreased in obese individuals, were compared in the Austrian and Japanese subjects: miR-125b-1-3p was lower and miR-20a-5p and miR-486-5p were higher in the Austrian group than in the Japanese group and were thus suggested to be associated with the ethnic difference in the prevalence of obesity. This study is, to the best of our knowledge, the first study showing an ethnic difference in obesity-related miRNAs. On the other hand, among the 16 miRNAs tested, miR-221-3p, -223-3p and -486-3p levels were not different in the two groups and miR-103a-3p and -423-5p levels were significantly higher in the Austrian group than in the Japanese group, and this direction of changes in miRNAs was opposite to that shown in previous studies on obesity-associated miRNAs [19,20,22,27].
By using IPA, we further explored the canonical pathways that were targeted by all 392 miRNAs displaying significant differences in the Austrian and Japanese subjects. As a result, three obesity-related pathways, including leptin signaling in obesity, the adipogenesis pathway, and the white adipose tissue browning pathway, were found to be associated with the above 392 miRNAs. These miRNAs were suspected to target molecules including STAT3, PI3K, AKT and FOXO1 in leptin signaling in obesity, PPARγ and FOXO1 in the adipogenesis pathway, and p38 MAPK, PPARγ and PPARα in the white adipose tissue browning pathway (Figs. 3-5). In leptin signaling, leptin produced in adipocytes stimulates its receptors in POMC neurons and NPY/AGRP neurons in the hypothalamus and regulates syntheses of α-melanocyte-stimulating hormone (α-MSH), agouti-related peptide (AGRP) and neuropeptide Y (NPY) through modulating phosphorylation of STAT3 and FOXO1 (Fig. 3). In the adipogenesis pathway, mesenchymal stem cells (MSCs) differentiate into mature adipocytes through various signals including PPARγ as a major regulator (Fig. 4). In the white adipose tissue browning pathway, physical activity induces transdifferentiation of white adipose tissue to beige adipose tissue through various signals including p38 MAPK in the cytosol and Creb, PPARγ, PPARα and VEGFA in the nucleus (Fig. 5). Further studies using comprehensive analysis of mRNAs are needed to determine the targeted molecules that explain the ethnic difference in obesity.
In this study, three circulating miRNAs, miR-125b-1-3p, -20a-5p and -486-5p, which are included in the reported biomarkers for obesity, were associated with the ethnic difference in obesity of Austrian and Japanese men. miR-125b has been shown to target the 3′-UTR of the mRNA of phosphoinositide 3-kinase catalytic subunit δ (PI3KCD), the expression level of which was reported to be decreased in hepatocytes in obese mice [33]. Therefore, PI3KCD is an obesity-related target of miR-125b. PI3K was also suggested by the results of IPA to be a target molecule in leptin signaling in obesity (Fig. 3). Thus, at the tissue level, miR-125b was associated positively with obesity, while circulating miR-125b was associated negatively with obesity in a previous study [20] and in the present study. The reason for this discrepancy in the results for miRNAs in experiments using cells and blood remains to be clarified in the future. One possible reason for the dissociation between blood levels and cellular levels of miRNAs is the influence on circulating miRNA levels of miRNAs derived from blood cells and vascular cells. In fact, there were strong associations among serum levels of some erythrocyte-derived miRNAs [34], suggesting that some erythrocyte-derived miRNAs considerably influence their blood levels. On the other hand, circulating miR-20a-5p and miR-486-5p levels were higher in the Austrian group than in the Japanese group. The expression of miR-20a-5p was reported to be induced during adipocyte differentiation from preadipocytes and to be increased in white adipose tissue of obese mice [35]. miR-20a-5p was thought to positively regulate adipocyte differentiation through repressing mRNA of the transducer of ERBB2 (TOB2), although TOB2 was not included in the three pathways related to the ethnic difference in circulating miRNAs in the present study.
miR-486 was reported to accelerate preadipocyte proliferation [28], and the miR-486-5p level in blood was positively associated with the ethnic difference in obesity in the present study. However, the target gene(s) of miR-486-5p in relation to obesity remain to be determined in future studies. Thus, both miR-20a-5p and miR-486-5p were positively associated with obesity at both the tissue level and the blood level and were significantly different in Austrian and Japanese men in the present study. Therefore, blood levels of miR-20a-5p and miR-486-5p might be useful for detection of a high future risk of obesity in young men in Western countries, although further prospective studies are needed to prove this hypothesis.
Although weight and BMI were larger and higher, respectively, in the Austrian group than in the Japanese group, serum leptin and adiponectin levels were comparable in the two groups. In this study, there were no subjects with obesity (BMI of 30 kg/m² or higher) in either the Austrian group or the Japanese group. In addition, the percentages of subjects in the upper category of overweight (BMI of 27.5 kg/m² or higher) were 15% in the Austrian group and 10% in the Japanese group. Thus, one possible reason for the above discrepancy is that most of the Austrian and Japanese subjects in this study did not have high BMI. Interestingly, Kuo and Halpern reported that there was no association between BMI and blood adiponectin levels in healthy adults. They speculated that obesity-related changes in adiponectin levels in previous studies were a consequence of obesity-related metabolic disorders [36].

Fig. 4. Overlap of identified miRNA targets with the adipogenesis pathway. Highlighted are the molecules that showed significant associations in Ingenuity Pathway Analysis with the 392 miRNAs found to be significantly different between Austrian and Japanese subjects in this study.
There are limitations of this study. Serum levels of 2565 miRNAs were measured by microarray analysis without amplification; however, only about one third of the total miRNAs showed intensity levels that were high enough for comparison and thus could be analyzed. Therefore, further studies using measurements with PCR amplification are needed to compare the remaining approximately two thirds of miRNAs in different ethnicities. The subjects of this study were all men, and it is thus necessary to test the ethnic difference in circulating miRNAs in women. Only miRNA expression was evaluated in this study, and simultaneous evaluation of mRNA expression is needed in future studies to clarify the molecules that are targeted by miRNAs showing ethnic difference. As candidates of miRNAs explaining the ethnic difference in obesity, we included miRNAs that were shown to differ in obese and non-obese subjects in children as well as adults (Table 1). However, the three miRNAs (miR-125b-1-3p, -20a-5p and -486-5p) that were suggested to explain the ethnic difference in this study were identified in previous studies using data for adults [20,22,28]. There were no significant correlations between BMI and levels of miR-125b-1-3p, -20a-5p and -486-5p (data not shown). The reason for this negative finding may be the limited sample size (n = 40) of our analysis. In addition, subjects with obesity (BMI of 30 kg/m² or higher) were not included in either the Austrian group or the Japanese group, although the mean BMI was significantly higher in the Austrian group than in the Japanese group (25.7 vs. 23.3 kg/m²).

Fig. 5. Overlap of identified miRNA targets with the white adipose tissue browning pathway. Highlighted are the molecules that showed significant associations in Ingenuity Pathway Analysis with the 392 miRNAs found to be significantly different between Austrian and Japanese subjects in this study.
Therefore, future studies using data from obese Austrian and Japanese participants are needed to clarify ethnic differences in miRNA expression in obese individuals.
In summary, differences in circulating miRNA expression levels were comprehensively analyzed and compared in Austrian and Japanese men. Circulating miR-125b-1-3p, miR-20a-5p and miR-486-5p levels were significantly different between the Austrian and Japanese subjects. By IPA, we provide evidence for an impact of the ethnic differences in the expression of 392 miRNAs on three obesity-related canonical pathways: leptin signaling in obesity, the adipogenesis pathway, and the white adipose tissue browning pathway. Thus, miRNAs are thought to partly explain the difference in obesity prevalence between East Asian and Western countries. Future studies are needed to determine the molecules that are regulated by the miRNAs causing the ethnic difference in obesity and the significance of their blood levels in relation to the pathogenesis of obesity.
Funding
This study was supported by a Grant-in-Aid for Scientific Research (No. 17H02184) from the Japan Society for the Promotion of Science (to IW).
Declaration of competing interest
The authors declare no competing interests.
Antecedents of circular manufacturing and its effect on environmental and financial performance: A practice-based view
Despite the worldwide recognition of the Circular Economy (CE) philosophy, its comprehensive adoption in manufacturing is not well understood in literature and practice. This study theorizes circular manufacturing (CM) by extending the cleaner production concept according to the design thinking of CE. Drawing on the practice-based view, it develops a conceptual model on the antecedents and performance outcomes of CM and the moderating role of Industry 4.0 (I4.0) production technologies on CM-to-environmental and financial performance relationships. The research adopts a mixed-methods approach to examine the hypothesized relationships. Survey data from 255 Chinese manufacturers are analyzed using structural equation modeling and hierarchical regression. Two qualitative case studies verify the survey findings and offer additional insights. The findings suggest that by strengthening a CE culture and integrated management systems, firms can improve CM implementation and consequently environmental and financial performance. However, investing in I4.0 production technologies may not enhance the impact. Our research contributes to the literature by conceptualizing and operationalizing CM as a new construct. It also provides guidelines for implementing CE in manufacturing.
Introduction
Manufacturers nowadays face immense pressures to operate in an environmentally friendly and socially responsible manner (Farooque et al., 2022; Treacy et al., 2019). There is a growing consensus that the currently dominant "take-make-dispose" linear economic model is unsustainable, and the world should accelerate the transition to a circular economy (CE) (Ellen MacArthur Foundation [EMF] 2022). The CE concept enables firms to rationalize their resource consumption while balancing their environmental, economic, and social performance outcomes (Ghisellini et al., 2016; Rodríguez-Espíndola et al., 2022). It is based on three principles, all driven by design, which are to (i) eliminate waste and pollution, (ii) circulate products and materials (at their highest value), and (iii) regenerate nature (EMF, 2022). To implement CE, manufacturers need to embrace the design thinking of CE, which is embedded in these CE principles, into their manufacturing systems (Acerbi and Taisch, 2020; Ghisellini et al., 2016). However, the adoption of CE principles in manufacturing systems has remained an under-investigated research area.
Recently, Antonioli et al. (2022) and Garza-Reyes et al. (2019) called for research in circular manufacturing (CM), the integration of CE principles in manufacturing systems. Specifically, they highlight a knowledge gap in the operationalization of CE principles and practices in the context of manufacturing systems. Although the term 'circular manufacturing' has been increasingly used in both practice and research, it remains unclear what constitutes CM and how to operationalize CM in practice and quantitative empirical research. The extant literature reports important antecedents to sustainability practices, for example, an organization's sustainability culture (Pagell and Wu, 2009) and prior experience of implementing integrated management systems (IMS) (Dey et al., 2020; González-Benito and González-Benito, 2005). Similarly, we can posit that CE culture and IMS drive CM adoption. However, this theoretical proposition has not been empirically tested.
On the other hand, there is an abundance of research suggesting sustainability practices are positively related to firm performance (Govindan et al., 2020). However, there are also studies suggesting that CE implementation may be economically challenging (Genovese et al., 2017; Nasir et al., 2017). Therefore, it is essential to examine the impact of CM on firm performance. Furthermore, in recent years, Industry 4.0 (I4.0) technologies have been seen to improve operational and sustainability performance (Rosa et al., 2020; Zheng et al., 2021). The EMF (2022) also recognizes technologies as a key enabler of circularity to eliminate waste and pollution. Note that I4.0 encompasses a diverse range of digital technologies, and those specifically used in production/manufacturing do not necessarily have the same causal properties as others. However, to date, the role of I4.0 production technologies, a subset of I4.0 technologies, in CM adoption remains largely unexplored. Given these knowledge gaps, this research sets the following objectives.
• To theorize what constitutes CM and how to operationalize it
• To empirically verify the antecedent roles of CE culture and IMS in CM adoption
• To empirically investigate the effect of CM adoption on firm environmental and financial performance
• To understand the moderating role of I4.0 production technologies in the CM-to-firm environmental and financial performance relationships
To achieve these research objectives, we employ a sequential mixed-methods approach through the theoretical lens of the practice-based view (PBV) (Bromiley and Rau, 2014, 2016). The quantitative phase obtained survey data from 255 Chinese manufacturers across different industrial sectors. The qualitative phase studied two real-life cases of representative Chinese manufacturers.
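The hierarchical moderated regression used in the quantitative phase is, in essence, a two-step OLS: main effects first, then an interaction term. A minimal sketch on synthetic data (not the study's dataset; the variable names and effect sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 255  # matches the survey sample size
cm = rng.standard_normal(n)    # circular manufacturing (standardized score)
i40 = rng.standard_normal(n)   # I4.0 production technologies
# Synthetic performance with main effects but NO true interaction:
perf = 0.5 * cm + 0.2 * i40 + 0.5 * rng.standard_normal(n)

def ols(y, *predictors):
    """Ordinary least squares; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

step1 = ols(perf, cm, i40)            # Step 1: main effects only
step2 = ols(perf, cm, i40, cm * i40)  # Step 2: add the moderation term
# A near-zero coefficient on cm * i40 in step 2 indicates no moderation,
# mirroring the paper's finding for I4.0 production technologies.
```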
This study makes several original contributions. First, unlike earlier studies that rely on theories from other academic disciplines, we apply the PBV, a theory rooted in the operations management discipline, in the CE context. Second, drawing upon the PBV and a review of extant literature, we establish CM as a strategy that integrates the design thinking of CE in manufacturing systems. This pioneering work also operationalizes the measures of CM. Third, we confirm the antecedent role of CE culture and IMS in CM adoption through rigorous survey data analysis and case studies. Fourth, we establish the positive impact of CM adoption on firm environmental and financial performance. Last, we find that I4.0 production technologies do not moderate the relationship between CM and environmental and financial performance, although they have a direct and positive impact on financial performance.
The paper is organized as follows. Section 2 reviews the relevant literature. Theoretical background and hypothesis development are presented in Section 3. We present our mixed-methods research approach in Section 4. Survey and case study results are presented in Sections 5 and 6, respectively. Section 7 discusses the results and findings besides highlighting the theoretical contributions and practical implications. Section 8 concludes the study.
The CM concept
Cleaner production (CP) (Ghisellini et al., 2016), sustainable manufacturing/production (Golicic and Smith, 2013; Linton et al., 2007) and green manufacturing (M. Lo, 2014) are the main concepts related to achieving sustainability in manufacturing. All these concepts had been in practice before CE became widely known. In particular, CP has been widely promoted for over two decades and has received much research attention (Farooque et al., 2022). The United Nations Environment Programme [UNEP] defines CP as "the continuous application of an integrated preventative environmental strategy to processes, products and services to increase efficiency and reduce risks to humans and the environment" (p. 3). CP focuses on material/energy conservation and efficiency, elimination of toxic raw materials and toxic emissions, and reduction of overall environmental impacts in the production processes (Ghisellini et al., 2016). In operationalizing CE in a firm's manufacturing systems, CP is often viewed as a preparatory strategy (Ghisellini et al., 2016).
For manufacturers to adopt CE principles in their manufacturing systems, it is essential to design products intentionally for the circularity of materials without creating wastage across the entire product lifecycle (den Hollander et al., 2017). This confers importance to the emerging concept of circular product design (CPD) (Bocken et al., 2016; Burke et al., 2021). CPD applies CE design principles to enable the product design function to think beyond its functional focus and consider supply chain processes to realize the circulation of resources embedded in products. For example, products should be designed for convenient disassembly to facilitate efficient value recovery at the end of their useful life (Farooque et al., 2019).
The concept of CPD advances older design approaches such as eco-design and design for sustainability (DfS) (Wang et al., 2022). Eco-design considers the environmental aspects of product design while aiming to reduce negative environmental impacts throughout the lifecycle (Brezet, 1997). DfS moves beyond the environmental aspects to consider the social, economic, and ethical dimensions of product design (Spangenberg et al., 2010). However, CPD significantly differs from these design approaches by emphasizing resource circularity and end-of-life options (Burke et al., 2021; Farooque et al., 2019). CPD follows a cradle-to-cradle approach as opposed to the cradle-to-grave approach of the traditional design concepts. The cradle-to-cradle approach embraces circular thinking to achieve an indefinite circulation of resources.
The main aim of the CPD strategy is to slow and close resource loops (Bocken et al., 2016). Resource loops can be slowed by design strategies focusing on durability and product life cycle extension (Burke et al., 2021). In terms of closing resource loops, CPD implies designing a restorative cycle for technical materials and a regenerative cycle for biological materials (Zhang et al., 2021), supported by simplified disassembly and reassembly requirements (Burke et al., 2021; Farooque et al., 2022). In a nutshell, a CM strategy should build on CP but, at the same time, CPD plays a crucial role in implementing CM to achieve CE goals (Asif et al., 2021). Therefore, it seems logical to posit that CM should integrate CPD and CP in operationalizing CE principles in a firm's manufacturing system.
Our conceptualization of CM is consistent with the working definition of CM provided by Acerbi and Taisch (2020), who performed a systematic literature review of 215 research articles to develop a theoretical framework of CE strategies in the manufacturing sector. They specifically advocated the concurrent adoption of strategies such as CPD and CP, among others, to "reduce resources consumption, to extend resources lifecycles and to close the resources loops" (Acerbi and Taisch, 2020, p. 12).
Research on CM
The importance of CM has been increasingly emphasized in the recent literature; however, scholarly knowledge on CM is still in its infancy. Only a handful of research studies have demonstrated an explicit focus on the interaction of CE principles with manufacturing operations at the firm level. Table 1 provides a summary of the most relevant literature on CM.
Three research themes are observed in the publications summarized in Table 1. The first theme is related to CM implementation. In this theme, Prosman and Cagliano (2022) deal with how to configure CM systems in the context of circular business models. Chari et al. (2022) study the implementation of CM supply chains from a dynamic capabilities perspective. Roci et al. (2022) suggest that CM system implementation requires a lifecycle approach for measuring performance in cost, revenue, and environmental impacts. The second theme addresses the relationship between lean management and CM. For example,
Table 1
Summary of the CM literature.
Three successful CM configurations were identified which provide insight into the main elements of manufacturing configurations in circular business models. CM configurations are further aligned with typical supply characteristics in a CE to provide insights into when (not) to apply a given manufacturing configuration.

Afum et al. (2022) examine the interaction between lean management and circular production systems and their implications for zero-waste performance, green value competitiveness and social reputation in manufacturing small and medium enterprises across multiple industries. Their results suggest that lean management plays a vital role in the implementation of circular production systems. Lean management and circular production systems, when combined, have a significant effect on zero-waste performance, green value competitiveness and social reputation; the mediating role of circular production systems between lean management and these outcomes is also confirmed.

A further study finds that the quality of recycled material has a significant impact on the decision-making of the 3D printing platform and of material suppliers (i.e., conventional and recycled material suppliers). A 3D printing platform that sells both a virgin material product and a recycled material product (RMP) prefers printing high-quality RMP as its profit increases. However, both material suppliers avoid printing high-quality RMP because their optimal prices decrease with the quality of the RMP.
Schmitt et al. (2021) and Afum et al. (2022) suggest that lean manufacturing plays an important role in the implementation of CM. Furthermore, lean management and CM, when combined, have the potential to bring about enhanced performance outcomes across the three dimensions of the triple bottom line (TBL). The role of I4.0 technologies in the implementation of CM appears to be the most popular research theme. I4.0 technologies are seen as a key enabler of CM supply chains (Chari et al., 2022). Specifically, IoT (Acerbi and Taisch, 2020; Delpla et al., 2022), blockchain technology (Govindan, 2022) and 3D printing (Sun et al., 2020) have been identified as supportive technologies facilitating the implementation of CM. Except for Afum et al. (2022), all the empirical studies in Table 1 are based on a developed country context. It is surprising that no study has been conducted in China on the emerging CM topic, although the country has enforced CP and CE related legislation in its manufacturing sectors for about two decades (Geng et al., 2009). Furthermore, Chari et al. (2022) and Prosman and Cagliano (2022) believe that a firm's culture that is supportive of CE plays an important role in enabling the circular transition. Firms having a CE culture upskill their employees to bring about the changes in the manufacturing process required for CM implementation (Chari et al., 2022; Govindan, 2022). However, no study has focused on CE culture or provided empirical evidence of its antecedent role in CM implementation.
In summary, research on CM is nascent. A few studies suggest that I4.0 technologies enable the implementation of CM, but further and more rigorous empirical validation is required. Research on the antecedent role of company culture and lean management systems in CM implementation has just started to emerge. There is ample room to expand the research scope to cover the role of CE culture and other management systems such as International Organization for Standardization (ISO) quality management systems and the total quality management (TQM) system. Moreover, the performance implications of CM remain underrepresented and largely unexplored. Although CM has the potential to enhance firms' sustainability performance, there is a dearth of empirical evidence to support such claims, especially in the context of China. This study narrows these research gaps in the CM literature.
Practice-based view
The PBV seeks to explain the improvement in firm performance due to the adoption of a range of practices (i.e., an activity or a set of activities) that a variety of firms can execute and which are imitable, publicly available, and amenable to transfer across firms (Bromiley and Rau, 2014; Carter et al., 2017). This research adopts the PBV as a theoretical lens for two main reasons. First, sustainability concepts and practices, including those related to CE, have been extensively studied in the literature and are widely available to the public. For example, circular supply chain management (Batista et al., 2018; Farooque et al., 2019; Zhang et al., 2021) and circular product design (Burke et al., 2021) have been proven valuable. Second, China, the research context, has implemented a stringent CE policy framework for about two decades. Consequently, Chinese manufacturers have adopted a variety of CE practices due to regulatory pressure. Therefore, the CM practices concerned in this study conform to the criteria set by the PBV, namely imitability, availability in the public domain, and transferability across firms. Moreover, we intend to measure firm performance as the dependent variable, which is also in line with the PBV.
We believe the PBV is more suitable for this study than the popular resource-based view (RBV) (Barney, 1991). Bromiley and Rau (2016) argued that the RBV is not aligned with the activities and objectives of operations management studies in several ways. In the RBV, the dependent variable is sustained competitive advantage, thus only a small number of long-term industry-leading firms are suitable to be investigated through the RBV (Treacy et al., 2019). Competitive advantage exists at the business or firm level, and this does not directly translate into the operations level. Measuring sustained competitive advantage is also difficult. The RBV deals with resources that are valuable, rare, and difficult to imitate. Such resources are also difficult to measure due to their uniqueness. Based on these issues, Bromiley and Rau (2016) suggested that the PBV would make a better theoretical lens for operations management studies. The core proposition of the PBV is that firms' performance variations can be explained by the heterogeneity in their implementation of operational practices. Such heterogeneity is often inevitable across firms because of bounded rationality and varying constraints in capacity and time for implementing operational practices.
CE culture and CM
The transition towards CE is a paradigm shift which requires a continuous state of adjustment; reviewing actions and operations; redesigning procedures and structures; and reinventing mindsets (Kjaer et al., 2019). Individual concerns about and organizational values relating to CE can guide firms toward shared values and beliefs that prioritize sustainability and circularity in their business models (Bansal, 2003; Henry et al., 2020), balancing economic efficiency, environmental responsibility, and social equity in their decision-making (Marshall et al., 2015; Pagell and Wu, 2009). Thus, CE culture is expected to play a significant role in driving a fundamental reorientation of business towards CE.
The extant literature is relatively silent on the antecedent role of CE culture in a firm's CM adoption. However, previous studies suggest that a sustainability culture supports the implementation of sustainability practices in production processes (Chari et al., 2022; Marshall et al., 2015; Pagell and Wu, 2009). This is because a sustainability culture provides an atmosphere where firms consider all three dimensions of sustainability in every decision they make (Marshall et al., 2015), not only in manufacturing but also in developing sustainable new products (Pagell and Wu, 2009). On the contrary, an absence of a sustainability culture creates barriers to an organization's commitment to CE (Prosman and Cagliano, 2022; Wang et al., 2022). Thus, we hypothesize.
H1. CE culture has a positive impact on CM implementation.
IMS and CM
According to Porter's (1996) seminal work on strategy, firms are always on the quest for productivity, quality, and speed. As a result, a remarkable number of management systems have been developed and implemented by firms. They include TQM, lean/just-in-time (JIT), and ISO systems for quality management (e.g., ISO 9001) and environmental management (e.g., ISO 14001) (Villena et al., 2021). They are collectively referred to as IMS in this study.
No study has investigated the role of IMS in CM adoption. However, IMS seem to support CE's aspiration to maximize resource efficiency. Specifically, TQM continuously improves product and process quality to meet or exceed customer expectations (Cua et al., 2001). Lean/JIT reduces or eliminates non-value-adding activities to improve speed, cost efficiency and customer value, thereby contributing to sustainability performance (Piercy and Rich, 2015; Yu et al., 2020). In particular, lean management and CM are seen to have common goals such as waste elimination (Afum et al., 2022). Since lean manufacturing systems have remained dominant in the linear paradigm, CM systems will have to build on existing lean manufacturing systems (Schmitt et al., 2021). As the world's most widely adopted management system standards (Marimon Viadiu et al., 2006), the ISO 9000 and ISO 14000 series strengthen environmental management systems, although the effect of the latter is more direct (Bernardo et al., 2012; González-Benito and González-Benito, 2005; Zhu and Sarkis, 2004). Given that IMS support the achievement of CE goals, we hypothesize the following.
H2. IMS have a positive impact on CM implementation.
CM and firm performance
The effect of CM on firm performance is likely to be on all dimensions of the TBL due to the nature of the CE concept (Ghisellini et al., 2016). This research, however, only deals with long-term financial and environmental performance. It does not consider the social dimension in order to allow for a more focused and in-depth investigation.
A meta-analysis of the literature by Govindan et al. (2020) provides strong evidence that, generally speaking, sustainability practices have a positive impact on financial performance. CE adoption often requires major initial investments in new equipment and modifications in processes (Geng et al., 2009), so it may be economically challenging, as suggested by the case studies of Genovese et al. (2017) and Nasir et al. (2017). In contrast, the survey studies by Zhu et al. (2010, 2011) suggest that CE practices have a positive association with economic performance among Chinese manufacturers. In China, the government has mandated firms to implement CE over the last two decades. So, it is reasonable to assume that most firms are experiencing the long-term financial impact of CM. Given our focus on long-term financial performance, we hypothesize.
H3. CM implementation has a positive impact on financial performance.

The positive impact of sustainable manufacturing practices on environmental performance is well-established in the literature (Golicic and Smith, 2013; Linton et al., 2007). CM has the potential to further enhance the environmental impact along the product lifecycle. First, by virtue of CPD, it facilitates slowing and closing material loops by means of a systemic supply chain-wide circulation of resources (Burke et al., 2021). Second, by virtue of CP, it improves material/energy conservation and efficiency, preventing the use of non-renewable, toxic raw materials and toxic emissions (UNEP, 2006). Integrating CPD and CP, CM enables firms to control, monitor and prevent pollution, wastage, and emissions, resulting in less environmental damage. Therefore, we hypothesize the following.
H4. CM implementation has a positive impact on environmental performance.
Moderating role of I4.0 production technologies
The I4.0 concept is also known as smart manufacturing, characterized by interconnected machines and intelligent products and systems (Tortorella and Fettermann, 2018). I4.0 technologies include the Internet of Things (IoT), Big Data analytics, cloud computing, blockchain, and artificial intelligence, among others (Yadav et al., 2020; Zheng et al., 2021). Using these technologies, firms can achieve integration of manufacturing processes (both vertical and horizontal) and product connectivity, which can lead to better product and operational performance (Dalenogare et al., 2018). For example, Intel, one of the world's largest semiconductor manufacturers, revamped its production process using big data analytics capability while reporting significant performance improvements (Mikalef et al., 2019). It is also believed that I4.0 technologies can enable firms to achieve higher levels of sustainability performance (Luthra et al., 2020; Yadav et al., 2020; Zheng et al., 2021).
I4.0 technologies are found to be a main enabler in the context of CE implementation (Rosa et al., 2020). These technologies can facilitate the circularity of resources within supply chains (Lopes de Sousa Jabbour et al., 2018). In the context of CM, digital technologies such as IoT, blockchain, and 3D printing facilitate the implementation of a smart CM system (Acerbi and Taisch, 2020; Delpla et al., 2022; Roci et al., 2022; Sun et al., 2020). Given the enabling role of I4.0 technologies in a CE transition, we hypothesize that I4.0 production technologies can enhance CM's financial and environmental performance outcomes. Hence, we posit the following hypotheses.

H5a. I4.0 production technologies positively moderate the relationship between CM and financial performance.

H5b. I4.0 production technologies positively moderate the relationship between CM and environmental performance.
Fig. 1 summarizes the hypothesized relationships between the study constructs. Firm size, ownership type and industry are the three control variables.
Research methodology
This research adopts a sequential mixed-methods approach (Li et al., 2020), including a survey of 255 Chinese manufacturers and two explanatory case studies. The survey involved a wide range of manufacturers to ensure the generalizability of findings, while the case studies further validated the survey results, provided a more in-depth understanding to interpret the survey findings, and offered additional insights. We explain our survey research design in Section 4.1, followed by the case study research design in Section 4.2.
Questionnaire development
We reviewed the related literature extensively to develop construct measures. Appendix A provides the sources from which the measures were adapted. In this research, we modeled CM as a second-order construct with CPD and CP being the first-order constructs, as discussed in Section 2.1. We adopted construct measures from the English literature and developed a questionnaire in Chinese. Two researchers who are fluent in both English and Chinese followed a back-translation technique to ensure that the measures in the Chinese questionnaire were conceptually equivalent to the original ones developed in English (Paulraj et al., 2017). We ran two rounds of pilot tests in face-to-face meetings. Each round involved seven senior managers from large-scale manufacturers in China. They provided feedback on questionnaire design related to the wording of measures and suggestions for adding/removing certain measures. We incorporated their suggestions in the final questionnaire. This process improved the questionnaire by ensuring content validity and lowering the chance of misinterpretation by survey respondents.
The questionnaire included two parts. Part I covered control variables and dependent variables (i.e., firm performance). The respondents rated firm performance in comparison with their firm's main competitor in the industry. The measures were rated on a seven-point Likert-type scale (1 = significantly lower; 7 = significantly higher), which is considered better for managing social desirability bias (Stöber et al., 2002). Part II included questions on CE culture, IMS, CM, and I4.0 production technologies. The respondents evaluated the situation in their respective organizations in the last year. We used a five-point Likert scale anchored at 'strongly disagree' and 'strongly agree' for CE culture, and at 'not at all' and 'to full extent' for IMS, CM and I4.0 production technologies. We intentionally asked the respondents to rate firm performance in the current year but CM implementation in the last year. Incorporating a time lag between CM implementation and its performance effects served to reduce possible bias (Dobrzykowski et al., 2016).
Survey administration
Survey data for this research were collected in 2019. We distributed questionnaires through multiple channels including professional associations, postgraduate and MBA/EMBA students, and local government officials. A total of 930 survey questionnaires were distributed to manufacturers across all six greater administrative areas of China, yielding a final sample of 255 valid responses as detailed in Table 2.
We attempted a split survey method (Dubey et al., 2015; Podsakoff et al., 2003) to ask different respondents within the same organization to complete questions related to independent variables (in Part II) and dependent variables (in Part I), respectively. Part I requested a response from a senior manager who was knowledgeable about firm performance. Part II requested a response from a senior manager who was familiar with the operations. Due to the difficulties in managing matched responses, we allowed a single respondent to complete a full questionnaire in the event that it was not possible to recruit two qualified respondents from a firm. In the final sample of 255 responses, 75 were matched responses and 180 were completed by a single respondent.
Case study design
A case study research strategy is most useful when there is a need to observe how a phenomenon emerges in a specific context (Yin, 2009). We adopted a case study approach to understand how CM works in specific business settings and identify technical, organizational, and other contextual aspects relevant to its implementation. We also investigated how these contextual aspects differ depending on several internal and external conditions. Thus, the aim of the case studies was to complement the quantitative analysis and to provide more in-depth insights on the survey results. Furthermore, the case studies were used as a basis to uncover other aspects that can potentially enable or inhibit performance gains that were not included in the quantitative study (Mikalef et al., 2019).
Case selection
Referring to the national and provincial lists of green factories in China, we selected two case companies: Archroma (Tianjin) Ltd. and Rockcheck United Iron & Steel Group Ltd. Both companies are committed to CM. They represent different industry sectors (chemical vs iron & steel), firm sizes (medium vs large), and ownership types (Chinese-foreign joint venture vs private). We believe they are good representatives of Chinese manufacturers for our study focus.
Case data collection
Case data were mainly collected through face-to-face semi-structured interviews. In addition, secondary data from project reports, newsletters and company websites were collected for triangulation to ensure the reliability and validity of our analysis. The purpose of the interviews was to examine the real-world scenarios of CM implementation. The interviewees were invited to elaborate on their firms' specific CM practices, their impacts on performance, and other influential factors. The main interview questions were as follows.
• Are there any circular manufacturing practices implemented in your firm? How have these practices been implemented? What are the impacts of these practices on enterprise environmental and financial performance?
• What is the state of CE culture and integrated management systems in your firm? How do they influence the implementation of circular manufacturing?
• What are the impacts of the industry 4.0 production technologies on environmental and financial performance?
Case data collection took place between September 2021 and May 2022. In total, 15 interviews were conducted with an average length of 45 minutes. Table 3 presents the profile of case study interviewees. All interviews were carried out in Mandarin. Two researchers analyzed the transcribed interview data and met frequently to resolve discrepancies in data analysis. A complete case study draft was checked and approved by both case firms to ensure that there were no misinterpretations.
Non-response bias and common method bias
Non-response bias was assessed by comparing early and late waves of returned questionnaires. We conducted two-tailed t-tests and did not identify statistically significant differences in any of the variables used in the study. Therefore, non-response bias is unlikely to be a concern in our survey data.
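As an illustration of this check, the early-versus-late comparison can be sketched as a two-tailed t-test on one study variable. The data, variable, and sample sizes below are synthetic and hypothetical, not the study's dataset.

```python
# Hedged sketch of a non-response bias check: compare early vs. late waves of
# respondents on one survey variable. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
early = rng.normal(loc=5.0, scale=1.0, size=60)  # first-wave responses
late = rng.normal(loc=5.0, scale=1.0, size=40)   # late-wave responses (non-respondent proxy)

# Two-tailed Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A p-value above the conventional 0.05 threshold for every study variable would indicate no systematic early/late difference, i.e., little evidence of non-response bias.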
We employed several strategies to reduce the possibility and impact of common method bias (CMB) according to the recommendations of Podsakoff et al. (2003, 2012). First, the survey was anonymous, and the respondents filled in the questionnaire privately by themselves. We assured the respondents that their answers would be unidentifiable by individuals or organizations. Second, we made efforts to collect data from two participants per organization as much as possible using a split survey method, as explained above. Third, a number of variations at the construct level besides measurement items helped us mitigate the CMB as well as social desirability concerns. At the construct level, we followed Pullman et al.'s (2009) example to use the firm as a proxy subject for CE practices (Nederhof, 1985).
As mentioned earlier, we attempted a matched response survey. However, most responses (approx. 70%) were received from a single source. Thus, CMB may still be a concern (Guide Jr. and Ketokivi, 2015). In this regard, we performed various tests to detect CMB. Harman's (1976) single-factor test showed the presence of seven distinct factors, whereas the first factor only accounted for 24.12% of the variance. Further, a common latent factor test (Podsakoff et al., 2003, 2012) was performed by introducing a latent variable to the original measurement model. The results indicated that the model fit indices of the original model (i.e., χ2/df = 1.61, CFI = 0.95, and RMSEA = 0.05) and the common latent factor model (i.e., χ2/df = 1.53, CFI = 0.96, and RMSEA = 0.05) were quite similar. Lastly, we performed Widaman's (1985) test using two latent variable models: the first was a trait-only model and the second included a method factor as well as the traits. Based on the CFI change cutoff criterion of 0.01 suggested by Cheung and Rensvold (2002), there was no significant improvement in the model fit indices. These test results indicate that CMB is not a concern in this study.
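A minimal sketch of the logic behind Harman's single-factor test, here approximated with an unrotated principal component analysis over synthetic item responses; the item count and sample size are illustrative assumptions, not the study's measurement model.

```python
# Illustrative Harman's single-factor test: if one factor explains the
# majority of variance across all items, common method bias is a concern.
# The respondent-by-item matrix below is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
items = rng.normal(size=(255, 30))                        # 255 respondents x 30 items
items = (items - items.mean(axis=0)) / items.std(axis=0)  # standardize each item

pca = PCA().fit(items)
first_factor_share = pca.explained_variance_ratio_[0]
print(f"First factor explains {first_factor_share:.1%} of total variance")
```

With uncorrelated synthetic items no single factor dominates; in the study's data the first factor accounted for 24.12%, well below the common 50% rule of thumb.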
Construct validity and reliability
Table 4 shows the results of assessing construct validity and reliability. All the Cronbach's alpha values are above 0.7 and composite reliability (CR) values above 0.6, indicating acceptable internal consistency across construct measures. Convergent validity is first established by examining the factor loadings: all are greater than 0.5 (Hair, 2009). The average variance extracted (AVE) values are all above 0.5, except for IMS. Given that IMS captures four management systems of diverse foci, its AVE value of 0.48, slightly below 0.5, is considered acceptable (Prajogo et al., 2021). Discriminant validity is established as all the square-rooted AVE values are greater than the correlations between constructs (Fornell and Larcker, 1981).
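The reliability and validity statistics reported above follow standard formulas; a small sketch with hypothetical standardized loadings (not the paper's values) shows how CR, AVE, and the Fornell-Larcker check are computed.

```python
# Composite reliability (CR), average variance extracted (AVE), and the
# Fornell-Larcker discriminant-validity check, computed from hypothetical
# standardized factor loadings.
import numpy as np

loadings = {
    "CM": np.array([0.78, 0.81, 0.74, 0.69]),
    "CE_culture": np.array([0.72, 0.80, 0.76]),
}

def composite_reliability(lam):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def ave(lam):
    # AVE = mean of squared standardized loadings
    return (lam ** 2).mean()

for name, lam in loadings.items():
    print(f"{name}: CR = {composite_reliability(lam):.2f}, AVE = {ave(lam):.2f}")

# Fornell-Larcker: sqrt(AVE) must exceed the construct's correlations with others
corr_cm_culture = 0.45  # hypothetical inter-construct correlation
discriminant_ok = np.sqrt(ave(loadings["CM"])) > corr_cm_culture
print("Discriminant validity holds:", discriminant_ok)
```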
Hypothesis testing results
For survey data analysis, we employed covariance-based structural equation modeling (CB-SEM) using IBM® SPSS® Amos version 23. The CB-SEM technique is widely used in organizational and management research, as it allows simultaneous examination of the relationships between unobserved variables. We chose CB-SEM over partial least squares-based structural equation modeling (PLS-SEM) as the former is considered the preferred technique (Guide Jr. and Ketokivi, 2015), especially when its more restrictive assumptions related to data are met (Peng and Lai, 2012). Since our research model is grounded in well-established theory and seeks theory testing, the sample size is relatively large (>200), and model complexity is considerably low with normally distributed data, CB-SEM is an appropriate data analysis technique for this research as per Peng and Lai's (2012) guidelines.
Hypotheses H1 and H2 on the antecedent roles of CE culture (β = 0.33 at p < 0.01) and IMS (β = 0.39 at p < 0.01) in CM adoption are supported. Similarly, the direct effects of CM adoption on financial performance (H3) and environmental performance (H4) are also supported (β = 0.34 and 0.40 respectively at p < 0.01). These results are summarized in Fig. 2.
We employed the hierarchical regression analysis method to test the moderating effect of I4.0 production technologies. As shown in Fig. 3 and Table 5, our results do not show any statistical support for the moderating role of I4.0 production technologies on the relationship between CM and financial performance (H5a) or between CM and environmental performance (H5b). Given this unexpected finding, we further tested the direct effects of I4.0 production technologies on firm performance and found a statistically positive effect on financial performance but not on environmental performance.
Endogeneity test
Reverse causality can be a serious threat to the validity of the theorized directional relationships between variables. In this research, there is a possibility that better financial performance drives CM adoption. Therefore, we conducted a two-stage least squares (2SLS) regression analysis to assess endogeneity (Lu et al., 2018). In our data set, we found coercive pressure (DiMaggio and Powell, 1983) to be a suitable instrumental variable because it is strongly correlated with the suspected explanatory variable but not with the disturbance term (Rossi, 2014).
In the first stage of the 2SLS test, we find a statistically significant correlation between coercive pressure and CM (β = 0.29 at p < 0.01).
The second stage tests the effect of the predicted values from stage 1 on financial performance. The results are not statistically significant (p > 0.10). Following the Durbin-Wu-Hausman test procedures, we tested whether the error terms from the stage-1 model were correlated with those in the original model (Cameron and Trivedi, 2010). The results (p = 0.97) do not suggest any serious endogeneity problem. Appendix B presents the details of the 2SLS test results.
Archroma case overview
Archroma is a global leader in color and specialty chemicals serving the branded and performance textiles, packaging and paper, coatings, adhesives, and sealants markets. The headquarters in Switzerland oversees operations in over 100 countries involving approximately 2800 employees and 25 production sites. In 2018, Archroma promulgated its goal to achieve carbon neutrality in 2023 (37 years ahead of the Chinese government's 2060 deadline for the nation). In 2020, Archroma signed the United Nations Global Compact Statement from Business Leaders for Renewed Global Cooperation and the Global Coalition Call for Sustainability. In 2021, it was awarded the EcoVadis Platinum rating for its corporate social responsibility (CSR) performance.
As a subsidiary manufacturer in China, Archroma (Tianjin) Ltd. operates according to globally standardized management systems including ISO 9001, ISO 14001, ISO 45001, and ISO 50001. The company has established a sustainable management system and continually innovated in CM and related I4.0 production technologies. For these reasons, it was awarded the provincial-level "Green Factory" honor in 2020.
Archroma case study findings
We conducted eight interviews with Archroma (Tianjin), involving six senior managers and two senior executives in the production, environment, quality and purchasing departments. All interviewees agreed on the need for implementing CM. The general manager of Archroma (Tianjin) said, "Archroma is not only a leader in the industry market, but also a leader in circularity and sustainability, which is our nature, the company's mission and the requirement of CSR". Consistent with our survey results, all interviewees indicated that their CE culture and IMS have positive effects on their CM implementation.
Archroma (Tianjin) has been in transition from a linear to a circular manufacturing system. The firm has embraced CM in both its product design and production departments. In practicing CPD, the company continuously develops products that are safe and designed to reduce natural resource consumption, thereby decreasing its environmental footprint. For example, the fluorine-based "C-8" chemical product is widely used in the high-end textiles market due to its excellent waterproof characteristic. However, the biodegradation cycle of this product is very long. Therefore, Archroma introduced C-6 products (Nuva® N) to substitute C-8 and gave up the highly lucrative C-8 market. The company also developed a fluorine-free waterproof product range (Smartrepel®) to offer an even more ecological option. Additionally, to avoid the use of the carcinogenic chemical CAS 101-77-9, Archroma introduced the new product Cartasol® Yellow M-GLC liq to substitute Cartasol® Yellow M-GLA liq.
In line with the "12 Principles of Green Chemistry", Archroma has improved and innovated the manufacturing processes of azo-dyes, chemical additives, and "Nuva® N" waterproof products. The new processes minimized the use of energy, resources and chemicals, reduced waste and greenhouse gas emissions, and avoided unintended contaminants of raw materials and intermediates in the final product.
The environment department manager stated, "in addition to 'Safe' and 'Efficient', Archroma's third innovation pillar is 'Enhanced' performance and sustainability". These innovations in CM have a significant indirect impact on enhancing product value throughout the supply chain. The products with high value are more durable (and more appreciated by their users) than the less expensive ones. The general manager said, "The Archroma systems and solutions create value down the chain and deliver enhanced value to our customers, which guarantees the sustainability of the company's profit".
Rockcheck case overview
As an integrated iron and steel manufacturing enterprise, Rockcheck was founded in 2001. It is a subsidiary of Rockcheck Group Co., Ltd., which ranked No. 115 among the top 500 Chinese manufacturers and No. 99 among the top 500 Chinese private enterprises in 2021. It has around 4000 employees.
With substantial experience in implementing I4.0 production technologies, Rockcheck has established strict environmental principles and advanced management systems such as ISO 9001, ISO 14001, GB/T 28001, and GB/T 23331. Rockcheck was awarded the national "Green Factory" honor in 2019 and passed the ultra-low-emissions environmental protection audit in 2021. Rockcheck set a goal to achieve peak carbon dioxide emissions in 2023 (seven years ahead of the nation's 2030 target).
Rockcheck case study findings
We conducted seven interviews at Rockcheck, including one executive (vice president) and six senior managers from six departments: production, purchasing, marketing, environment, energy, and culture & publicity. They stated that the company was confronted with many challenges and pressures in managing its environmental performance. The iron and steel sector, a significant contributor to pollution and greenhouse gas emissions, is under strict regulations on annual output and emission levels set by the national government in China. The sector in China is classified into three levels (A, B and C) based on the technologies used in manufacturing processes and on environmental control and performance. Only A-level companies are allowed continuous production under severe air conditions such as fog and haze. Therefore, to ensure sustainable and undisrupted production, Rockcheck, as a B-level company, has implemented many initiatives on CM and environmental protection, aspiring to an upgrade to A-level.
Since 2001, Rockcheck has further emphasized the core management principle of "green driven" and has invested over 5.5 billion RMB in CM and environmental initiatives. "There is no budget limitation on environmental investment and expenditure", said the VP and one senior manager from the environment department. Interviewees from the environment and production departments affirmed that their CE culture significantly promoted CM implementation and that their IMS contributed to ensuring process control of CM and its environmental performance.
As a principle with top priority, environmental protection has been implemented throughout CPD, production process reengineering and operations innovation. Technology innovation and CPD have enabled Rockcheck to increase the use of recycled scrap steel for steel production, which is not only environmentally friendly but also economically profitable. Many CP projects reduced gas emissions and waste discharge, while others recovered value from coal gas, heat, steam, water discharge and solid waste. For example, using the coal gas and heat generated in production, the self-power-generation project can meet around 60 percent of the company's total electricity needs. Rockcheck also achieved 100% comprehensive utilization of solid waste. Furthermore, Rockcheck has been awarded many honors for its zero-wastewater-discharge initiative, in place since 2008, which meets all the company's water demand for manufacturing by purifying wastewater from itself and the local communities.
Discussion of the study results and findings
Our survey results confirm that CE culture and IMS are major antecedents of CM implementation. Similarly, CE culture and IMS strengthened CM implementation in both case companies. These findings are consistent with previous studies suggesting that corporate cultures strongly influence innovations (Wang et al., 2021) and that firms with strong sustainability cultures are likely to adopt sustainability practices (Marshall et al., 2015; Pagell and Wu, 2009). Similarly, IMS have been reported to contribute significantly to operational improvements and superior performance (Porter, 1996; Villena et al., 2021). Both case companies have a strong cultural orientation toward circularity and sustainability principles, and both are committed to taking further steps (including CM initiatives) to comply with local regulations and customers' requirements relevant to environmental protection and CE. IMS, especially ISO 14001, play a critical role in CM activities and ensure CM operations are performed in line with international standards and customers' requests, besides satisfying legal requirements. Hence, IMS serve not only as an enabler of CM, but also as an assurance and subsequent control for CM implementation.
Results of the survey study affirm that CM improves long-term financial performance apart from environmental benefits. Likewise, interview participants from both case companies concur that CM implementation has led their firms to improvements in environmental and financial performance. This is a significant finding because previous studies have reported contradicting findings on the link between CE practices and economic performance. As mentioned earlier, Zhu et al. (2010, 2011) reported a positive link, but Genovese et al. (2017) and Nasir et al. (2017) questioned the economic viability of CE implementation due to the required upfront investments. Our mixed-methods approach provides a holistic understanding that reconciles the seeming contradiction: CM initiatives may indeed have a negative impact on short-term economic performance owing to the substantial initial investments required for their implementation. However, CM initiatives offer long-term financial benefits resulting from reduced energy cost, materials reuse/recycling, and marketing advantages. With regard to the economic performance of CM, Archroma's top management shared that their company not only complies with local environmental regulations, but also ensures consistency of practice within the global framework, which inevitably leads to the discontinuation or substitution of some products even though they are still profitable in the local market. The general manager said, "Archroma's commercial strategy is focusing on the promotion of more sustainable solutions, which accounted for 51% of Archroma's sales in FY 2022. We believe the number will be boosted significantly in the years to come." Similarly, Rockcheck's top management believes that their CM projects do not undermine their cost competitiveness since all companies in the iron and steel sector in China must invest in CM to meet the increasingly stricter environmental standards. Furthermore, both case companies confirmed that investing in CM brought benefits in terms of marketing and customer retention. A senior manager from the marketing department said, "The CM initiatives help with customer retention and keeping our market share, which are crucial."
Moreover, our survey study findings suggest that I4.0 production technologies do not moderate the relationship between CM and firm performance, although they have direct and positive effects on financial performance. This finding corroborates Lin et al.'s (2019) analysis, based on secondary data, that I4.0 significantly improves the financial performance of manufacturers in China. Our case study suggests that both companies had widely adopted I4.0 production technologies such as intelligent manufacturing, IoT and big data analytics, intending to achieve process automation and operational excellence (e.g., process simplification, operations optimization and productivity improvement) and to efficiently monitor and control energy use, waste, emissions and effluent. Such technologies enabled higher utilization of materials and energy besides reducing environmental accidents and improving environmental performance by strengthening environmental supervision. Moreover, both firms claimed to have achieved a good return on investment (ROI) in I4.0 production technologies due to productivity gains. In addition, the interviewees indicated that the effects of technology investments on performance decrease progressively, i.e., the marginal benefit is diminishing. Discussions and deliberations with the case study participants revealed that performance was mainly driven by practices, not technologies, which sheds light on why the I4.0 production technologies did not show a moderating effect on the CM-to-firm environmental and financial performance relationship. This aligns with Tortorella et al.'s (2019) finding that the moderating effect of I4.0 on the lean production-to-operational performance relationship is mixed, contingent upon the employed technology types and process practices. Our results support their argument that purely technological adoption does not guarantee better performance. Firms employing the same CM practices are likely to achieve similar performance regardless of their differences in technology adoption. Therefore, organizations should focus on technologies and practices that aid systematic process improvements (Dalenogare et al., 2018).
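The moderation test discussed above can be illustrated with a minimal hierarchical-regression sketch on simulated data. All variable names and effect sizes here are hypothetical, not the study's actual measures: moderation would be supported only if adding the mean-centered CM × I4.0 interaction term meaningfully raised R² over the main-effects model.

```python
import numpy as np

# Simulated firms (hypothetical data, not the survey sample)
rng = np.random.default_rng(42)
n = 255
cm = rng.normal(0, 1, n)    # CM practice adoption score
i40 = rng.normal(0, 1, n)   # I4.0 production technology score
# Performance driven by practices and directly by I4.0,
# with no built-in interaction (mirroring the paper's null moderation result).
perf = 0.5 * cm + 0.3 * i40 + rng.normal(0, 1, n)

def r_squared(X, y):
    """OLS fit via least squares; returns R^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

ones = np.ones(n)
# Step 1: main effects only
r2_main = r_squared(np.column_stack([ones, cm, i40]), perf)
# Step 2: add the mean-centered interaction term (the moderation test)
inter = (cm - cm.mean()) * (i40 - i40.mean())
r2_full = r_squared(np.column_stack([ones, cm, i40, inter]), perf)

# Moderation is supported only if step 2 meaningfully improves fit
# (formally, via an F-test on the R^2 change).
print(round(r2_main, 3), round(r2_full, 3), round(r2_full - r2_main, 4))
```

With no true interaction in the data-generating process, the R² gain from step 2 stays negligible, which is the pattern behind the paper's finding that I4.0 technologies do not moderate the CM-to-performance link.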
Theoretical contributions
This study offers two important theoretical contributions. First, this study theorizes what constitutes CM. The term CM has been increasingly used but not clearly defined, which undermines the academic rigor of further studies on the topic. This research establishes CM as an extension of the cleaner production concept according to the design thinking of the CE. It operationalizes CM as a strategy that integrates CPD and CP. Our survey results show that it is more appropriate to model CM as a second-order construct instead of treating CPD and CP separately for understanding performance implications. Our case studies provide concrete evidence that the joint exercise of CPD and CP was instrumental in improving circularity in manufacturing systems.
Second, this study provides strong empirical evidence on the explanatory power of the PBV (Bromiley and Rau, 2014, 2016) in the CE context. Our survey study demonstrates that the PBV is a useful theoretical lens for explaining the performance outcomes of CM practices that are imitable, available in the public domain, and transferable across firms. The case studies further establish that it is indeed practices that drive performance, evident not only in the direct effect of CM, but also in the lack of a moderating effect of I4.0 production technologies on the practice-to-performance relationship.
Practical implications
Our study findings have important practical implications. First, based on our study results, we suggest that manufacturing firms continuously nourish a CE culture and IMS to promote and strengthen CM adoption. Although organizational culture is intangible, it plays a fundamental role in shaping employee values and decision-making behaviors related to sustainability practices (Pagell and Wu, 2009). In the CE context, Burke et al.'s (2021) empirical study suggests that sustainable organizational values and a CE vision form the cornerstone of a successful CE implementation. Since leadership plays a crucial role in driving sustainability (Jia et al., 2019), firms' top management should actively cultivate a CE culture for CM adoption. On the other hand, IMS use institutional procedures to ensure CE principles are incorporated into manufacturing systems. They help to sustain CM adoption, which will further contribute to operational improvements and superior performance (Villena et al., 2021). In this regard, we advise managers to maintain strict compliance with the respective IMS standards currently in place, besides preparing for the implementation of the forthcoming ISO standards for CE, which are expected to be published by early 2024 (ISO, 2022).
Second, for the development of CM strategy, our conceptualization of CM provides timely and practical guidance to practitioners. In this regard, we advise practitioners to develop inter-functional coordination between the product design function and the production function to ensure CM is exercised as a uniform strategy. Previous studies have also stressed the importance of inter-functional coordination for CE; see, for example, Burke et al. (2021). Additionally, external integration across supply chain actors to enable the take-back of end-of-use products for value recovery (i.e., to realize circularity of materials in the manufacturing supply chain through remanufacturing, refurbishment, reuse of parts/components/materials, and recycling) would also be required. Luthra et al. (2022) provide a good example of external integration for circularity.
Third, practitioners should confidently implement CM strategies, relying on our empirical results which confirm positive environmental and financial performance outcomes. The implementation of CM comes with a potentially high cost in the short term, as is the case with CE in general (Geng et al., 2009). However, it can reduce energy costs by improving energy efficiency and recovering energy from waste. It can also reduce material costs through reuse and recycling. Furthermore, there are marketing advantages associated with CM implementation: demonstrating a firm's commitment to environmental protection enhances firm reputation and helps retain existing customers and attract new ones. In the Chinese context, CM implementation in the most polluting industries like steelmaking will help reduce forced shutdowns, which are very costly. Given that many manufacturers struggle with the required upfront investments, the government should consider financial aid in various forms, including interest-free loans, environmental subsidies, and tax benefits, to support CM implementation.
Fourth, firms should exercise discretion in their adoption of advanced technologies, given that our study results strongly favor practices and processes as the main drivers of firm performance. Technologies often play a role in process improvements, but their impacts on performance are contingent upon many factors, including implementation cost, technology type, and process characteristics (Tortorella et al., 2019). Therefore, we recommend that firms tailor technology solutions to their unique situations to ensure a strong ROI and performance improvements. In addition, manufacturers should be prepared for the fact that it takes time to fully exploit the potential benefits of I4.0 production technologies because they increase technical complexity and require employee training in new knowledge and advanced skills. Hence, I4.0 adoption can be demanding of financial resources (Kiel et al., 2017), particularly at the early adoption stage. Therefore, firms should strategically manage the short-term costs and long-term productivity gains from potential technology adoptions.
Last, the government should continue to enforce stringent environmental regulations to promote CM implementation. Our case studies reveal that environmental regulations level the playing field for all manufacturers in the same industry to embrace CM initiatives: the early movers did not have to worry about being economically disadvantaged because all their domestic competitors would have to incur a similar cost for comparable environmental initiatives. Therefore, environmental legislation and its strict enforcement are crucial for driving CM implementation. From a global perspective, there exist great disparities in environmental laws and their enforcement across countries. Policymakers should not be shortsighted by sacrificing environmental protection to promote economic development, noting that enforcing CM enhances the long-term financial performance of the industry.
Conclusion
While many manufacturers have made efforts to embrace CE to improve sustainability, it remains unclear what constitutes CM and how to operationalize it. The term CM has been increasingly used but not clearly understood, which undermines the significance of this potentially high-impact research area. Such a knowledge gap hinders CE research and practice. Earlier studies suggest that sustainability practices have a positive impact on both environmental and financial performance. However, in the CE context, there exist contradicting findings on the economic performance of CE implementation. This research focuses on CM, a key component of CE, to investigate its antecedents and performance outcomes, as well as the moderating role of I4.0 production technologies on the CM-to-firm environmental and financial performance relationship. Our theoretical lens, the PBV, is rooted in the operations management discipline and precisely fits the need to examine performance implications of adopting practices that are replicable across firms.
This study employs a rigorous mixed-methods approach. We first conducted a large-scale survey among Chinese manufacturers and then two representative case studies for the triangulation and interpretation of the survey results. The research is believed to be the first attempt to theorize and operationalize CM by extending the well-established CP concept according to the design thinking of CE. It provides empirical evidence that CE culture and IMS were antecedents to CM adoption. It confirms that CM adoption not only improved environmental performance but also long-term financial performance. However, I4.0 production technologies did not moderate the CM-to-firm environmental and financial performance relationship, although they did lead to better financial performance. Both the survey study and the case studies proved the explanatory power of the PBV in the CE context. Based on the study findings, we derived several important practical implications for policymakers and practitioners.
Our study has several limitations which can be overcome in future research. First, this research applied the PBV to study environmental and financial performance. Future research may consider other performance aspects, for example, social sustainability performance. Second, our survey collected cross-sectional data. Future studies may attempt to collect longitudinal data to provide a more holistic view of developments in the related research phenomenon. Last, our study context is China. It will be meaningful to conduct comparative studies in other countries.
Appendix (continued). Variables and their measures:
IPT2: Remote monitoring and control of production processes through systems such as Manufacturing Execution Systems (MES) and Supervisory Control and Data Acquisition (SCADA).
IPT3: Integrated systems for product development and product manufacturing.
IPT4: Simulations/analysis of virtual models (finite elements, computational fluid dynamics, etc.) for product design and commissioning.
IPT5: Collection, processing and analysis of large quantities of production process data (Big Data).
Financial Performance (FP) (1: Substantially lower - 7: Substantially higher).
In total, 360 completed questionnaires were returned, i.e., a response rate of 38.71%. The lead researcher scrutinized all the responses to ensure data quality. A large proportion of responses (n = 105) were rejected based on missing data, inattentiveness to scale variations, and similarity in response patterns. The final sample included 255 responses; the sample demographics are provided in Table 2.
Table 2. Sample demographics for the survey.
Table 3. Profile of the case study interviewees.
Table 4. Construct analysis.
Table 5. Hierarchical regression results.
Differences between men and women in dietary intakes and metabolic profile in response to a 12-week nutritional intervention promoting the Mediterranean diet
Few studies have compared men and women in response to nutritional interventions but none has assessed differences between men and women in the response to a nutritional intervention programme based on the self-determination theory (SDT) and using the Mediterranean diet (MedDiet) as a model of healthy eating, in a context of CVD prevention and within a non-Mediterranean population. The present study aimed to document differences between men and women in changes in dietary, anthropometric and metabolic variables, in response to a nutritional intervention programme promoting the adoption of the MedDiet and based on the SDT. A total of sixty-four men and fifty-nine premenopausal women presenting risk factors for CVD were recruited through different media advertisements in the Québec City Metropolitan area (Canada). The 12-week nutritional programme used a motivational interviewing approach and included individual and group sessions. A validated FFQ was administered to evaluate dietary intakes from which a Mediterranean score (Medscore) was derived. Both men and women significantly increased their Medscore in response to the intervention (P < 0·0001). Men showed a significantly greater decrease in red and processed meat (−0·4 (95 % CI −0·7, −0·1) portions per d) and a greater increase in fruit (0·9 (95 % CI 0·2, 1·6) portions per d) intakes than women. Significant decreases were observed for BMI and waist circumference in both men and women (P ≤ 0·04). Significant greater decreases were found for total cholesterol (total-C):HDL-cholesterol (HDL-C) (−0·2; 95 % CI −0·4, −0·03) and TAG:HDL-C (−0·2; 95 % CI −0·4, −0·04) ratios in men than in women. When adjusting for the baseline value of the response variable, differences between men and women became non-significant for red and processed meat and fruit intakes whereas significant differences between men and women (i.e. 
larger increases in men than women) were observed for legumes, nuts and seeds (0·6 (95 % CI 0·2, 1·0) portions per d) and whole-grain products (0·5 (95 % CI 0·01, 1·0) portions per d) intakes. For metabolic variables, differences between men and women became non-significant for total-C:HDL-C and TAG:HDL-C ratios when adjusted for the baseline value of the response variable. The present results suggest that the nutritional intervention promoting the adoption of the Mediterranean diet and based on the SDT led to greater improvements in dietary intakes in men than in women, which appear to have contributed to beneficial anthropometric and metabolic changes, more particularly in men. However, the more deteriorated metabolic profile found in men at baseline seems to contribute to a large extent to the more beneficial changes in CVD risk factors observed in men as compared with women.
Adoption of healthy eating habits is encouraged in the context of chronic disease prevention and the Mediterranean diet has been ranked as one of the best models to provide protection against CVD (1)(2)(3) . The Mediterranean diet pattern is characterised by a high intake of vegetables, fruits, legumes, nuts, cereals (mainly unrefined), a high intake of olive oil, a low-to-moderate intake of dairy products, a low intake of meat and poultry, and a regular but moderate intake of alcohol, primarily in the form of wine and generally during meals (3) .
Studying differences between men and women in response to interventions aimed at preventing or treating diseases is absolutely essential for providing optimal care to men and women. Without such studies comparing men and women, we would not know, for example, that usage of some medications for preventing or treating CVD are efficacious in men but not appropriate in women (4,5) . At this point it is essential to underline that differences observed between men and women can be explained by both sex and gender differences. Sex differences refer to biological and physiological characteristics that distinguish males from females while gender is described as socially constructed roles, relationships, behaviours, relative to power, and other traits that societies ascribe to men and women (6) . When studying differences between men and women in response to nutritional interventions, both sex and gender differences can be involved to a different degree depending upon the type of intervention and the two constructs have been suggested to be closely interrelated and difficult to dissociate (6) .
Some studies have documented differences between men and women in the context of controlled studies where all food and drinks are provided. In such a context, differences observed between men and women in response to the intervention refer more to sex than to gender differences (7)(8)(9) . In fact, in those types of studies the impact of the diet on metabolic variables measured can be influenced by sex-related factors such as sex hormones (10) and is not likely to be influenced by factors such as diet adherence that is in turn modulated by gender-related factors. A few studies have been performed to compare men and women in response to diet manipulations performed in controlled conditions. Accordingly, a greater decrease has been reported in LDL-cholesterol (LDL-C) levels in response to a low-SFA diet in men than in women (8,9,11) and a recent study published by our team showed decreases in insulin levels in men but not in women, in response to a 4-week Mediterranean diet (7) . On the other hand, in nutritional interventions during which subjects continue to buy their food, cook their meals and make decisions about what they eat, the differences observed between men and women cannot be considered as sex differences since gender-specific factors such as attitudes, beliefs and motivation towards food regulation are influencing adherence to dietary recommendations and therefore health benefits that can be obtained from it. This is why when referring to these types of studies the term gender differences is more appropriate.
Only a few studies have been performed to assess gender differences in response to educational nutrition programme promoting the Mediterranean diet. Among a Mediterranean population, a higher success in improving adherence to the Mediterranean diet in men than women was reported after 1 year in the PREDIMED trial, which includes Spanish men and women presenting high risk for CVD (12) . Among a non-Mediterranean population, a study has reported the impact of a Mediterranean diet education programme in hypercholesterolaemic men and women and showed that whereas women improved their dietary intakes in accordance with the education programme and significantly decreased their total cholesterol (total-C) levels, no such changes in serum total-C were observed in men (13) .
Changing eating habits represents a major challenge for many individuals (14) and evidence indicates that the extent to which health professionals involve their clients in the decisionmaking process may influence adherence to treatment (15) . In this regard, the self-determination theory (SDT) suggests that stimulating optimal quality of motivation could help individuals to evolve toward healthier eating habits (16) . To the best of our knowledge, no study has assessed differences between men and women in the response to a nutritional intervention programme based on the SDT and using the Mediterranean diet as a model of healthy eating, in a context of CVD prevention and within a non-Mediterranean population. Therefore, the objective of the present study was to determine differences between men and women in changes in dietary, anthropometric and metabolic variables, in response to a 12-week nutritional intervention programme promoting the adoption of the Mediterranean diet, based on the SDT, in Canadian men and women presenting risk factors for CVD.
Participants
The present study was conducted among a sample of sixty-four men and fifty-nine premenopausal women aged between 25 and 50 years, recruited through different media advertisements in the Québec City Metropolitan area (Canada). In women, a follicle-stimulating hormone (FSH) measurement was performed if needed (for example, when women presented menstrual irregularities) to confirm premenopausal status (FSH < 20 IU/l) (17). Men and women had to present slightly elevated LDL-C concentrations (between 3·0 and 4·9 mmol/l) (18) or a total-C:HDL-cholesterol (HDL-C) ratio ≥ 5·0, and at least one of the four following criteria of the metabolic syndrome (19): (1) TAG concentrations ≥ 1·7 mmol/l; (2) fasting glycaemia between 6·1 and 6·9 mmol/l; (3) blood pressure measurements ≥ 130/85 mmHg; and (4) waist circumference ≥ 80 cm in women and ≥ 94 cm in men (20). Participants also had to have a stable body weight (± 2·5 kg) for a minimum of 3 months before the beginning of the study and to be involved in food purchases and/or preparation at home. We excluded men and women who had had cardiovascular events and who used medication that could affect the dependent variables under study, i.e. medication for hypertension, dyslipidaemia and diabetes (type 1 and type 2). Pregnant women, smokers, participants with a history of alcoholism or with a high Mediterranean score (Medscore > 29; i.e. a food pattern already highly concordant with the Mediterranean diet) (21) were also excluded. The present study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the Laval University Research Ethics Committee on human experimentation. All subjects voluntarily agreed to participate in the research project and written informed consent was obtained from all men and women before their participation in the study. This clinical trial was registered at www.clinicaltrials.gov as NCT01852721.
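As a reading aid, the eligibility screen above can be sketched as a small function. The dictionary keys, units, and the helper itself are hypothetical illustrations, not code from the study.

```python
# Hypothetical sketch of the eligibility screen described above; field names
# and the helper are illustrative, not from the study's actual tooling.
def is_eligible(p: dict) -> bool:
    """True if a participant meets the lipid criterion plus at least one
    metabolic syndrome criterion, and is not already highly Mediterranean."""
    lipid_ok = (3.0 <= p["ldl_c"] <= 4.9) or (p["total_c"] / p["hdl_c"] >= 5.0)

    waist_cutoff = 80 if p["sex"] == "F" else 94  # cm
    metsyn_criteria = [
        p["tag"] >= 1.7,                      # TAG, mmol/l
        6.1 <= p["glycaemia"] <= 6.9,         # fasting glycaemia, mmol/l
        p["sbp"] >= 130 or p["dbp"] >= 85,    # blood pressure, mmHg
        p["waist"] >= waist_cutoff,           # waist circumference, cm
    ]
    medscore_ok = p["medscore"] <= 29         # Medscore > 29 was excluded
    return lipid_ok and any(metsyn_criteria) and medscore_ok

# Example (hypothetical) participant
p = {"sex": "M", "ldl_c": 3.5, "total_c": 5.8, "hdl_c": 1.2, "tag": 1.9,
     "glycaemia": 5.4, "sbp": 120, "dbp": 78, "waist": 98, "medscore": 22}
print(is_eligible(p))  # True: LDL-C in range, TAG and waist criteria met
```

Note that the screen is a conjunction of a lipid condition with a disjunction over the four metabolic syndrome criteria; failing the lipid condition alone makes a participant ineligible regardless of the other values.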
Study design
The 12-week nutritional programme was based on the SDT and used a motivational interviewing approach. The SDT relies on the quality of the motivation that regulates behaviours, which lies on a continuum from lower to higher self-determined motivation forms (extending from amotivation to intrinsic motivation) (22). The SDT also postulates that the key component for the development of intrinsic motivation is the satisfaction of basic psychological needs, which are autonomy, competence and relatedness (22). The study was conducted in five phases (spanning from January 2010 to November 2012) and the nutritional intervention included three group sessions (with ten to fifteen individuals each), three individual sessions and four follow-up telephone calls with a registered dietitian (Fig. 1). Three registered dietitians were trained to provide a standardised intervention and participants always met with the same dietitian during individual sessions. The first group session was a lecture, always provided by the same dietitian and aimed at explaining the principles of the traditional Mediterranean diet (length: 2·5 h; thirteen to twenty-five participants per group). At week four, men and women actively participated in a 3 h Mediterranean cooking lesson during which they had to cook a Mediterranean meal (eight to fourteen participants per group). At week eight, they shared a 3 h Mediterranean potluck dinner aimed at discussing barriers met in adopting the dietary recommendations since the beginning of the intervention (five to twelve participants per group). Individual counselling took place at weeks 1, 5 and 10 and lasted between 45 min and 1 h for each appointment. Individual follow-up telephone calls took place at weeks 3, 6, 9 and 12, and lasted about 20-30 min for each telephone call.
The main objective of the individual counselling and follow-up telephone calls was to assess dietary changes and to determine progressive personal goals aimed at improving adherence to Mediterranean diet principles. Different tools, such as the decisional balance and the action plan, congruent with the motivational interviewing approach, were used during the individual sessions to formulate dietary objectives while increasing self-determined motivation. In accordance with the SDT (22), basic psychological needs (i.e. autonomy, competence and relatedness) were promoted during the nutritional intervention via the motivational interviewing approach in order to increase self-determined motivation. More specifically, the autonomy and competence of men and women were promoted by the dietitian during individual sessions, i.e. by supporting them in their decision-making process about dietary changes and potential strategies to achieve and maintain these changes, but also during the group sessions by improving their cooking skills and knowledge related to food and nutrition. Therefore, the dietitian had a client-centred approach and put no pressure on participants about the type of dietary objectives to be chosen. In addition, no emphasis was put on body-weight control. Men and women were encouraged to maintain dietary changes in an autonomous way at the end of the nutritional programme and there was no additional contact with the dietitian after the end of the 12-week intervention.
Measurements of dependent variables
All measurements were performed before (time = 0) and after the 12-week nutritional intervention programme (time = 12 weeks), except for the perceived adherence to the Mediterranean diet which was assessed only at the end of the intervention (time = 12 weeks).
Dietary variables. A validated FFQ (23) was administered by a registered dietitian. The FFQ is based on typical foods available in Québec. It contains ninety-one items and thirty-three subquestions. Participants were questioned about the frequency of intake of different foods and drinks during the last month and could report the frequency of these intakes in terms of day, week or month. A Medscore (21) was calculated based on the FFQ and allowed to assess the level of adherence to the Mediterranean food pattern. A partial score varying from 0 to 4 is attributed to each of the eleven components of the Mediterranean pyramid. The Medscore could therefore vary between 0 and 44 points. Components of the Medscore are: grains (whole and refined); vegetables (whole and juices); fruits (whole and juices); legumes, nuts and seeds; olive oil (including olives and rapeseed oil); dairy products; fish (including seafoods); poultry; eggs; sweets and red meat/processed meat. As previously described (21) , a high consumption of food groups promoted by the Mediterranean diet (grains, vegetables, fruits, legumes, nuts and seeds, olive oil and fish) contributed to increase the Medscore, whereas a high consumption of food groups less concordant with the Mediterranean diet (sweets and red meat/processed meat) contributed to decrease the Medscore. Moreover, a moderate consumption of dairy products, poultry and eggs obtained the maximum possible score for the respective component. A maximum of one point was respectively attributed to refined grains, vegetables juice, fruit juice consumption and intake of rapeseed oil or margarine made from olive or rapeseed oil. Macronutrient and micronutrient intakes obtained from the FFQ were evaluated using the Nutrition Data System for Research software (NDS-R, version 4.03_31; Nutrition Coordinating Center, University of Minnesota).
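The Medscore construction above (eleven components, each scored 0-4, total 0-44, with positively scored, negatively scored and "moderation-rewarded" components) can be sketched as follows. The serving cut-offs and component intake units here are hypothetical, chosen for illustration only; the paper does not publish the exact thresholds.

```python
# Illustrative Medscore sketch: 11 components, each scored 0-4, total 0-44.
# All cut-offs below are hypothetical, not taken from the published score.

def partial_score(servings, cutoffs, reverse=False, moderate=False):
    """Map daily servings to a 0-4 partial score.

    cutoffs : 4 ascending thresholds delimiting score bands 0..4.
    reverse : higher intake lowers the score (sweets, red/processed meat).
    moderate: intake in the middle band scores highest (dairy, poultry, eggs).
    """
    band = sum(servings >= c for c in cutoffs)     # 0..4
    if moderate:
        return 4 - 2 * abs(band - 2)               # peaks at the middle band
    return 4 - band if reverse else band

def medscore(intakes):
    """intakes: dict of component name -> servings/day (hypothetical units)."""
    rules = {
        "grains":             dict(cutoffs=(1, 2, 4, 6)),
        "vegetables":         dict(cutoffs=(1, 2, 3, 4)),
        "fruits":             dict(cutoffs=(1, 2, 3, 4)),
        "legumes_nuts_seeds": dict(cutoffs=(0.25, 0.5, 0.75, 1)),
        "olive_oil":          dict(cutoffs=(0.5, 1, 1.5, 2)),
        "fish":               dict(cutoffs=(0.2, 0.4, 0.6, 0.8)),
        "dairy":              dict(cutoffs=(0.5, 1.5, 3, 5), moderate=True),
        "poultry":            dict(cutoffs=(0.1, 0.3, 0.6, 1), moderate=True),
        "eggs":               dict(cutoffs=(0.1, 0.3, 0.6, 1), moderate=True),
        "sweets":             dict(cutoffs=(0.5, 1, 2, 3), reverse=True),
        "red_processed_meat": dict(cutoffs=(0.25, 0.5, 1, 1.5), reverse=True),
    }
    return sum(partial_score(intakes[k], **v) for k, v in rules.items())
```

A diet matching every component's best band would score the maximum of 44, mirroring how adherence to the Mediterranean pattern accumulates across components.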
Anthropometric and metabolic profile. According to standardised procedures (24) height was measured to the nearest millimetre with a stadiometer (Seca 222 Mechanical Telescopic Stadiometer), body weight was measured to the nearest 0·1 kg on a calibrated balance (BWB-800S Digital scale; Tanita), and BMI was then calculated. Waist circumference measure was also taken to the nearest millimetre according to standardised procedures (24) . Body fat percentage was estimated using the Tanita body-fat analyser, with the accuracy level being ± 5 % of the institutional standard of body composition analysis (dual-energy X-ray absorptiometry) and repeatable to within ± 1 % variation when used under consistent conditions (Tanita-BC-418 body-fat analyser; Tanita Corp.). Blood samples were collected after a 12 h overnight fast. Total-C, HDL-C and TAG concentrations in serum were measured using commercial reagents on a Modular P chemistry analyser (with 0·8 and 1·7 % of within-and between-assay precision, respectively) (Roche Diagnostics). Serum LDL-C concentrations were obtained by calculation using the Friedewald equation (25) and apoB concentrations by immunoturbidimetry (with < 1·5 and < 2·5 % of within-and between-assay precision, respectively) (Roche Diagnostics). Plasma glucose concentrations were measured with the hexokinase enzymic method (with 0·7 and < 1·2 % of withinand between-assay precision, respectively) and plasma insulin concentrations by electrochemiluminescence (with < 2·0 and < 2·8 % of within-and between-assay precision, respectively) (Roche Diagnostics). Systolic and diastolic blood pressures were measured on the right arm and using an automated blood pressure monitor (BPM 300-BpTRU: Vital Signs Monitor) after a 10 min rest in the sitting position. Measurement of blood pressure was computed as a mean of three readings.
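The Friedewald calculation (25) used above for serum LDL-C can be written out explicitly. In mmol/l the equation is LDL-C = total-C − HDL-C − TAG/2·2 (TAG/5 when working in mg/dl), and it is conventionally considered invalid at high TAG concentrations; this is a generic sketch of the formula, not the study's analysis code.

```python
def friedewald_ldl(total_c, hdl_c, tag, units="mmol/L"):
    """Estimate LDL-cholesterol via the Friedewald equation.

    LDL-C = total-C - HDL-C - TAG/2.2  (mmol/L; TAG/5 when in mg/dL).
    Conventionally invalid when TAG exceed ~4.5 mmol/L (~400 mg/dL).
    """
    divisor = 2.2 if units == "mmol/L" else 5.0
    limit = 4.5 if units == "mmol/L" else 400.0
    if tag > limit:
        raise ValueError("Friedewald equation not valid at this TAG level")
    return total_c - hdl_c - tag / divisor

# e.g. total-C 5.2, HDL-C 1.3, TAG 1.1 mmol/L -> LDL-C = 5.2 - 1.3 - 0.5 = 3.4
```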
Perceived adherence to the Mediterranean diet. At the end of the nutritional intervention (time = 12 weeks), men and women were invited to rate their perception of adherence to the Mediterranean diet principles according to a visual analogue scale (range 0-150 mm). Accordingly, the following question was asked: 'In your opinion, to what extent do your current dietary intakes meet the Mediterranean diet principles?' (not at all to perfectly). The distance between 0 mm and the vertical mark drawn on the 150 mm horizontal line was then measured with a ruler and corresponded to the perceived level of adherence to the Mediterranean diet (adapted from Dansinger et al. (26) ).
Statistical analyses
Results are first presented in descriptive tables with pre-intervention (time = 0) and post-intervention (time = 12 weeks) mean values (95 % CI) for men and women (Tables 2 and 4). Then, results are reported as changes within men and within women (Δ values), calculated as post-nutritional intervention minus pre-nutritional intervention values and as the percentage of change from the baseline value (with P value), together with two columns giving the difference between men and women and the difference between men and women adjusted for the baseline value of the response variable, as mean values and 95 % CI (Tables 3 and 5).
Differences between men and women in dietary intakes and in anthropometric and metabolic variables were assessed using an ANCOVA (general linear model; GLM procedure) on the Δ values. The least squares means (LSMEANS) of the GLM procedure, defined as linear combinations of the estimated effects from the fitted model, were used to determine whether changes in outcomes over time were significant in men and in women.
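As a concrete illustration of the model just described, the following sketch fits Δ ~ gender + baseline by ordinary least squares with NumPy. The data are simulated for illustration only (not the study's dataset), and in the actual analyses a gender × baseline interaction term was first included and dropped when non-significant.

```python
import numpy as np

# Simulated illustration of the ANCOVA on change scores:
# delta ~ gender + baseline value, fitted by ordinary least squares.
rng = np.random.default_rng(42)
n = 100
gender = rng.integers(0, 2, n)            # 0 = women, 1 = men (hypothetical coding)
baseline = rng.normal(20.0, 5.0, n)       # e.g. baseline Medscore
# simulate a change that depends on gender and on the baseline value
delta = 3.0 + 2.0 * gender - 0.1 * baseline + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), gender, baseline])
beta, *_ = np.linalg.lstsq(X, delta, rcond=None)
intercept, adj_gender_diff, baseline_slope = beta
# adj_gender_diff estimates the men-vs-women difference in change,
# adjusted for the baseline value of the response variable.
```

In the paper the same idea is implemented with SAS PROC GLM; here the gender coefficient plays the role of the adjusted men-versus-women difference.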
The main model of the GLM procedure included gender only, but additional analyses included gender, the baseline value of the response variable and the gender × baseline value interaction; the interaction was removed from the model when it did not reach statistical significance. Student's t test was used to compare macronutrient intakes as well as anthropometric and metabolic variables of men and women before the beginning of the nutritional intervention programme, and to compare the perceived adherence to the Mediterranean diet between men and women. The χ2 test was performed to compare the frequencies of categorical data, i.e. attrition rate and attendance rate at intervention sessions, between men and women. Since three different dietitians were in charge of providing the intervention, the intervener effect was tested using an ANOVA with the GLM procedure. For variables not normally distributed, a transformation was performed, but these variables are presented as raw data in the tables. In order to determine sample size, we considered a difference of 35 % in the change in Medscore as being clinically significant, based on results of a previous study from our group (21). Therefore, a final sample size of forty-five men and forty-five women was needed to detect a difference of 35 % in the change in Medscore between men and women with a power of 0·80 and α of 0·05, considering that the standard deviation corresponds to 55 % of the mean of the change in Medscore. The probability level for significance used for the interpretation of all statistical analyses was set at an α level of P ≤ 0·05. All analyses were performed using SAS statistical software (version 9.2; SAS Institute Inc.).

Results

Table 1 shows characteristics of men and women in terms of their age, anthropometric variables and metabolic profile. Men and women included in the present study were about the same age, but men had higher BMI, waist circumference, total-C:HDL-C ratio and TAG levels than women, whereas women had a higher percentage of body fat and higher HDL-C levels than men. Overall, the attrition rate was similar in men and women (10·9 and 13·6 %, respectively; P = 0·66) and, except for higher LDL-C levels in completers, no significant differences were observed in baseline characteristics of participants who dropped out compared with those who completed the 12-week nutritional intervention. Among completers, no differences were observed between men and women for the attendance rate for the whole intervention (8·9 (SD 2·0) sessions in men and 9·0 (SD 1·8) sessions in women, out of a maximum of ten sessions) nor for each component taken separately, i.e. attendance at group meetings (2·3 (SD 0·8) in men and 2·5 (SD 0·7) in women, out of a maximum of three meetings), individual counselling sessions (2·8 (SD 0·6) in men and 2·8 (SD 0·5) in women, out of a maximum of three sessions) and follow-up telephone calls (3·7 (SD 0·9) in men and 3·7 (SD 0·8) in women, out of a maximum of four follow-up telephone calls).
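The sample-size reasoning above can be reproduced with the standard normal-approximation formula for a two-sample comparison of means, using only the Python standard library (`statistics.NormalDist`, Python ≥ 3.8). With a detectable difference of 35 % of the mean change and an SD of 55 % of the mean change, the plain approximation gives roughly 39 participants per group; the forty-five per group reported presumably reflects an additional allowance (e.g. for attrition or the t distribution).

```python
from math import ceil
from statistics import NormalDist

def n_per_group(diff_frac, sd_frac, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * (z_{1-a/2} + z_{1-b})^2 / d^2,
    where d = difference / SD (both expressed as fractions of the mean)."""
    z = NormalDist()
    d = diff_frac / sd_frac
    n = 2.0 * (z.inv_cdf(1.0 - alpha / 2.0) + z.inv_cdf(power)) ** 2 / d ** 2
    return ceil(n)

# difference = 35 % of mean change, SD = 55 % of mean change
print(n_per_group(0.35, 0.55))  # -> 39 under the plain normal approximation
```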
Moreover, a significant difference in attendance rate between the different group meetings was observed (P ≤ 0·0001), with the higher participation rate found at the lecture on traditional Mediterranean principles (group meeting 1) and the lower rate found at the potluck dinner (group meeting 3). Briefly, 95·2, 82·5 and 57·1 % of men (P ≤ 0·0001) and 98·3, 83·1 and 71·2 % of women (P ≤ 0·0001) attended group meetings 1, 2 and 3, respectively. Similarly, significant differences in attendance rate between the different individual counselling sessions and also between the different follow-up telephone calls were observed (P = 0·002 and P = 0·006, respectively), with a progressive decrease over time in the participation rate. The change in Medscore was not influenced by the dietitian in charge of the intervention as indicated by the ANOVA (F = 0·36; P = 0·70). Table 2 presents nutritional intakes as well as the Medscore and its components at baseline and at the end of the intervention, and Table 3 presents changes in these variables in response to the 12-week nutritional intervention programme, in men and women separately. Significant differences were found between men and women for changes in energy density, percentage of energy intake provided by lipids, SFA and transfatty acids and total dietary fibre intake. Indeed, men significantly decreased more their energy density, had a greater increase in total dietary fibre intake, and greater decreases in percentage of energy intake from lipids, SFA and trans-fatty acids than women, in response to the intervention. Moreover, both men and women significantly increased the percentage of energy intake provided by PUFA, although no difference was observed between them. Also, a significant decrease in energy intake was observed in men only in response to the intervention. 
When statistical analyses were adjusted for the baseline value of the response variable, similar results were obtained for the percentage of energy intake provided by SFA and total dietary fibre intake whereas differences between men and women were no longer significant for energy density, percentage of energy intake provided by lipids and trans-fatty acids. As for the Medscore, both men and women showed increases in response to the intervention but without significant differences among them. With regards to Medscore components, significant differences were observed between men and women for red and processed meat and fruit intakes. The decrease in red and processed meat and the increase in fruit consumption were more pronounced in men than in women. In addition, intakes of legumes, nuts and seeds, whole-grain products and fish and seafood increased while the intake of refined-grain products decreased in both men and women without significant differences between them. Moreover, a significant increase in vegetable intake was only observed in men whereas a significant increase in olive oil and olive intake was only observed in women, in response to the intervention. After statistical adjustment for the baseline value of the response variable, differences observed between men and women for changes in red and processed meat and fruit intakes were no longer significant. Moreover, significant differences were observed between men and women for legumes, nuts and seeds and whole-grain products intakes once adjusted for the baseline value, with greater increases observed for these variables in men than in women.
At the end of the nutritional intervention, the perceived level of adherence to the Mediterranean diet, as determined by visual analogue scale, was not different between men and women (99·8 (SD 24·2) mm in men and 100·1 (SD 25·3) mm in women; t = 0·07; P = 0·94). A significant and positive association in both men (r 0·33; P = 0·01) and women (r 0·28; P = 0·05) was observed between perceived level of adherence to the Mediterranean diet and the actual Medscore calculated after the 12-week nutritional intervention. Table 4 presents anthropometric and metabolic values at baseline and at the end of the intervention, and Table 5 presents changes in anthropometric and metabolic variables in response to the 12-week nutritional intervention programme, in men and women separately. As shown in Table 5, no significant differences were observed between men and women for anthropometric changes, in response to the nutritional intervention. However, significant decreases were observed for BMI and waist circumference in both men and women. Also, despite the trend for women to decrease their body weight, only men significantly decreased their body weight and percentage of body fat in response to the nutritional intervention. As for metabolic changes, significant differences were found between men and women for total-C:HDL-C and TAG:HDL-C ratios, with greater decreases observed for these variables in men than in women. In addition, results showed significant changes in HDL-C (increase) and in TAG levels and diastolic blood pressure (decreases) in response to the intervention, but only in men. Moreover, differences observed between men and women in total-C:HDL-C and TAG:HDL-C ratios became non-significant after adjustment for the baseline value.
Discussion
The aim of the present study was to determine differences between men and women in dietary, anthropometric and metabolic changes, in response to a 12-week nutritional intervention programme promoting the adoption of the Mediterranean diet, and based on the SDT. Results showed that our nutritional intervention led to improvements in dietary, anthropometric and metabolic profiles that were generally more pronounced in men than in women. The present results indicate that both men and women increased their level of adherence to the Mediterranean diet (Medscore) in response to the 12-week nutritional intervention and therefore both men and women improved the general quality of their diet. This improvement in the level of adherence to the Mediterranean diet also indicates that our nutritional intervention programme based on the SDT appears to be appropriate for both men and women. Moreover, the significant association found in both men and women between the perceived level of adherence to the Mediterranean diet and the actual Medscore suggests that men and women had a similar understanding of the intervention and were able to assess the quality of their diet accurately after the end of the 12-week nutritional intervention, which represents relevant information in a context of nutritional education.
Although men and women improved their dietary intakes as shown by the increase in the Medscore, differences were observed between men and women when examining individual components of the Medscore, which were concordant with changes observed in nutritional intakes in response to the intervention. Indeed, the more pronounced changes observed in men in some food groups, for example the greater decrease in red and processed meat and greater increase in fruit consumption, were consistent with differences observed between men and women in nutrient intakes, such as the greater decreases in energy density, percentage of energy provided by lipids, SFA and trans-fatty acids, and the greater increase in fibre intake in men than in women. Moreover, the significant decrease in energy density in men was concordant with the decrease in daily energy intake, as previous studies have reported that decreasing the energy density of the diet leads to a spontaneous decrease in energy intake (27). Consistent with our findings, results from a nutritional intervention that promoted the traditional Mediterranean diet over a period of 12 months among a Spanish population reported greater success in men than in women when considering the level of adherence to the Mediterranean diet (12). On the other hand, the present results differ from other studies (13,28) reporting greater dietary changes in women than in men. Several differences in intervention design may explain why the differences observed between men and women in dietary changes in response to our nutritional intervention programme diverge from those reported, for example, in the study of Bemelmans et al. (13).
First, our intervention included group but also individual counselling sessions to individualise dietary objectives and strategies adopted and to support men and women to overcome barriers in the adoption of the Mediterranean diet, whereas only group sessions were provided to men and women in the study of Bemelmans et al. (13) . Second, our nutritional intervention aimed at promoting autonomy and competence in men and women towards the adoption of the Mediterranean diet. Indeed, we did so by supporting them in their dietary changes and strategies to achieve these changes and by promoting the development of their skills and knowledge related to nutrition, which contrasts with specific nutritional guidelines and daily intake explained at the beginning of the intervention in the study of Bemelmans et al. (13) . It can be argued that providing more details about dietary guidelines may reduce the possibility for autonomy and that a more passive role of subjects in a nutritional intervention could explain divergence in results obtained among studies. Globally, direct comparison of the present results with those from the literature remains difficult because of major differences in the intervention design and statistical analyses among studies.
Some baseline characteristics in men and women may have influenced the magnitude of dietary changes observed in response to our nutritional intervention. Accordingly, healthier diet at baseline of a dietary intervention has been previously reported to decrease the likelihood of observing significant dietary changes (12) . The fact that women in the present study had globally dietary intakes of higher quality at baseline and which tended to be closer to the traditional Mediterranean diet pattern than those of men could thus possibly explain, at least partially, differences observed between men and women in dietary changes. In the context of our nutritional intervention, it can be hypothesised that because men's dietary intakes at baseline were generally further away from Mediterranean diet principles, they could possibly identify more easily changes that could be made and modify their eating habits, especially in the context of individual counselling sessions where specific dietary objectives were settled. Accordingly, the fact that many differences observed between men and women were no longer significant once dietary changes were adjusted for the baseline value of the response variable brings support to this hypothesis. The present results underline the importance of considering the dietary profile of men and women before the beginning of a nutritional intervention to properly respond to the clients' needs and maximise potential improvements in dietary intakes during the intervention. However, because some differences between men and women in dietary changes were significant once adjusted for the baseline value of the response variable, it is suggested that differences between men and women regarding other factors than baseline dietary intakes, such as attitudes and beliefs towards health and nutrition, might have influenced their response to the nutritional intervention. 
Our hypothesis is in agreement with a previous study (29) reporting that health attitudes and beliefs are relevant predictors of adherence to health recommendations. The use of mixed methods in which qualitative and quantitative data are combined would therefore warrant to be considered in the future to obtain more specific information about such factors in men and women.
Although our nutritional intervention promoted healthy dietary changes with no focus on body weight, both men and women showed improvements in their anthropometric profile with significant decreases in BMI and waist circumference, in response to the 12-week nutritional intervention. The present results are concordant with previous studies, which found that a higher adherence to the Mediterranean diet was associated with lower prevalence of overweight or obesity (30,31) . The Mediterranean diet is recognised to be highly satiating (31,32) and the present results suggest that dietary changes led to increased satiety. More specifically, increases in intakes of legumes, nuts and seeds and whole-grain products reported in both men and women possibly contributed to a decrease in energy density through increased water content and fibre intake (33) . Moreover, the decrease in red and processed meat intake may have led to replacement of animal proteins by vegetable proteins sources such as legumes, nuts and seeds, which contain satiating components such as proteins and fibres (34,35) .
As for the metabolic profile, the present results showed more pronounced changes in metabolic variables in men than in women, and these can possibly be explained by the greater dietary changes observed in men in response to the nutritional intervention. As suggested by Estruch et al. (1), potential synergy among nutrient-rich foods included in the Mediterranean diet might foster favourable changes in some pathways of cardiovascular risk. It is also possible that some sex-related characteristics, such as the level of sex hormones, may interact with the complex synergistic effect between food components, resulting in a smaller beneficial impact of the Mediterranean diet in women than in men. In support of this, we recently reported significant improvements in insulin homeostasis in men only, in response to an isoenergetic controlled experimental diet based on the traditional Mediterranean diet where all foods and drinks were provided to the participants (7). However, the absence of differences between men and women in metabolic changes once statistical adjustment was performed for the baseline value of the response variable underlines the importance of the metabolic status at the beginning of a nutritional intervention programme. Indeed, results related to metabolic changes suggest that an individual with more deteriorated metabolic variables before the beginning of a nutritional intervention could show greater health improvements in response to the intervention, which is concordant with the fact that men in the present study had a more deteriorated metabolic profile at baseline and improved more in response to the intervention than women.
We acknowledge that the present results cannot be extrapolated to the whole population because we recruited men and women presenting risk factors for CVD and whose dietary intakes were closer to Canada's Food Guide recommendations than those of the general adult population in Canada (36). Moreover, although anthropometric and metabolic variables were measured, dietary intakes were self-reported. Therefore, the risk of misreporting dietary intakes cannot be excluded. The present study also has important strengths, such as the fact that analyses of dietary intakes, anthropometric variables and metabolic profile were conducted separately in men and women in the context of a nutritional intervention programme. Also, our nutritional intervention based on the SDT appears to be acceptable for both men and women, as similar attrition and attendance rates for the intervention as a whole and for its specific components (i.e. group meetings, individual sessions, follow-up telephone calls) were observed among them. Although the attrition rate found in the present study was similar to those reported in the literature for nutritional interventions based on a motivational interviewing approach (37), reasons for not attending some sessions remain difficult to identify and might differ between men and women. It remains essential to consider the clinical implications related to attendance rate when developing a nutritional intervention. Indeed, the lack of flexibility in the schedule of group meetings must be considered in the development of interventions, as it may require effort for some individuals to attend pre-scheduled meetings and this might progressively generate fatigue over time. It is also possible that active participation in practical activities, for example the cooking lesson, may be perceived as requiring too much effort by some individuals.
Nevertheless, the present study underlines the potential of improvement in adherence to the Mediterranean diet among a non-Mediterranean population, more specifically in the context of an intervention during which men and women chose their own dietary objectives (i.e. based on their personal interest and motivations). In addition, the present results bring information about differences between men and women in potential health benefits obtained following a 12-week nutritional intervention programme promoting the Mediterranean diet, and thus support the relevance to consider gender in the development of nutritional intervention programmes aimed at preventing chronic diseases.
Conclusions
In conclusion, the present results suggest that the nutritional intervention programme promoting the adoption of the Mediterranean diet and based on the SDT led to greater improvements in dietary intakes in men than in women, which appear to have contributed to beneficial anthropometric and metabolic changes, particularly in men. However, the present results also suggest that the more deteriorated metabolic profile found in men at baseline explains to a large extent why the improvements in CVD risk factors were more pronounced in men than in women in response to the intervention.

… the study; S. L. (corresponding author) was responsible for the conception and design of the study and contributed to the interpretation of data. All of the authors have read and approved the final version of the manuscript submitted for publication.
The authors report no conflict of interest.
A Monte Carlo Permutation Test for Random Mating Using Genome Sequences
Testing for random mating of a population is important in population genetics, because deviations from randomness of mating may indicate inbreeding, population stratification, natural selection, or sampling bias. However, current methods use only observed numbers of genotypes and alleles, and do not take advantage of the fact that the advent of sequencing technology provides an opportunity to investigate this topic in unprecedented detail. To address this opportunity, a novel statistical test for random mating is required for population genomics studies, in which large sequencing datasets are generally available. Here, we propose a Monte Carlo-based permutation test (MCP) as an approach to detect random mating. Computer simulations used to evaluate the performance of the permutation test indicate that its type I error is well controlled and that its statistical power is greater than that of the commonly used chi-square test (CHI). Our simulation study shows that the power of our test is greater for datasets characterized by lower levels of migration between subpopulations. In addition, test power increases with increasing recombination rate, sample size, and divergence time of subpopulations. For populations exhibiting limited migration and having average levels of population divergence, the statistical power approaches 1 for sequences longer than 1 Mbp and for samples of 400 individuals or more. Taken together, our results suggest that our permutation test is a valuable tool to detect random mating of populations, especially in population genomics studies.
Introduction
In a random mating population all individuals have an equal chance of being a mating partner. In population genetics, deviations from random mating may indicate inbreeding, population stratification, natural selection or sampling bias. Extensive association studies have been conducted on population samples to search for genes underlying complex traits through linkage disequilibrium of these genes with markers [1][2][3][4][5][6][7][8]. However, when samples originate from a nonrandom mating population, spurious associations may arise between marker loci and complex traits. In evolutionary studies, it is important to determine whether a given locus is under random mating since deviations may be due to natural or artificial selection [9]. In population genetics, samples are usually tested to determine if they are derived from the same random mating population [9], since the samples might exhibit signs of genetic stratification even if they are from one locality.
Many methods have been proposed to test for random mating. They can be divided into two categories: asymptotic and exact tests. Several asymptotic tests (also known as "goodness-of-fit" or χ² tests) have been developed based on asymptotic theory. They perform well when considering independent loci having a small number of alleles [10][11][12]. However, for loci having a large number of alleles, the contingency table used to implement asymptotic tests usually contains too many empty cells, and the number of individuals in the sample is insufficient for large-sample theory to be applied [13][14][15][16][17][18][19][20][21][22][23][24]. Although the "single allele test" addresses the problem of sparse tables by analyzing each allele separately, the statistical power of this approach is limited because multiple comparisons are made [9]. With the advent of dense genome-wide sequencing, loci having large numbers of alleles, and the genome as a whole, are now available for population genomics investigation [25]. These studies generate sparse-matrix data for which asymptotic methods are not reliable. In such cases, exact methods are necessary.
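For reference, the classical asymptotic test discussed above can be illustrated for the simplest case, a biallelic locus: under random mating (Hardy-Weinberg proportions), expected genotype counts are p²n, 2pqn and q²n, and the goodness-of-fit statistic has one degree of freedom. This is a minimal stdlib-only sketch, not the implementation used in the paper.

```python
from math import sqrt
from statistics import NormalDist

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test of random mating (Hardy-Weinberg
    proportions) at one biallelic locus, 1 d.f. Returns (statistic, p-value).
    Assumes both alleles are present (all expected counts > 0)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # estimated frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # for 1 d.f., P(X^2 > x) = 2 * (1 - Phi(sqrt(x)))
    p_value = 2.0 * (1.0 - NormalDist().cdf(sqrt(x2)))
    return x2, p_value
```

With many alleles the corresponding contingency table becomes sparse, which is exactly the regime in which this asymptotic p-value stops being trustworthy.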
Exact tests use the exact probability of potential outcomes rather than an asymptotic probability distribution. The p-value of the exact test is given by the sum of the exact probabilities of the allele combinations that deviate from the null hypothesis of random mating by at least as much as the observed sample. The idea of the exact test was first proposed by Fisher (1935) and subsequently advocated by Levene (1949) and Haldane (1954) in genetics [26,27]. However, the application of the exact test was hindered by its computational complexity until Louis and Dempster (1987) proposed a complete enumeration algorithm to compute the p-value for this test [15]. Unfortunately, this method is computationally impractical when the number of alleles is large. This prompted the development of Monte Carlo (MC) and Markov chain Monte Carlo (MCMC) methods, which are easy to perform and have become widely used in population genetics [16,17].
In addition to the above statistical tests, some other methods, such as STRUCTURE analysis and PCA analysis, can be used to infer possible genetic substructure of populations, thus providing evidence of random mating [4,[28][29][30]. However, these methods cannot act as a substitute for statistical tests of random mating.
Neither asymptotic nor exact tests possess enough statistical power to take advantage of the large amount of polymorphism data made available by genome-wide sequencing. Therefore, there is considerable interest in novel statistical tests for detecting random mating using large-scale sequencing data. In this study, we address the shortcomings of the existing methods by developing a Monte Carlo permutation (MCP) test to detect random mating. Using computer simulations, we demonstrate that the MCP test performs well on large-scale sequencing data and that its statistical power is greater than that of the classical chi-square test (CHI test). We also discuss the influence of genetic and demographic parameters, such as sequence length, sample size, mutation rate, recombination rate, divergence time, and migration rate, on the performance of the MCP test.
Model of random mating for sequence data
A random mating population is one in which all individuals have the same probability of being mating partners. In other words, potential mates have an equal chance of being chosen, without being influenced by environmental, hereditary, or social factors. In this context, the process of random mating can be treated as a random sampling process. In our MCP method, we simulated random mating as the pairing of sequences from randomly selected individuals. Random mating was simulated for a sample of individuals as follows: (i) two gametes from these individuals, which were not necessarily from the same individual, were randomly chosen without replacement to generate a new individual; (ii) a further two gametes were then randomly chosen from the remaining pool of gametes to generate another new individual; (iii) this process was repeated until every gamete had been chosen. By treating this process as a simple Monte Carlo permutation procedure, individuals from a random mating population were simulated. After choosing an appropriate statistic, the null distribution of this statistic from a random mating sample can be obtained. By comparing the observed statistic to its null distribution, standard hypothesis testing can be performed to determine if the sample is derived from a random mating population. This approach resembles an exact test which randomly samples alleles [17].
Average Pairwise difference within individuals as a statistic
Pairwise difference, denoted by ξ in this study, is the number of nucleotide differences between an aligned pair of sequences. The expected pairwise difference for a pair of sequences, E(ξ), is proportional to the mutation rate (μ) and the expected coalescence time (T) for that pair of sequences (i.e., E(ξ) = 2μT). In a population under random mating, the expected pairwise difference is the same for all randomly selected pairs of sequences. However, for a non-random mating population, the expected pairwise differences for randomly selected pairs of sequences differ according to the population substructure. For simplicity, we assume that sequences are sampled from two homogeneous subpopulations called A and B. T AB denotes the expected coalescence time of any pair of sequences of which one sequence comes from subpopulation A and the other from B. Similarly, T AA and T BB denote the expected coalescence time of any two sequences both coming from subpopulation A or B, respectively. ξ AB is the pairwise difference of two sequences coming from the two different subpopulations, and ξ AA , ξ BB are the pairwise differences when both come from subpopulation A or B, respectively. We can infer that E(ξ AB ) > E(ξ AA ) and E(ξ AB ) > E(ξ BB ) in the case of a non-random mating population, since T AB > T AA and T AB > T BB .
We suppose that a sample of size n individuals is drawn from a population of interest. Sequences of the individuals are denoted by C 1 , C 2 , …, C 2n , where C 1 and C 2 are from individual 1, C 3 and C 4 are from individual 2, and so on. Under random mating, a sample of size n has an observed average pairwise difference within individuals defined by ξ observe = (ξ C1C2 + ξ C3C4 + … + ξ C(2n-1)C(2n) )/n, where ξ C1C2 is the pairwise difference between sequences C 1 and C 2 , and so on. When these sequences are randomly permuted and divided into n pairs, we obtain a simulated sample where the average pairwise difference within individuals is defined as ξ permute = (ξ Cs1Cs2 + ξ Cs3Cs4 + … + ξ Cs(2n-1)Cs(2n) )/n, where s 1 , s 2 , …, s 2n is one permutation of the sequence indices 1 to 2n. When a sample of sequences is collected from a random mating population, E(ξ observe ) = E(ξ permute ), since all sequence pairs have the same expected coalescence time. However, if sequences are chosen from a non-random mating population containing two subpopulations A and B, there are three possible pairwise differences for the simulated samples: ξ AA , ξ BB , and ξ AB . Thus ξ permute is a combination of these three different pairwise differences, whereas in the real sample, ξ observe is a combination of only ξ AA and ξ BB .
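As a concrete illustration, the observed statistic can be computed directly from the 2n sampled gamete sequences. The sketch below assumes sequences are given as aligned strings, with consecutive pairs belonging to the same individual; the function names are illustrative, not the authors' implementation.

```python
def pairwise_diff(seq_a, seq_b):
    """Number of sites at which two aligned sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

def xi_within(seqs):
    """Average pairwise difference within individuals: seqs[0] and
    seqs[1] belong to individual 1, seqs[2] and seqs[3] to
    individual 2, and so on."""
    n = len(seqs) // 2
    return sum(pairwise_diff(seqs[2 * i], seqs[2 * i + 1]) for i in range(n)) / n
```

Applying `xi_within` to a permuted copy of the same list of sequences yields ξ permute for one simulated sample.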
Therefore, the test of whether the sample is from a random mating population can be formulated as:

H 0 : ξ observe = ξ permute
H 1 : ξ observe < ξ permute

According to the null hypothesis (H 0 ), the sample is from a population under random mating, whereas according to the alternative hypothesis (H 1 ), the population is not randomly mating.
Hypothesis testing
Under the null hypothesis of random mating, the distribution of average pairwise sequence differences within individuals is equivalent to that of a simulated sample obtained by the permutation procedures described above. In statistical hypothesis testing, many permutations are conducted to obtain the null distribution of average pairwise differences within individuals. In other words, the null distribution of ξ observe can be obtained by calculating ξ permute for each of the simulated samples generated in the permutation procedure. For a given permutation test, the significance level (p-value) of the null hypothesis is the probability that ξ permute is equal to, or less than, ξ observe . The hypothesis testing procedure is graphically outlined in Figure 1.
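The whole procedure can be sketched in a few lines, assuming sequences are aligned strings with consecutive pairs belonging to the same individual. This is an illustrative sketch, not the authors' code; it is one-tailed, rejecting random mating when the observed within-individual difference is unusually small.

```python
import random

def xi_within(seqs):
    # average pairwise difference within consecutive pairs of sequences
    n = len(seqs) // 2
    diff = lambda a, b: sum(x != y for x, y in zip(a, b))
    return sum(diff(seqs[2 * i], seqs[2 * i + 1]) for i in range(n)) / n

def mcp_test(seqs, n_perm=1000, seed=0):
    """Monte Carlo permutation test for random mating.

    The p-value is the fraction of random re-pairings of the 2n gametes
    whose within-individual average pairwise difference is less than or
    equal to the observed value."""
    rng = random.Random(seed)
    observed = xi_within(seqs)
    pool = list(seqs)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)  # randomly re-pair the gametes
        if xi_within(pool) <= observed:
            hits += 1
    return hits / n_perm
```

For a stratified sample (e.g. two internally identical subpopulations) the observed within-individual difference is atypically small and the p-value is small; for a homogeneous sample the p-value is large.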
To ensure that the obtained p-value is within δ units of the true significance level at a (1-γ)% confidence level, a sufficiently large number of permutations N must be performed.

Figure 1. Gamete sequences (2n) of the individuals are denoted by C 1 -C 8 as follows: C 1 and C 2 are from individual 1, C 3 and C 4 are from individual 2, and so on. After permuting these sequences N times (where N is any positive integer), N new datasets were obtained by dividing each permuted sequence set into n consecutive pairs. For each permutation, the ξ permute statistic was calculated. This allowed us to derive the null distribution of the statistic. After locating ξ observe on the null distribution, a p-value for the test could be obtained.
Performance evaluation
To evaluate the performance of our statistical test for random mating, simulated genomic sequences in multiple genetic scenarios were generated using the MS software program [32]. Both type I error rate (i.e. false positive rate) and type II error rate (i.e. 1-statistical power) were evaluated. Experimental parameters (e.g. sample size n and sequence length l), and inherent parameters (e.g. mutation rate and recombination rate), may affect the type I error of the MCP test. Under the alternative hypothesis that mating is non-random, the power of this test may also be affected by two parameters related to demographic history: the divergence time of the subpopulations and the migration rate between them.
Simulated datasets of genomic sequences were generated using the "control of variables" strategy [33]. To evaluate how a specific parameter affects the type I or type ΙI error of the MCP test, the parameter's values were varied in the simulation while all other parameters were kept constant (we refer to these as "steady states") ( Table 1). For example, we tested how sequence length affects the performance of MCP when all other parameters (e.g. sample size, recombination rate, and mutation rate) were kept in a "steady state". In each case, 1000 replicates were generated, thus yielding 1000 p-values in statistical tests for which type I and type II errors could be examined. Notations used in this report and the values of parameters in "steady states" are presented in Table 1.
Evaluation of type I error
To evaluate the influence of experimental and inherent parameters on the MCP test, we estimated the type I error rate of the MCP test by simulations in which individual parameters were varied using the "control of variables" strategy (see Materials and Methods for details). In these simulations, sequence length was varied from 5kb to 2Mb and sample size was varied from 50 to 800 individuals. Since different genome regions differ in their recombination (ρ=4Nrl) and mutation rates (θ=4Nμl), we assessed both to evaluate their influence on the type I error of our method (Table 2).
Our simulations indicate that type I error is well controlled in the MCP test (Table 2). At a significance level of 0.05, the type I error rate ranged from 0.027 (for n=50) to 0.069 (for l=1Mb) (Table 2). In simulations of the MCP test under the null hypothesis of a randomly mating population, the number of p-values smaller than a threshold p follows a binomial distribution B(m, p), where m is the number of replicates. Therefore, when m=1000 and p=0.05, 95% of estimated type I error rates are expected to lie in the range 0.0373 to 0.0654. In our evaluations, all the estimated type I error rates lie in this range, except for the two extreme cases noted above. Furthermore, when m=1000 and p=0.01, 95% of the estimates are expected to fall between 0.0048 and 0.0183. In this study, most of the corresponding estimates fell into the expected range and none of them exceeded the upper boundary (Table 2). Tests of type I error for more scenarios are presented in Tables S1-S4.
Evaluation of statistical power
Given the generally favorable evaluation of type Ι error rate, we sought to examine the statistical power of the MCP test at significance levels of 0.05 and 0.01. We considered experimental, inherent, and demographic parameters under the alternative hypothesis to determine their effect on statistical power.
We compared our MCP test to the CHI test. Since the CHI test uses numbers of genotypes and alleles, it cannot be directly implemented on real sequences. Therefore, we chose a fixed number of equally spaced SNPs, treating them as independent loci. We also evaluated the influence of locus number (from 1 to 100 in increments of 10) on the performance of the CHI test. When n loci (n>1) were available in the CHI test, we calculated Pearson's chi-square statistic for each locus and summed these to obtain a summary statistic following a standard central chi-square distribution with n degrees of freedom [34]. In simulations, we found that the type I error rate of the CHI test was greater than expected, which may be due to the interdependence of the loci involved (Tables S5-S8). To compensate for the inflated type I error in the CHI test power evaluation, we replaced the standard rejection criteria with empirical thresholds for significance levels 0.01 and 0.05, represented respectively by the 10th and 50th ranked values of the CHI test summary statistic in 1000 simulated tests under the null hypothesis. This allowed us to calculate the statistical power of the CHI test at different numbers of loci. The highest statistical power of the CHI test over the different numbers of loci was chosen for comparison with our method (Tables S9-S12).
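The per-locus building block of the CHI comparison can be sketched as below, following the summary-statistic construction described above (per-locus Pearson statistics summed across loci, one degree of freedom per biallelic locus). The genotype-count interface is an illustrative assumption.

```python
def hwe_chi_stat(n_aa, n_ab, n_bb):
    """Pearson chi-square statistic comparing observed genotype counts
    at one biallelic locus with Hardy-Weinberg expectations."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # sample frequency of allele a
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def chi_summary(loci_counts):
    """Sum of per-locus statistics; if the loci were truly independent
    this would follow a chi-square distribution with one degree of
    freedom per locus."""
    return sum(hwe_chi_stat(*c) for c in loci_counts)
```

A sample matching Hardy-Weinberg proportions exactly gives a statistic of zero, while a heterozygote deficit (as produced by population subdivision) inflates it.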
Our investigation showed that the MCP test has more statistical power than the CHI test in experimental designs with different sequence lengths and sample sizes. For sequence length, the statistical power of the MCP and CHI tests was compared for eight lengths, in the range of 1kb to 2Mb ( Figure 2A). The statistical power of both tests increased with an increase in sequence length, however, the power of the MCP test was consistently much higher than that of the CHI test. For example, when l=1Mbp, the power of the MCP test reached 0.8 or higher, whereas the power of the CHI test was only around 0.2. Furthermore, for sequence lengths greater than 1.5Mb, the power of the MCP test exceeded 0.9.
For sample sizes ranging from 100 to 1000 individuals, the power of both tests increased with larger sample size, with the power of the MCP test consistently greater than that of the CHI test. For samples of more than 400 individuals, the power of the MCP test exceeded 0.8 for a significance level of 0.05, but the power of the CHI test never exceeded this value, even for sample sizes greater than 1000 individuals ( Figure 2B).
The MCP test outperformed the CHI test in genome regions subject to a variety of mutation and recombination rates. We found that statistical power did not vary for mutation rates θ ranging from 50 to 1000 (Figure 2C). However, at a significance level of 0.05, the power of the MCP method consistently exceeded 0.8, whereas the power of the CHI test was approximately 0.6. Furthermore, at a significance level of 0.01, the power of the MCP test was approximately 0.6, whereas that of the CHI test was only 0.3. The power of both tests at recombination rates ρ ranging from 50 to 1000 was evaluated (Figure 2D). The power of the MCP test increased with increasing recombination rate, whereas that of the CHI test remained relatively constant. With a recombination rate per generation per site of 10^-8 (ρ=200 when μ=10^-8), which is the most commonly used recombination rate [35][36][37][38], the power of the MCP test exceeded 0.4, but that of the CHI test was less than 0.2, at a significance level of 0.05.
We further compared the statistical power of the MCP and CHI tests in demographic scenarios having different population divergence times and gene migration rates ( Figure 2E and Figure 2F). Statistical power increased with divergence time for both methods. Interestingly, for a population divergence of 400 generations, the power of the MCP test was greater than 0.9 at a significance level of 0.05, and approached 1 when population divergence increased to 600 generations whereas the power of the CHI test never exceeded 0.8. The power of both methods was highly dependent on the migration rate between subpopulations. The power was high for small or no migration rates, but declined with increasing migration rate, resulting in no power at the highest migration rates ( Figure 2F).
Discussion
Here we report a Monte Carlo permutation-based (MCP) statistical test for detecting random mating in a population of interest. Computer simulation showed that the type I error was well controlled in the MCP test and that the statistical power of the method compared favorably with the CHI test in most cases. Moreover, this method can be used not only to detect population stratification of genetic samples, but also to test for random mating at specific regions of the genome or at multiple tightly linked loci.
Using the average pairwise difference within individuals has the advantage of allowing the MCP test to consider multiple loci without assuming independence between them, since recombination has no effect on this measure in a homogenous population. Linkage disequilibrium (LD), which can even occur between loci situated several kilobases apart, can inflate the type Ι error in statistical tests. Accordingly, we found that the type I error for the CHI test was highly inflated when more markers were used in order to increase statistical power (Tables S5-S8). In inference-based approaches for detecting population stratification, LD is also problematic. For example, STRUCTURE does not fully eliminate the effects of strong LD, which may produce inaccurate results [39][40][41]. Therefore, it was suggested that loci used as input for STRUCTURE analysis should be separated by at least 1 cM [39][40][41]. However, this constraint is an obvious drawback in the analysis of genome-wide sequencing data.
In contrast to other methods for detecting random mating [15,27], we found that the performance of our MCP method is not diminished by increasing the number of alleles or haplotypes. In fact, higher statistical power is achieved with longer sequence lengths. Moreover, power increases rapidly especially in cases with larger sample sizes, lower migration rates, higher recombination rates, and larger divergence times between subpopulations (Figure 2). When the inherent genetic and demographic parameters are fixed, higher statistical power can therefore be obtained by increasing the sample size or the sequence length.

The MCP test is presented in this report as a single-tailed test, rather than a two-tailed test, since inbreeding is common in population genetic history whereas outbreeding is relatively rare [42,43]. Notably, the MCP test can be conveniently modified to form a two-tailed test when necessary. However, a two-tailed version of the MCP test is likely to be impractical for quality control of sequencing projects. This is because when sequencing errors are randomly introduced into sequence products, the errors have little impact on the average pairwise difference. Therefore, a two-tailed MCP test may lack power in detecting sequencing errors. Furthermore, a two-tailed MCP test may not be a good choice for detecting balancing selection, because tests for excess heterozygosity have been suggested to lack power in simulation studies [44,45].
The time complexity of the entire MCP test calculation is O(mn^2 + Nn), where m is the number of SNPs, n is the number of individuals and N is the number of permutations. On a cluster machine with 4 GB RAM and 4 CPU cores (Dual-Core AMD Opteron(tm) Processor 2214, 2194 MHz), with each test executed using a single CPU, a test with 2000 individuals and a window size of 2 Mbp requires about 10 minutes using an R script. Thus, the MCP test is suitable for large data sets.

Supporting Information

Table S1. We detected the type 1 error of the MCP test for different sequence lengths l at two significance levels, 0.05 and 0.01. Other parameters in "steady states" were as follows: sample size n=400 individuals; effective population size N=5000; recombination rate ρ=4Nrl=4×5000×10^-8×l; mutation rate θ=4Nμl=4×5000×10^-8×l. (DOCX)

Table S2. We detected the type 1 error of the MCP test for different sample sizes n at two significance levels, 0.05 and 0.01. Other parameters in "steady states" were as follows: sequence length l=1 Mb; effective population size N=5000; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200. (DOCX)

Table S3. We detected the type 1 error of the MCP test for different recombination rates ρ at two significance levels, 0.05 and 0.01. Other parameters in "steady states" were as follows: sequence length l=1 Mb; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; sample size n=400 individuals. (DOCX)

Table S4. We detected the type 1 error of the MCP test for different mutation rates θ at two significance levels, 0.05 and 0.01. Other parameters in "steady states" were as follows: sequence length l=1 Mb; effective population size N=5000; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200; sample size n=400 individuals. (DOCX)

Table S5. We compared the type I error rate of the MCP test with the CHI test for different recombination rates ρ at two significance levels, 0.05 and 0.01. Other parameters in "steady states" were as follows: sequence length l=1 Mb; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; sample size n=400 individuals. (DOCX)

Table S6. We detected the type 1 error of the CHI test for different sequence lengths l with certain numbers of loci. The longer the sequences, the more loci we could use; empty cells mean the experiments were not done because of limited SNPs. Other parameters in "steady states" were as follows: sample size n=400 individuals from a random mating population; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200. (DOCX)

Table S7. We detected the type 1 error of the CHI test for different sample sizes n with certain numbers of loci. Other parameters in "steady states" were as follows: sequence length l=1 Mbp; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200. (DOCX)

Table S8. We detected the type 1 error of the CHI test for different mutation rates θ with certain numbers of loci. Other parameters in "steady states" were as follows: sequence length l=1 Mbp; sample size n=400 individuals from a random mating population; effective population size N=5000; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200. (DOCX)

Table S9. We detected the power of the CHI test for different migration rates M=4Nm with certain numbers of loci (m is the fraction of each subpopulation made up of new migrants each generation). Other parameters in "steady states" were as follows: sample size n=400 individuals, half from subpopulation 1 and half from subpopulation 2; sequence length l=1 Mbp; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; divergence time T=10000 years; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200. (DOCX)

Table S10. We detected the power of the CHI test for different sequence lengths l with certain numbers of loci; empty cells mean the experiments were not done because of limited SNPs. Other parameters in "steady states" were as follows: sample size n=400 individuals, half from subpopulation 1 and half from subpopulation 2; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×l; divergence time T=10000 years; recombination rate ρ=4Nrl=4×5000×10^-8×l; no migration. (DOCX)

Table S11. We detected the power of the CHI test for different sample sizes n with certain numbers of loci. Other parameters in "steady states" were as follows: half of the sampled individuals came from subpopulation 1 and the other half from subpopulation 2; divergence time of the two subpopulations T=10000 years; effective population size N=5000; mutation rate θ=4Nμl=4×5000×10^-8×10^6=200; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200; no migration. (DOCX)

Table S12. We detected the power of the CHI test for different mutation rates θ with certain numbers of loci. Other parameters in "steady states" were as follows: sequence length l=1 Mbp; sample size n=400 individuals, half from subpopulation 1 and half from subpopulation 2; effective population size N=5000; recombination rate ρ=4Nrl=4×5000×10^-8×10^6=200; divergence time of the two subpopulations T=10000 years; no migration. (DOCX)
A Simplified Microcontroller Based Potentiostat for Low-Resource Applications
A low component count, microcontroller-based potentiostat circuit was developed through the use of operational amplifiers arranged in different feedback configurations. This was developed to alleviate the cost burden of equipment procurement in low-cost and budget applications. Simplicity was achieved in the design by the use of the microcontroller's native functionalities and a low-cost R/2R resistor ladder digital-to-analogue converter. The potentiostat was used to investigate the Ni 2+ /Ni (s) redox couple in a 3-electrode cell with a silver/silver chloride reference electrode and graphite counter and working electrodes. Linear sweep voltammograms were obtained at scan rates of 10, 20, 30 and 40 mV/s. The analysis of the peak current versus (scan rate)^(1/2) plot indicated that the Ni 2+ /Ni (s) reduction, though conforming to the Randles-Sevcik equation, was a non-reversible redox reaction.
Introduction
The potentiostat has remained the work-horse of the electrochemistry laboratory ever since the development of the first 3-electrode cell by Hickling in the early 1940s. Using Hickling's original ideas, many incarnations of the device have been produced for different specialized applications over the years, from basic electrochemical redox investigations [1] [2] to electrochemical sensors used in biomedical implants [3].
Potentiostat circuits are of widely varying complexities, often dictated by the intended use. A key distinguishing feature is the current measurement range. It is almost impossible to have a single system capable of current measurement across a wide range, from the sub-picoampere up to the ampere range, due to the different circuit architectures required. Auto-ranging resistors have been employed as a solution to achieve multi-current measurement capabilities [4] [5]. This feature and the supporting circuits, however, contribute to the cost of the instrumentation. As the required current range increases, even such measures often become impractical. Additional circuit features required for precision potentiostat control may also result in increased circuit complexity and cost. Read-noise minimization circuits [6] [7], sometimes added to reduce "ringing" or oscillation, invariably increase system costs. Modern potentiostat circuits range in price from about $2500 to $25,000 [8], with a median price of about $10,000 for decent laboratory units.
The potentiostat is fast ceasing to be the prime preserve of the electrochemistry laboratory. It is quickly making inroads into industrial and consumer devices. Potentiostat circuits can be found embedded in various gas sensors [9], electronic "tongues" in the food industry [10] and environmental monitoring devices [11].
The demand for "embedded" and specialized potentiostat circuits is bound to continue. This has inspired many workers to develop simplified, low-cost instrumentation that does away with the bells and whistles while retaining core potentiostat functionality. A field-portable, low-footprint design is described in [12], while a do-it-yourself unit capable of a wide range of electrochemical investigations is reported in [13].
In the simplified design reported here, otherwise expensive components are replaced with a low-cost, simplified circuit architecture. It is believed that the low component count and low cost of this design will contribute further to making the potentiostat ubiquitous in low-resource electrochemical laboratories and in budget products and applications.
The Potentiostat System
A schematic representation of the potentiostat system is presented in Figure 1. The ADC was required for process data acquisition and caching by software. Level shifters "bridge" different segments of the circuit. These were configured as bipolar-to-unipolar converters and unipolar-to-bipolar converters. The level shifters transform a voltage signal produced in one circuit module (e.g. a single- or dual-rail voltage) into the form required in another circuit module.
A C-Sharp based software interface was developed in-house for data acquisition and control of the circuit. The data acquired were exported to spreadsheet programs for detailed analysis.
The Potentiostat Control Circuit
The potentiostat control circuit is presented in Figure 2. In the figure, CE is the Counter Electrode, RE is the Reference Electrode, and WE is the Working Electrode. The main modules of this circuit are described as follows.
1) Module A

Module A is a differential amplifier. The operational amplifier (op-amp) amplifies the voltage difference between the non-inverting input (V 2 ) and the inverting input (V 1 ). This can be described by the following equation:

V out = (R 4 /(R 3 + R 4 )) × ((R 1 + R 2 )/R 1 ) × V 2 − (R 2 /R 1 ) × V 1

At the summing point,

(V 1 − V − )/R 1 = (V − − V out )/R 2

If V 2 = 0, then

V out = −(R 2 /R 1 ) × V 1

When R 1 = R 3 and R 2 = R 4 , then

V out = (R 2 /R 1 ) × (V 2 − V 1 )

If all the resistors are of the same value, that is R 1 = R 2 = R 3 = R 4 , then the circuit becomes a unity-gain differential amplifier (V out = V 2 − V 1 ).
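The transfer function above can be checked numerically. The helper below is an illustrative sketch: the resistor and voltage names follow the module description, while the values used are hypothetical.

```python
def diff_amp_vout(v1, v2, r1, r2, r3, r4):
    """Output of a four-resistor differential amplifier: V1 enters the
    inverting path through R1 with feedback R2; V2 enters the
    non-inverting input through the R3/R4 divider."""
    return (r4 / (r3 + r4)) * ((r1 + r2) / r1) * v2 - (r2 / r1) * v1
```

With all four resistors equal, the stage reduces to unity gain, V out = V 2 − V 1 ; with R 1 = R 3 and R 2 = R 4 it amplifies the difference by R 2 /R 1 .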
2) Module B

In Module B, the voltage output from Module A is fed into the non-inverting input of op-amp 2, while the op-amp's inverting input was directly connected to its output. Op-amp 2 was configured as a voltage follower. This effectively isolates the input from the output to prevent loading of the input signal. The voltage output from op-amp 2 was connected to the CE.
3) Module C

Module C comprises op-amp 3 configured as a voltage follower. This is an additional measure to prevent current flow through the RE. Current flow through the RE would polarise it, rendering it unreliable. Point Z in this module is a connection to a level-shifter circuit. The level shifter's task is to transform the voltage at Z into the 0 to +5 V range required by the microcontroller for the measurement of the RE's potential.
4) Module D
Module D is the current measurement module, based on the feedback ammeter principle. Here, the input current flows through the feedback resistor (R f = 100 Ω). The low offset current of the op-amp changes the current (I in ) by a negligible amount. Thus, the output voltage is a measure of the input current, and the sensitivity is determined by R f . Op-amp 4 is tied to real ground; voltage perturbations coming from the WE force the op-amp to output a voltage of equal magnitude but opposite polarity through the 100 Ω resistor. This action forces the potential values at the op-amp's inputs to the same value of zero. This feedback voltage is also measured at point Y. From this point it is transmitted to the level shifter for eventual connection to the microcontroller's ADC.
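Recovering the measured current from the feedback ammeter reading is a single application of Ohm's law across R f . The sign convention below (the op-amp output swings opposite to the input current) follows the module description; the function name is illustrative.

```python
def input_current(v_out, r_f=100.0):
    """Feedback ammeter: the op-amp drives its output so that
    V_out = -I_in * R_f, hence I_in = -V_out / R_f."""
    return -v_out / r_f
```

For example, a −50 mV reading across the 100 Ω feedback resistor corresponds to an input current of +0.5 mA.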
The DAC-Use of the R/2R Ladder Circuit
A typical R/2R resistor ladder is illustrated in Figure 3. In this circuit, the digital output word emanating from the microcontroller drives the ladder inputs shown in the diagram. The eventual voltage output, V out , is computed with the following equation:

V out = V ref × D / 2^N

where D is the decimal value of the N-bit digital word and V ref is the reference voltage. In this manner, the voltage value from the microcontroller's on-board program (which converts the desired voltage value into its binary equivalent of 0s and 1s) is converted to its actual analogue equivalent. This physical equivalent, called the control voltage, represents the actual voltage value desired. It is this voltage that is connected to point X in Module A (see Figure 2).
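The ideal conversion performed by the ladder can be sketched as follows; the 8-bit width and 5 V reference used here are illustrative assumptions, not values stated in the text.

```python
def r2r_dac_vout(word, n_bits=8, v_ref=5.0):
    """Ideal R/2R ladder DAC output: V_out = V_ref * D / 2**N for an
    N-bit digital word of value D."""
    if not 0 <= word < 2 ** n_bits:
        raise ValueError("digital word out of range for the bit width")
    return v_ref * word / (2 ** n_bits)
```

On an 8-bit, 5 V ladder the mid-scale word 128 yields 2.5 V, and the step size is 5/256 ≈ 19.5 mV.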
Investigation of the Ni 2+ /Ni Reduction Reaction
An investigation of the Ni 2+ /Ni reduction reaction was carried out using linear sweep voltammetry (LSV). This was done in a 3-electrode cell with a silver/silver chloride reference electrode and graphite working and counter electrodes. The surface area of the working electrode was 0.12568 cm 2 . The experiment was carried out using a 1 M solution of nickel sulphate (BDH Chemicals, Poole, England) prepared with distilled water. The surfaces of the graphite counter and working electrodes were polished to rid them of any adsorbed films and rinsed in de-ionised water. LSV was carried out at four different scan rates of 10, 20, 30 and 40 mV/s within a potential window of −0.1 to −1.5 V (vs. Ag/AgCl).
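The Randles-Sevcik check described in the abstract amounts to testing whether peak current is linear in the square root of scan rate. The sketch below performs the least-squares fit; the peak-current values used in testing it are hypothetical placeholders, not the measured data.

```python
from math import sqrt

def fit_ip_vs_sqrt_rate(scan_rates, peak_currents):
    """Least-squares fit of peak current against sqrt(scan rate);
    returns (slope, intercept). A good linear fit is consistent with
    diffusion-controlled (Randles-Sevcik) behaviour."""
    xs = [sqrt(v) for v in scan_rates]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(peak_currents) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, peak_currents))
    slope = sxy / sxx
    return slope, my - slope * mx
```

A near-zero intercept and a stable slope across the 10-40 mV/s range would support the Randles-Sevcik interpretation.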
Operation of the Potentiostat
The complete potentiostat circuit is presented in Figure 4. In Figure 4, op-amp 1 was configured as an inverting amplifier. The non-inverting input of the op-amp was tied to real ground, while the inverting input, at point A, strove to become a virtual ground. This was achieved when the op-amp output an inverted value of V_in. The op-amp sent the inverted quantity in a feedback loop, through the test solution and the reference electrode, back to the inverting input. At point A, the original and inverted values of V_in combined to give a net potential of zero. By this, the potential difference between the inverting and non-inverting inputs of op-amp 1 became zero. In this way, op-amp 1 succeeded in applying the control voltage (V_in) to the counter electrode. The reference and working electrodes, however, were connected together by virtue of being immersed in the same ionically conducting solution.
The potential (with respect to ground) at both the working and reference electrodes is equal in magnitude but opposite in sign to that applied to the counter electrode by op-amp 1. The potential applied to the counter electrode appeared at the working electrode through conduction by charged ionic species in the test solution. In order to establish the same potential at the working electrode, the applied potential at the counter electrode caused a quantity of charged species (ions), equivalent to the applied potential, to be produced or consumed at the counter electrode. The ionic species produced were the current conductors in the solution and were driven towards the working electrode under the influence of the potential difference. At the working electrode, the ionic conductors were exchanged for electrons at the electrode's surface.
In the circuit, provision was made for measuring the small exchange current. This was accomplished via op-amp 2, which was configured as a feedback ammeter. The feedback ammeter produces an output voltage equivalent to the exchange current. Op-amp 3 is used as a voltage follower and acts as a buffer for the reference electrode to prevent current flow through it. The buffer was essential to keep the potential at the reference electrode constant. This was naturally achieved in op-amp 3 due to its inherently large input impedance (several megaohms): current could not flow from the input to the output of the op-amp and instead followed the path of least resistance, through the working electrode. Op-amp 3 also acted as an inverting voltage follower, producing an inverted output of the reference electrode potential for onward summing at point A, for the attainment of virtual ground.
Output Voltage and Current
The microcontroller is capable of producing voltage in the range of 0-5000 mV through the R/2R ladder network. This is then level-shifted to between −2500 mV and +2500 mV, giving a voltage output range of 5000 mV. The microcontroller's 10-bit ADC reads this voltage at a resolution of 5000 mV / 2^10 ≈ 5 mV (Equation (12)). The current measurement resolution depends on this voltage-read resolution and on the value of R_f (i.e. 100 Ω). The minimum current detectable by the potentiostat circuit was thus determined using Ohm's law: I_min = 5 mV / 100 Ω = 50 µA. The maximum current measurable by the circuit is similarly determined by applying Ohm's law to the maximum measurable voltage (2500 mV) and the value of R_f: I_max = 2500 mV / 100 Ω = 25 mA. Considering positive and negative voltage perturbations, the measurable current range is therefore ±25 mA.

The incorporation of 16-bit and higher-precision ADC units would seem to suggest an automatic attainment of higher resolution. While this prospect is attractive, the reality is different. At 16-bit resolution and a full-scale range of 5 V (5000 mV), the voltage resolution is about 76 µV. This opens up numerous problems related to circuit instability and noise. Guaranteeing this sort of accuracy with digital switching current emanating from a microcontroller is not easily achieved without cost. The use of a higher-precision ADC (in a microcontroller) comes with the added responsibility of additional circuit components, such as a separate chip with separately filtered power, and a stringent circuit assembly protocol. Even ordinarily mundane tasks such as soldering, and the physical sizes of resistors, become serious considerations necessary to ensure stability. Environmental factors such as temperature, and interference from nearby electronics or electrical installations, must also be factored in, because they can be sources of noise and unreliability. If these considerations are not properly executed, then higher precision may actually cause more problems than it sets out to solve. Though the considerations are solvable, they come with costs: component cost, production cost, circuit complexity and labour cost. While these costs may be easily absorbed in high-end commercial developments, they go against the focus of the current study, which has as its thrust cost minimization and the availability of potentiostats in low-resource and constrained-budget environments.
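The resolution and current-range arithmetic in this section can be reproduced in a few lines of Python; all constants are the ones quoted in the text.

```python
FULL_SCALE_MV = 5000.0   # microcontroller output range through the R/2R ladder (mV)
ADC_BITS = 10            # on-board ADC width
R_F = 100.0              # feedback resistor of the ammeter stage (ohms)

v_res_mv = FULL_SCALE_MV / 2 ** ADC_BITS            # ~4.88 mV, quoted as ~5 mV
i_min_ua = v_res_mv / R_F * 1000.0                  # minimum detectable current (uA)
i_max_ma = (FULL_SCALE_MV / 2) / R_F                # 2500 mV / 100 ohm = 25 mA
v_res_16bit_uv = FULL_SCALE_MV / 2 ** 16 * 1000.0   # ~76 uV at 16-bit precision

print(v_res_mv, i_min_ua, i_max_ma, v_res_16bit_uv)
```

Considering positive and negative perturbations, the measurable range is ±i_max_ma.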
RE Input Impedance
In the potentiostat circuit, no current is allowed to pass to the RE, so that the reference potential remains unchanged during the experiment. To achieve this, the RE input impedance should be very high. The potentiostat control circuit in Figure 2 was built using the ST TL084 general-purpose JFET quad operational amplifier (STMicroelectronics, Switzerland), which has an input impedance of 10¹² Ω. This imparts an impedance of 10¹² Ω on the RE input.
Ni²⁺/Ni(s) Reduction Reaction
Linear sweep voltammograms of the Ni²⁺/Ni redox couple at different scan rates are presented in Figure 5. From the figure, the peak current (i_p) and reduction potential (vs. Ag/AgCl) at each scan rate were determined. These are presented in Table 1.
Since potentials were measured with respect to the Ag/AgCl reference electrode, equivalent values versus the standard hydrogen electrode (SHE) were obtained using the equation below [14]:

E (vs SHE) = E (vs reference) + E_reference (vs SHE)

where E (vs SHE) is the electrode potential relative to the SHE, E (vs reference) is the potential measured in the experiment relative to the Ag/AgCl reference, and E_reference (vs SHE) is the potential of the Ag/AgCl reference relative to the SHE. For the 3 M NaCl (aq) filling solution used in this work, E_reference (vs SHE) is 209 mV [15]. Equivalent values of reduction potential vs. SHE thus obtained are presented in Table 1.
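The conversion is a one-line offset; a minimal helper (ours, not the authors') applies the 209 mV figure for the 3 M NaCl Ag/AgCl reference:

```python
E_REF_VS_SHE_MV = 209.0  # Ag/AgCl with 3 M NaCl filling solution, vs SHE [15]

def to_she_mv(e_vs_agagcl_mv: float) -> float:
    """E(vs SHE) = E(vs Ag/AgCl reference) + E_reference(vs SHE), in mV."""
    return e_vs_agagcl_mv + E_REF_VS_SHE_MV

# The -0.97 V (vs Ag/AgCl) value cited for Ni2+/Ni(s) on graphite becomes:
print(to_she_mv(-970.0))  # -761.0 mV vs SHE
```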
It can be seen from the table that Ni²⁺ reduction occurs at increasingly negative potentials as the scan rate increases. These values of the reduction potential of Ni²⁺ differ from the standard value of −0.25 V (vs. SHE): the deposition of nickel on graphite requires a high nucleation overpotential.
A nucleation overpotential of 0.3 V has been reported for the deposition of nickel on graphite [16]. The electrodeposition of nickel also involves a significant amount of hydrogen co-evolution [17] and is highly dependent on pH; low pH tends to favour hydrogen evolution due to the small hydrogen evolution overpotential. In a sulphate bath, electrodeposition is further inhibited by SO₄²⁻ and hydrogen adsorption [18]. These sources of overpotential contribute to the occurrence of a greater negative potential at the graphite working electrode. A potential of −0.97 V (vs. Ag/AgCl) has been reported for Ni²⁺/Ni(s) reduction using a graphite working electrode [19].
The relationship between the peak current and the concentration of the analyte's ionic species is given by the Randles-Sevcik equation:

i_p = 0.4463 n F A C (n F ν D / (R T))^(1/2)

where i_p is the peak current (in amperes), n is the number of moles of electrons appearing in the half-reaction for the redox couple, ν is the rate at which the potential window is scanned (V/s), F is the Faraday constant (96,485 C/mol), A is the electrode's exposed surface area (cm²), C is the bulk concentration of the analyte (mol/cm³), R is the universal gas constant (8.314 J/mol K), T is the absolute temperature, and D is the analyte's diffusion coefficient (cm²/s).
If the ambient temperature is taken to be 25 °C (298.15 K), the equation becomes:

i_p = (2.69 × 10⁵) n^(3/2) A C D^(1/2) ν^(1/2)

If the concentration of the analyte (in this case, nickel sulphate), the peak current, the scan rate and the number of moles of electrons (n) are known, the Randles-Sevcik equation can be used to determine the diffusion coefficient of the analyte ions.
The reduction of nickel(II) ions proceeds according to the equation Ni²⁺ + 2e⁻ → Ni(s). Here n = 2. Using this value of n, together with the other known variables in the Randles-Sevcik equation, the diffusion coefficients (D) at the different scan rates were calculated. These are presented in Table 1. The average value of D was determined to be 3.557 × 10⁻⁸ cm²/s.
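To show how D falls out of the 25 °C form of the Randles-Sevcik equation, the sketch below inverts it for D. The electrode area, concentration and n = 2 come from the text; the 1.8 mA peak current is a hypothetical stand-in, since the Table 1 values are not reproduced here.

```python
import math

def diffusion_coefficient(i_p_a, n, area_cm2, conc_mol_cm3, scan_v_s):
    """Invert i_p = 2.69e5 * n**1.5 * A * C * sqrt(D * v) for D (cm^2/s)."""
    k = 2.69e5 * n ** 1.5 * area_cm2 * conc_mol_cm3 * math.sqrt(scan_v_s)
    return (i_p_a / k) ** 2

# 1 M NiSO4 = 1e-3 mol/cm^3; hypothetical 1.8 mA peak current at 10 mV/s
d = diffusion_coefficient(1.8e-3, n=2, area_cm2=0.12568,
                          conc_mol_cm3=1e-3, scan_v_s=0.010)
print(f"D = {d:.3e} cm^2/s")  # order 1e-8, the same order as the reported average
```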
The plot of peak current vs. (scan rate)^(1/2) is presented in Figure 6. A linear variation of peak current with the square root of the scan rate, as shown by the regression line through the data points, indicates that the system follows the Randles-Sevcik equation [20]. However, linearity alone does not establish the reversibility of the redox reaction [21]. The failure of the regression line to pass through the origin in Figure 6 indicates that the Ni²⁺/Ni(s) reduction is non-reversible.
The non-zero intercept can be attributed to the contribution of non-faradaic currents to the peak current [22].At increasingly higher scan rates diffusion becomes less effective as a means of mass transport to the electrode surface.
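The Figure 6 check (linear i_p vs. ν^(1/2), with a non-zero intercept) can be sketched with a small least-squares fit; the peak-current values below are illustrative stand-ins, not the measured data.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

scan_rates = [0.010, 0.020, 0.030, 0.040]    # V/s, as in the experiment
i_p = [1.80e-3, 2.45e-3, 2.95e-3, 3.38e-3]   # A, hypothetical values
slope, intercept = linfit([math.sqrt(v) for v in scan_rates], i_p)
print(slope, intercept)  # a clearly non-zero intercept signals non-reversibility
```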
Conclusion
This work has described the development steps and circuit architecture for a low-cost simplified potentiostat circuit. The microcontroller-based unit is able to demonstrate key potentiostat functionalities in the investigation of the Ni²⁺/Ni redox couple. The possible range of functions is expandable with expanded software algorithms. Applications for simplified potentiostat circuits transcend low-resource environments.
Figure 1. Schematic representation of the potentiostat system.
Figure 4. The complete potentiostat circuit.
Figure 6. A plot of peak current vs. square root of scan rate.
Table 1. Peak current, diffusion coefficient and reduction potential at different scan rates.
Human trafficking and severe mental illness: an economic analysis of survivors’ use of psychiatric services
Background: Previous studies have found a high prevalence of depression and post-traumatic stress disorder (PTSD) among survivors of human trafficking. European countries are required to assist trafficked people in their psychological recovery, but there are no rigorous data on the costs of doing so. The objectives of this study were to quantify the use of secondary mental health services by survivors of human trafficking; to estimate the cost of survivors' use of secondary mental health services provided by the UK National Health Service (NHS); and to identify factors that predict higher costs of mental health service provision.

Methods: Historical cohort study of psychiatric patients who had experienced human trafficking. The South London and Maudsley NHS Trust (SLaM) Biomedical Research Centre Case Register Interactive Search (CRIS) database was used to identify anonymised full patient records of patients who had experienced human trafficking and who had accessed SLaM mental health services between 2007 and 2012. Data were extracted on socio-demographic and trafficking characteristics and contacts with mental health services. Total costs were calculated by multiplying each resource use item by an appropriate unit cost. Factors that predicted high mental health service costs were analysed using regression models.

Results: One hundred nineteen patients were included in the analysis. Mean total mental health service costs per patient were £27,293 (sd 80,985) and mean duration of contact with services was 1490 (sd 757) days (approximately 4 years). Regression analysis showed that higher costs were associated with diagnosis of psychotic disorder (p < 0.001) and experiences of pre-trafficking violence (p = 0.06). Patients diagnosed with psychotic disorders cost approximately £32,635 more than patients with non-psychotic disorders/psychological distress, and patients whose clinical notes documented pre-trafficking violence cost £88,633 more than patients for whom pre-trafficking violence was not documented.

Conclusions: Trafficked patients' use of mental health services – and the cost of providing care – is highly variable, but patients with psychotic disorders and with experiences of pre-trafficking violence are likely to require more intensive support. Evidence is needed on the effectiveness of interventions to promote the recovery of survivors of human trafficking.
Background
Human trafficking is the recruitment and movement of people, most often by force, coercion or deception, for the purposes of exploitation [1]. Exploitation may include forced sex work and labour in settings such as domestic work, agriculture, and construction. Research has shown a high prevalence of mental health problems among victims of human trafficking in contact with support services, including depression, anxiety and post-traumatic stress disorder [2][3][4][5], and has demonstrated that secondary mental health services in the UK are providing care for survivors of human trafficking with a range of diagnoses, including schizophrenia and related disorders [6,7]. European law requires that governments assist victims of trafficking in their psychological recovery [8,9], but to date there are no rigorous data on the likely costs of doing so.
This study addresses this evidence gap by providing robust estimates of trafficked people's use of secondary mental health services and the associated cost to the UK National Health Service (NHS), and identifying factors that predict higher mental health service costs. The study uses data from a larger cohort study describing the socio-demographic, clinical, and service use characteristics of trafficked people in contact with secondary mental health services in South-East London, UK [6]. We hypothesised that the costs of mental health service use would be significantly higher among: 1) Trafficking survivors with a diagnosis of psychotic disorder, versus other diagnoses; 2) Trafficking survivors who experienced sexual exploitation, versus those who had experienced other forms of exploitation (e.g. domestic servitude, labour exploitation); 3) Trafficking survivors who experienced pre-trafficking violence, versus those who had not.
Study design
Historical cohort study of trafficked patients in contact with secondary mental health services.
Setting
The study used data from the South London and Maudsley NHS Foundation Trust (SLaM) Biomedical Research Centre Case Register Interactive Search (CRIS) database [10]. SLaM provides secondary mental health services to the London boroughs of Croydon, Lambeth, Lewisham and Southwark (a catchment area of approximately 1.2 million people), and has a near 100 % monopoly on provision. The CRIS database allows the searching and retrieval of anonymised patient records for over 200,000 patients in contact with SLaM services.
Participants
The study included SLaM service users whose clinical records indicated that they may have been trafficked for exploitation and who had one or more contact with SLaM services between 2007 and 2012. Trafficking was defined in accordance with the United Nations (UN) Optional Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children (i.e. the recruitment or movement of people, by means such as force, fraud, coercion, deception, and abuse of vulnerability, for the purposes of exploitation), and included international and internal trafficking [1]. Trafficking search terms (see Supplementary Information) were used to search the free-text clinical notes and correspondence of all patients in contact with SLaM services during the study period and to retrieve the records of patients whose records included one or more of the search terms. One researcher assessed the returned records for eligibility in the study (records that documented concerns that the patient may have been trafficked as per the UN definition of human trafficking); a second researcher (SO) independently assessed the eligibility of 10 % of the records. There were three scenarios by which healthcare professionals became aware that their patient had been trafficked: (1) the patient disclosed their experiences of exploitation; (2) the patient presented with signs of abuse or exploitation that led the professional to suspect trafficking, or (3) the healthcare worker was informed by another professional (e.g. law enforcement, immigration, social services, voluntary sector, other health professionals) that their patient had been trafficked. Less detail regarding the type of exploitation was typically recorded in the third situation, but correspondence between professionals included other relevant information that indicated that the patient met the study criteria e.g. 
that the patient was involved in criminal proceedings against their trafficker, was claiming asylum in relation to their experiences while trafficked, or was receiving social services or voluntary sector support as a victim of trafficking.
Data extraction and costing
Data were extracted on routinely recorded socio-demographic characteristics (e.g. gender, age, country of origin), clinical characteristics (e.g. International Classification of Disease-10 (ICD-10) diagnosis), mental health service characteristics (see Oram et al. for full details [6]), and mental health service use. Mental health service use data included information on the date and duration of each contact, the type of professional contacted, the type of contact (inpatient, outpatient, accident and emergency or indirect contacts) and whether or not the patient attended. Data were also extracted from free-text clinical notes on patients' experiences of physical and sexual violence prior to and during trafficking, and type of exploitation. Patients whose notes did not refer to violence prior to or during exploitation were categorised as not having experienced these types of abuse. Type of exploitation was categorised as sexual exploitation, domestic servitude, labour exploitation, financial exploitation (trafficking for benefit fraud, for example), or unknown. Total costs were calculated by multiplying each resource use item by an appropriate unit cost. All unit costs, in United Kingdom (UK) pound sterling, were for the financial year 2012-2013 and included national NHS reference costs for hospital contacts [11] and national average unit costs for community health services [12]. No adjustments were made for inflation but costs were discounted to reflect time preferences. Costs were assumed to occur at the beginning of each year [13], and the discount rate used was 3.5 %, based on the recommendations of the UK Treasury for the discounting of costs [14].
Indirect contacts (phone calls, letters, faxes and emails) were not costed, as the cost of these contacts is included in the published unit costs through the use of appropriate direct-to-indirect contact ratios. The cost of appointments not attended was assumed to be equal to the full cost of the appointment, which assumes the professional involved failed to make productive use of the time. This assumption was reduced to zero in sensitivity analysis, but this had little impact on the results, so only the main analyses are presented.
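As a sketch of the discounting convention just described (each year's costs incurred at the start of the year, 3.5 % rate), with an invented four-year cost stream in place of real patient data:

```python
DISCOUNT_RATE = 0.035  # UK Treasury rate used in the study

def discounted_total(annual_costs):
    """Sum yearly costs, with year 1 undiscounted and year t divided by
    (1 + r)**(t - 1), matching 'costs occur at the beginning of each year'."""
    return sum(cost / (1 + DISCOUNT_RATE) ** t
               for t, cost in enumerate(annual_costs))

# Illustrative 4-year stream (mean contact duration in the study was ~4 years)
print(round(discounted_total([8000, 7000, 6000, 5000]), 2))
```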
Data analysis
Total costs over the period that each patient was in contact with SLaM services are presented as mean, standard deviation, median and range. Factors associated with total costs were explored using regression analysis. A list of possible cost predictors was created based on previous research investigating risk of mental health problems among trafficked people and in collaboration with clinical members of the research team [5,15,16]. This included: gender, age at first contact, diagnosis (psychotic disorder versus other), type of exploitation (sexual versus other [domestic servitude, financial exploitation, labour exploitation or unknown]), and violence pre- and during trafficking (sexual versus other). First, univariate associations between each of the specified predictors and total costs were explored in a linear regression. All variables are categorical except age at first contact, which is presented in two groups split at the median. In addition, age is presented split at the legal age for adulthood (<18 versus 18 and older) to assess any differences in the two populations. Secondly, multiple regression was used to reduce the variable set to those factors independently associated with mental health service costs. The model initially included all variables that had univariate associations with total costs at a significance level of 10 %, discarding from the model all variables that were no longer found to be important. Variables that did not have a univariate association were then added, one at a time, and retained if they added significantly to the model, otherwise discarded. The model derived was checked to ensure that no variables excluded would make a significant additional contribution [17]. To confirm the validity of this approach, multiple regression was used with all independent variables included.
Cost data are commonly skewed and as a result the choice of regression method is not straightforward. Although the ordinary least squares assumptions may be violated, namely linearity and homoscedasticity, it is not appropriate to transform costs as analysis is then not concerned with the arithmetic mean but with the geometric mean, which is of less value to decision makers [18]. For this reason, the results of the model were checked against the results obtained from a generalised linear model using an identity link function to describe the scale on which covariates in the model are related to costs and assuming a gamma distribution function for the costs [18]. Results were compared with the results from a non-parametric bootstrap regression in order to assess the robustness of the confidence intervals and p-values to non-normality of the cost distribution.
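A minimal stand-in for the non-parametric bootstrap check on skewed costs might look like the following; the two cost samples and the percentile-interval convention are invented for illustration and are not the study data.

```python
import random

def bootstrap_mean_diff_ci(costs_a, costs_b, n_boot=2000, seed=1):
    """Percentile 95% CI for the difference in group mean costs,
    obtained by resampling each group with replacement."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(costs_a) for _ in costs_a]
        rb = [rng.choice(costs_b) for _ in costs_b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Hypothetical skewed samples (GBP): psychotic vs other diagnoses
psychotic = [60000, 45000, 120000, 30000, 80000, 25000]
other = [5000, 12000, 8000, 3000, 20000, 6000, 9000]
lo, hi = bootstrap_mean_diff_ci(psychotic, other)
print(lo, hi)  # an interval excluding zero mirrors a significant difference
```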
Ethics and consent
Ethical approval for the research use of CRIS-derived anonymised databases without the written informed consent of SLaM service users was granted by an independent Research Ethics Committee (Oxfordshire C, reference 08/H0606/71). An Oversight Committee reviews all applications to use CRIS, and gave approval for this study (11/025).
Results
A total of 119 patients were included in this analysis. The socio-demographic and clinical characteristics are summarised in Table 1 and are described in full elsewhere [6]. The majority of the sample were female (76 %, n = 91), and amongst them, two-thirds were trafficked for sexual exploitation (63 % of women). The median age at first contact with SLaM services was 22 years old (range from 8 to 49). The majority of the sample were diagnosed with non-psychotic disorders (72 %, n = 86). Psychotic disorders were present in just under a fifth of the sample (17 %, n = 20) and were more prevalent in men (39 %, n = 11) than women (10 %, n = 9). The remainder (11 %, n = 13) had psychological distress but no formal diagnosis ("psychological distress"). Violence prior to trafficking was documented in the records for almost half of the sample (48 %, n = 57), and violence during trafficking for 58 % (n = 69). Pre-trafficking violence was perpetrated by a variety of people, most commonly parents (14 %, n = 17), other family members (20 %, n = 24), and soldiers (10 %, n = 12), but also by acquaintances, strangers, teachers and pimps.
Use of psychiatric services by the cohort and the cost of these services are presented in Table 2. Patient records documented 5,171 contacts with SLaM services during the study period, and the vast majority of these (92 %) were with outpatient SLaM services: 87 % (n = 103) of trafficked patients had at least one contact with an outpatient SLaM service, approximately a third had one or more inpatient stay (32 %, n = 38) and almost half (46 %, n = 55) had one or more emergency department contacts. Mean cost per person using each service was highest for inpatient stays.
Total costs over the period for which trafficked patients were in contact with SLaM services are presented in Table 3. The mean duration of contact with services was approximately four years (1490 days, s.d. 757), with a mean total cost per patient of £27,293 (s.d. 80,985), or approximately £57 (s.d. 147) per patient per day. The distribution of the cost data in the cohort was positively skewed and thus the mean total costs are much higher than the median. This is illustrated in Fig. 1, which shows that only 20 patients had a total cost higher than the mean cost, with seven participants costing at least five times the mean cost.
Univariate associations between total mental health service costs per patient and key characteristics are shown in Table 4. Total mental health service costs per patient did not vary significantly according to type of exploitation, age at first contact with SLaM services or violence suffered during trafficking, and there was no difference comparing children (<18 years old) with adults (18 or over). Psychotic disorders were associated with higher total mental health service costs (p < 0.001) and documented history of pretrafficking violence showed a weak association (p = 0.060). Costs were higher for men, who were more likely to have a diagnosis of psychotic disorder, than for women, although not significantly so (p = 0.091). Table 5 shows the results of the two multivariate models: model 1 containing only those variables found to be significantly associated with mental health service costs and model 2 containing all independent variables. Model selection did not alter the results; in both cases, psychotic disorders remained significant (p < 0.0001) and pre-trafficking violence became significant (p = 0.017 model 1 and 0.019 model 2). The results suggest that trafficked patients diagnosed with psychotic disorders cost approximately £33,000 more than trafficked patients with non-psychotic disorders or psychological distress and trafficked patients with a documented history of pre-trafficking violence cost approximately £90,000 more than trafficked patients who did not. Gender was not associated with total mental health service costs in the multivariate regression analysis and no other variables became significant. Results from bootstrap regression analyses and those based on generalised linear models were not substantially different from the OLS regression results reported in Table 5. Repeating our analyses using only the sample of adults aged 18 or over did not change these results and so are not presented here.
Key findings
The study provides, for the first time, estimates of the use of secondary mental health services by trafficked people in England and the cost of secondary mental health care provision for this population. The mean duration of survivors' contact with SLaM mental health services was four years, and the mean cost of care was £27,293 (s.d. 80,985). This figure, however, disguises substantial variation. Two factors were identified as significant predictors of mental health service cost: diagnosis of psychotic disorder and documented history of pre-trafficking violence. Psychotic disorders were diagnosed in just under a fifth of the sample, and have been previously shown to be associated with more expensive mental health treatment [19]. Other disorders, including the more commonly diagnosed post-traumatic stress and depressive disorders, were associated with significantly lower costs to services. Previous research has demonstrated an association between experiences of pre-trafficking violence and mental health problems among non-clinical samples of trafficked women recruited from post-trafficking support services [5,15]. This is consistent with survey research suggesting that cumulative physical and sexual abuse is associated with a higher risk of mental disorder [20]. This study goes further by suggesting that among trafficking survivors with diagnosed mental disorders, those with experiences of pre-trafficking violence are likely to require more intensive mental health support. Neither gender nor type of exploitation was found to be associated with cost of mental health service provision.
Our finding that the majority of the sample were female and were trafficked for sexual exploitation is consistent with the national profile of identified cases of human trafficking during the period 2009-2012 (national statistics are not available for the period 2007-2008) [21]. In 2009, the UK introduced an identification and referral procedure to assess whether people were victims of human trafficking and therefore eligible for assistance. Between January 2009 and July 2012 there were 2,737 referrals: 70 % (n = 1918) of referrals were for females and 42 % (n = 1149) related to cases of sexual exploitation [21].
Due to resource limitations, we were not able to assess whether mental health service costs for trafficked people differ from those of non-trafficked patients. However, our previous finding that trafficked patients have a longer duration of inpatient admission and are more likely to be compulsorily admitted than matched non-trafficked patients suggests that costs may be higher for this patient group [6]. The mean duration of trafficked patients' contact with secondary mental health services far exceeds the standard duration of support both in the UK and elsewhere [22,23], and suggests there is a subgroup of trafficked people for whom long-term mental health, social, and welfare support will be vital. Yet, evidence on interventions to support the psychological recovery of trafficked people is lacking [24].
Strengths and limitations
Psychiatric case registers which include complete electronic health records have exciting potential for estimating service use and associated costs for patients who are usually difficult to recruit into clinical studies. This study used an innovative data resource that allowed the searching and retrieval of anonymised full patient records for over 200,000 cases recorded on the SLaM Patient Journey System, a system in which data on gender, age, diagnosis, and mental health service use are routinely recorded. However, other key characteristics of interest for this study, including patients' experiences of trafficking and experiences of violence, were not recorded in a standardised way and so could not be included or cannot be assumed to be entirely accurate [25].
All returned records were reviewed against the UN definition of human trafficking and against the study protocol, with an independent review of the first ten returned records and a random sample of a further 10 % of records by a second researcher. However, it is possible that patients inaccurately referred to as having experienced human trafficking by their care professionals may have been misclassified by the research team. A much larger number of trafficked patients are likely to not have been included in the sample because the professionals involved in their care were unaware that they have experienced trafficking or had not documented their concerns appropriately. In addition, pre-trafficking violence and violence during trafficking may not have been reported by all patients who disclosed they had been victims of trafficking, although the prevalence of pre-trafficking violence documented in the medical records of this sample is consistent with previous survey research with trafficked people [5,15,16], giving us some confidence in the rates recorded.
Cost estimates are limited to use of secondary mental health services, and do not include the use of other health services, including primary care, or the services provided by other sectors, such as Local Authority social services. Therefore, the cost results presented should be seen as a minimum for this population.
To our knowledge, there are no data on the number or characteristics of trafficked people in contact with mental health services elsewhere in England, and the generalizability of the findings beyond the study setting is unclear, including findings relating to characteristics predictive of higher cost. Further research in other settings is required.
Winding up by a quench: Insulator to superfluid phase transition in a ring of BECs
We study the phase transition from the Mott insulator to the superfluid in a periodic optical lattice. The Kibble-Zurek mechanism predicts buildup of a winding number through a random walk of BEC phases, with the step size scaling as the third root of the transition rate. We confirm this and demonstrate that this scaling accounts for the net winding number after the transition.
Introduction. -In a second order phase transition, the critical point is characterized by divergences in the correlation length and in the relaxation time. This critical slowing down implies that no matter how slowly a system is driven through the transition, its evolution cannot be adiabatic close to the critical point [2]. As a result, the state after the transition is not perfectly ordered: it is a mosaic of domains whose size depends on the rate of the transition. This scenario was first described in the cosmological setting by Kibble [1], who appealed to relativistic causality to set an upper bound on domain size. The dynamical mechanism that determines domain size in second order phase transitions was proposed by one of us [2]. It is based on the universality of critical slowing down, and predicts that the average size of the ordered domains ξ̄ scales with the transition time τ_Q as ξ̄ ∼ τ_Q^w, where w is a combination of critical exponents. This Kibble-Zurek mechanism (KZM) for second order thermodynamic phase transitions was confirmed by numerical simulations [3] and tested by experiments in liquid crystals [4], superfluid helium-3 [5], both high-T_c [6] and low-T_c [7] superconductors, and even in non-equilibrium systems [8]. With the exception of superfluid ⁴He, where the situation remains unclear [9], experimental results are consistent with KZM (see [10] for a review). Spontaneous appearance of vorticity during Bose-Einstein condensation driven by evaporative cooling was recently reported [11]. This confirms KZM predictions [12], and is further elucidated by numerical studies of BEC formation [13].
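For reference, the exponent w can be made explicit with the standard KZM argument. The equilibrium critical exponents ν and z used below are generic placeholders (the text above does not quote specific values), so this is a schematic derivation, not a result of the paper:

```latex
% Equilibrium scalings near the critical point, with \varepsilon the
% dimensionless distance from criticality:
%   \xi(\varepsilon) = \xi_0 |\varepsilon|^{-\nu}, \qquad
%   \tau(\varepsilon) = \tau_0 |\varepsilon|^{-\nu z}.
% For a linear quench \varepsilon(t) = t/\tau_Q, the dynamics freezes when
% the relaxation time equals the time remaining to the transition:
\[
  \tau_0\,|\hat\varepsilon|^{-\nu z} = \hat\varepsilon\,\tau_Q
  \quad\Longrightarrow\quad
  \hat\varepsilon \sim \left(\tau_0/\tau_Q\right)^{\frac{1}{1+\nu z}} ,
\]
% so the frozen-out correlation length, which sets the domain size, is
\[
  \bar\xi = \xi(\hat\varepsilon)
  \sim \xi_0\left(\tau_Q/\tau_0\right)^{\frac{\nu}{1+\nu z}},
  \qquad
  w = \frac{\nu}{1+\nu z} .
\]
```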
Our goal is to study the dynamics of a quantum phase transition in a simple yet non-trivial example that can be implemented experimentally. The quantum phase transitions we consider differ qualitatively from finite temperature transitions. Most importantly, evolution is unitary, so there is no damping, and no thermal fluctuations to initiate symmetry breaking. Recent work on the dynamics of quantum phase transitions is mostly theoretical [14,15,16,17,18,19,20,21,22,23], but there is one possible exception: Ref. [24] on the transition in a spin-1 BEC. The generic outcome of that experiment is a mosaic of ferromagnetic domains whose origin was attributed to a sudden quench limit of KZM. This explanation is supported by theory [25].
Model. -The Bose-Hubbard model is a paradigmatic example of a non-integrable quantum critical system. It describes cold bosonic atoms in an optical lattice [29]. In dimensionless variables, its Hamiltonian reads

H = −J Σ_{s=1}^{N} (a†_s a_{s+1} + a†_{s+1} a_s) + (1/2n) Σ_{s=1}^{N} a†_s a†_s a_s a_s.   (1)

Here N is the number of lattice sites and n is the average number of atoms per site. This model with periodic boundary conditions (which we assume) should be directly experimentally accessible in a ring-shaped optical lattice [30]. For an integer n, the transition from the Mott insulator (small J) to the superfluid phase (large J) is located at J_c ≃ n^{−2} [28]. We drive the system through its critical point by a linear quench with a quench timescale τ_Q:

J(t) = t/τ_Q.   (2)

In an experiment one can increase the Josephson coupling J by turning off the optical lattice potential as in [29]. The initial state is the Mott insulator ground state at J = 0,

|n, n, n, …, n⟩,   (3)

with the same atom number at each site. We assume n ≫ 1: this large-density limit is accessible experimentally.
Numerical approach. -We replace the annihilation operators a_s by a complex field φ_s, a_s ≈ √n φ_s, which is normalized, Σ_{s=1}^{N} |φ_s|² = N, and evolves with the time-dependent Gross-Pitaevskii equation

i dφ_s/dt = −J (φ_{s+1} + φ_{s−1}) + |φ_s|² φ_s.   (4)

These approximations are accurate for n → ∞, when the critical point J_c ≃ n^{−2} → 0.
In the truncated Wigner method we employ, quantum expectation values are given by averages over stochastic realisations of the field φ_s(t) [13,26,27]. For example, the correlation function becomes

C_R = ⟨a†_{s+R} a_s⟩/n = \overline{φ*_{s+R} φ_s}.   (5)

Here ⟨..⟩ means a quantum expectation value while the overline is an average over realizations. All realizations of φ_s(t) evolve with the same deterministic Gross-Pitaevskii equation (4), but they start from different random initial conditions which come from a probability distribution depending on the initial quantum state. The initial Mott state (3) corresponds to initial fields with independent random phases θ_s ∈ [0, 2π):

φ_s(0) = e^{iθ_s}.   (6)

The Mott state has the same number n of particles at each site (i.e., |φ_s(0)| = 1) and, hence, indeterminate phases, which translate into the random θ_s.
Kibble-Zurek mechanism. -In an optical lattice with BEC pools that become gradually connected by Josephson couplings in accord with Eq. (2), it is natural to rephrase KZM: rather than seek the distance ξ̄ over which the phase remains more or less the same, we compute the size ∆θ_s of a typical phase step between neighboring sites. One could use it to deduce the size of the domains ξ̄ over which the winding number changes by one, and get the accumulated phase from the square root of the circumference of the whole ring of BEC pools measured in units of ξ̄, as in [2]. However, the same result obtains from a random walk between neighboring sites, with the corresponding step size ∆θ_s. We now compute ∆θ_s as a function of τ_Q.
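The simulation loop just described is compact enough to sketch in full. The code below is an illustrative reconstruction rather than the authors' implementation: the explicit dimensionless form of Eq. (4), i dφ_s/dt = −J(φ_{s+1} + φ_{s−1}) + |φ_s|²φ_s with J(t) = t/τ_Q, the fourth-order Runge-Kutta integrator, and all numerical parameters are assumptions chosen only to be consistent with the text.

```python
import cmath
import math
import random

def gpe_rhs(phi, J):
    """dphi_s/dt for the assumed lattice GPE:
    i dphi_s/dt = -J*(phi_{s+1} + phi_{s-1}) + |phi_s|^2 * phi_s."""
    N = len(phi)
    return [-1j * (-J * (phi[(s + 1) % N] + phi[(s - 1) % N])
                   + abs(phi[s]) ** 2 * phi[s])
            for s in range(N)]

def rk4_step(phi, t, dt, tau_q):
    """One classical fourth-order Runge-Kutta step with J(t) = t/tau_q."""
    f = lambda p, tt: gpe_rhs(p, tt / tau_q)
    k1 = f(phi, t)
    k2 = f([p + 0.5 * dt * k for p, k in zip(phi, k1)], t + 0.5 * dt)
    k3 = f([p + 0.5 * dt * k for p, k in zip(phi, k2)], t + 0.5 * dt)
    k4 = f([p + dt * k for p, k in zip(phi, k3)], t + dt)
    return [p + (dt / 6.0) * (a + 2 * b + 2 * c + d)
            for p, a, b, c, d in zip(phi, k1, k2, k3, k4)]

def winding_number(phi):
    """W = (1/2pi) * sum_s Arg(phi_{s+1} phi_s^*); exact integer on a ring,
    up to floating-point noise removed by round()."""
    N = len(phi)
    total = sum(cmath.phase(phi[(s + 1) % N] * phi[s].conjugate())
                for s in range(N))
    return round(total / (2 * math.pi))

random.seed(0)
N, tau_q, dt = 16, 5.0, 0.002          # illustrative parameters only
# Mott-like start: unit amplitude, independent random phases (Eq. (6)).
phi = [cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]
t = 0.0
while t < 2.0 * tau_q:                  # quench until J = t/tau_q reaches 2
    phi = rk4_step(phi, t, dt, tau_q)
    t += dt
print(winding_number(phi))
```

A production run would average the winding number and correlators over many such realizations and much larger rings.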
The Gross-Pitaevskii equation (4) can be linearized in small fluctuations δφ_s around the uniform large background, φ_s = 1 + δφ_s, and δφ_s can be expanded in Bogoliubov modes as

δφ_s = (1/√N) Σ_k [ b_k u_k e^{iks} + b*_k v*_k e^{−iks} ].

In the Josephson regime, when J ≪ 1, we have v_k ≈ −u_k, so that the purely imaginary δφ_s in φ_s = 1 + δφ_s is a phase fluctuation. However, for our random initial conditions (6), this linearization is justified only for the short-wavelength modes of φ_s, with k ≈ ±π, for which the modes with longer wavelength are a locally uniform large background. From now on we focus on the short-wavelength modes because they determine the variance of the nearest-neighbor ∆θ_s.
When k ≈ ±π and J ≪ 1, then

ω_k ≈ 2√(2J).   (7)

Early in the linear quench (2) this ω_k is small, so that the early evolution of the short-wavelength modes is approximately impulse, i.e. their magnitude remains the same as in the initial Mott state and, consequently, ∆θ_s² ≃ 1 in this impulse stage. The impulse approximation breaks down at Ĵ [2], when the transition rate ω̇_k/ω_k equals ω_k and evolution becomes adiabatic. Eq. (7) leads to

Ĵ ≃ τ_Q^{−2/3},   (8)

which is consistent with Ĵ ≪ 1 when τ_Q ≫ 1. The crossover from impulse to adiabatic evolution at Ĵ is the key ingredient of KZM. In the subsequent adiabatic evolution after Ĵ but before J ≈ 1, short-wavelength phase fluctuations scale as δφ_s ∼ J^{−1/4} because the mode amplitudes |b_k| do not change, but u_k and v_k follow the stationary Bogoliubov modes u_k ≈ −v_k ≈ −1/[2(2J)^{1/4}]. Consequently, ∆θ_s has variance scaling as

∆θ_s² ≃ (Ĵ/J)^{1/2} ∼ τ_Q^{−1/3} J^{−1/2}.   (9)

On the other hand, when J ≫ 1 the stationary modes u_k ≈ 1 and v_k ≈ 0 do not depend on J, and ∆θ_s² does not depend on J either. This means that ∆θ_s² must stabilize between the regimes of J ≪ 1 and J ≫ 1, i.e. around J ≃ 1, where it takes its final value ∆θ_s² ∼ τ_Q^{−1/3}, which scales with a power of w = 1/3. This variance determines e.g. the correlator C_1 for τ_Q ≫ 1. The kinetic hopping energy per particle K_1 is expected to stabilize for J ≫ 1, when the hopping term dominates over the non-linearity in Eq. (4) and K_1 becomes an approximate constant of motion, see Fig. 1. The key ingredients of KZM are confirmed by our simulations: the phase performs a random walk that is Markovian to a good approximation. Moreover, as seen in Fig. 1, its size is consistent with the above predictions.
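As a consistency check on the freeze-out estimate, the adiabatic-impulse condition can be worked out explicitly, assuming the linear quench takes the explicit form J(t) = t/τ_Q (consistent with, though not spelled out in, the text above):

```latex
% Adiabatic-impulse crossover: \dot\omega_k/\omega_k = \omega_k at J = \hat J.
% With \omega_k \approx 2\sqrt{2J} and J(t) = t/\tau_Q, so \dot J = 1/\tau_Q:
\[
  \frac{\dot\omega_k}{\omega_k} = \frac{\dot J}{2J} = \frac{1}{2J\tau_Q}
  \;\stackrel{!}{=}\; 2\sqrt{2J}
  \quad\Longrightarrow\quad
  \hat J^{3/2} = \frac{1}{4\sqrt{2}\,\tau_Q}
  \quad\Longrightarrow\quad
  \hat J \simeq \bigl(4\sqrt{2}\,\tau_Q\bigr)^{-2/3} \sim \tau_Q^{-2/3}.
\]
```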
Winding number. -The condensate wavefunction is single-valued. Therefore, the phase Θ_R accumulated after R = N steps defines an integer winding number:

W_N = Θ_N / 2π,  Θ_R = Σ_{s=1}^{R} Arg(φ_{s+1} φ*_s),   (11)

where Arg(...) ∈ (−π, π]. A random walk of phase, with the variance of nearest-neighbor phase differences scaling as in Eq. (9), gives winding numbers with variance

W_N² = N ∆θ_s² / 4π² ∼ N τ_Q^{−1/3}.   (12)

There are two limits where this scaling is bound to fail. For very fast quenches with τ_Q ≪ 1, phases are completely random between neighboring sites, so ∆θ_s² = π²/3, and W_N² = N/12. For quenches so slow that W_N² < 1, the nature of the problem changes, leading to a steeper falloff of W_N² with τ_Q [7,16]. Between these two limits, the 1/3 scaling in Eq. (12) for the winding number is confirmed by our numerical results in Fig. 2.
Correlation function. -Constant amplitude and a Gaussian distribution of the phase Θ_R after R steps imply

C_R ≃ e^{−σ_R²/2},   (13)

where σ_R is the dispersion of Θ_R = Σ_{s=1}^{R} Arg(φ_{s+1} φ*_s), which after R = N steps becomes the winding number in Eq. (11), i.e. W_N = Θ_N/2π. For a random walk, σ_R² = R ∆θ_s², which leads one to expect

C_R ≃ e^{−R/ξ},  ξ = 2/∆θ_s².   (14)

Using Eq. (9) we would expect the scaling ξ ≃ τ_Q^{1/3}. Numerical simulations confirm exponential correlations, see Fig. 3, but correlation lengths ξ measured at J = 10 are better fitted by ξ ≃ τ_Q^{0.45}. On the other hand, early on in the quench, for smaller values of J ≪ 1, the correlation length exhibits ξ ≃ τ_Q^{1/3}. It seems that intermediate scales are subject to phase ordering between the freezeout at Ĵ ≃ τ_Q^{−2/3} and the final J = 10. Similar post-transition phase ordering was observed in the integrable quantum Ising chain [21].
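The random-walk picture behind this scaling is easy to test in isolation. The sketch below (an illustration, not the paper's numerics) draws independent uniform site phases, computes the winding number from wrapped nearest-neighbour phase steps as in Eq. (11), and Monte-Carlo estimates the mean of W², which for completely random phases should equal N/12 (here 2.0 for N = 24):

```python
import math
import random

def wrap(x):
    """Map a phase difference into [-pi, pi)."""
    return x - 2 * math.pi * math.floor((x + math.pi) / (2 * math.pi))

def winding_number(thetas):
    """W = (1/2pi) * sum of wrapped nearest-neighbour phase steps around the
    ring. On a closed ring the sum is an exact multiple of 2*pi, so round()
    only removes floating-point noise."""
    n = len(thetas)
    total = sum(wrap(thetas[(s + 1) % n] - thetas[s]) for s in range(n))
    return round(total / (2 * math.pi))

def mean_w2(n_sites, trials, rng):
    """Monte-Carlo estimate of <W^2> for i.i.d. uniform site phases."""
    acc = 0
    for _ in range(trials):
        thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_sites)]
        acc += winding_number(thetas) ** 2
    return acc / trials

rng = random.Random(1)
N = 24
est = mean_w2(N, 4000, rng)
print(est, N / 12)  # estimate vs the analytic fast-quench value N/12 = 2.0
```

The same machinery, fed with small correlated phase steps instead of uniform ones, reproduces the slow-quench regime where the mean of W² tracks N times the step variance divided by 4π².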
On the other hand, the winding number variance continues to scale as τ_Q^{−1/3}, see Fig. 2. It is not too surprising that it is insensitive to phase ordering: while in our simulations the winding number is not really stable following the freezeout, it changes much less frequently than smaller-scale excitations, as its topological nature leads one to expect.
Summary. -We have investigated the process of making a single condensate wavefunction out of many (N) initially independent BEC pools. We conclude that, in the ring geometry, the overall winding number W_N (which will set up a persistent current) can be predicted using the simple idea of a random walk in phase between the initially independent BEC fragments [2]. For very quick quenches this leads to saturation at W_N² = N/12. Slower quenches lead to scaling of W_N with the rate of reconnection that can be inferred from the Kibble-Zurek mechanism.
Correlation functions also exhibit behavior consistent with a random walk in phase. Initially, correlations scale in a way that is directly related to the healing length at the instant when the dynamics of the system becomes faster than the rate of change of its Hamiltonian [2]. However, while the winding number "remembers" this scaling as the Josephson couplings increase, correlations on smaller scales evolve. In thermodynamic transitions, similar phase ordering associated with diffusion is responsible for the post-transition smoothing of the order parameter structure, so that, eventually, only topological defects still "remember" the initial state of the system. In our model, evolution is completely reversible. Therefore, diffusion cannot smooth out small scale structures. However, the evolution itself appears to redistribute energy between the excitations. This may be regarded as a quantum analogue of phase ordering. Correlations on intermediate scales change, but (as was also the case in thermodynamic phase transitions) small-scale evolution does not affect the topologically protected winding number W_N.
Our model ignores decoherence and damping that are likely to intervene in laboratory experiments with, say, gaseous BECs. It is relatively easy to modify the equations and introduce damping "by hand". There is, however, no unique prescription for it (although one could appeal to the presence of a dilute thermal cloud, as in simple models of BEC decoherence [31]). In experiments dissipation and decoherence are inevitable. We expect dissipation to affect small scales, but to leave the topologically conserved W_N intact. This is based on a limited number of simulations we have conducted where different models of dissipation were tried out. Above all, this is corroborated by the experiment [11] where sudden reconnection of N = 3 uncorrelated condensates led to relaxation to a condensate with stable vortices, i.e. a stable winding number. It is also consistent with the recent numerical results [32].
Acknowledgements. -This work was supported by DoE LDRD program at Los Alamos, the Polish Government project N202 079135, and the Marie Curie ATK project COCOS (MTKD-CT-2004-517186).
A longitudinal study of pre-pregnancy antioxidant levels and subsequent perinatal outcomes in black and white women: The CARDIA Study
Background
Although protective associations between dietary antioxidants and pregnancy outcomes have been reported, randomized controlled trials of supplementation have been almost uniformly negative. A possible explanation is that supplementation during pregnancy may be too late to have a beneficial effect. Therefore, we examined the relationship between antioxidant levels prior to pregnancy and birth outcomes.
Methods and findings
Serum carotenoids and tocopherols were assayed in fasting specimens at 1985–86 (baseline) and 1992–1993 (year 7) from 1,215 participants in the Coronary Artery Risk Development in Young Adults (CARDIA) study. An interviewer-administered quantitative food-frequency questionnaire assessed dietary intake of antioxidants. Pregnancy outcome was self-reported at exams every 2 to 5 years. Linear and logistic regression modeling was used to assess relationships of low birthweight (LBW; <2,500 g), continuous infant birthweight, preterm birth (PTB; <37 weeks) and length of gestation with antioxidant levels, adjusted for confounders, as well as interactions with age and race.
Results
In adjusted models, lycopene was associated with higher odds of LBW (adjusted odds ratio for top quartile, 2.15, 95% confidence interval 1.14, 3.92) and shorter gestational age (adjusted beta coefficient -0.50 weeks). Dietary intake of antioxidants was associated with lower birthweight, while supplement use of vitamin C was associated with higher gestational age (0.41 weeks, 0.01, 0.81).
Conclusions
Higher preconception antioxidant levels are not associated with better birth outcomes.
Introduction
Nutrition is considered a key to good maternal health during pregnancy, as well as to good birth outcomes. Partly because oxidative stress has been implicated in a number of pregnancy complications [1,2], interest in antioxidants as a possible preventive factor has grown. A number of observational studies have indicated that higher maternal antioxidant levels, measured in serum or plasma, during pregnancy are associated with better birth outcomes. For instance, higher maternal vitamin A has been associated with a lower risk of preterm labor/birth [3] and higher birth length [4]. Vitamins C and E have been associated with a lower risk of small-for-gestational-age [5] and higher birthweight and length [6]. Other antioxidants such as α-tocopherol have also been shown to be associated with higher birthweight [7], and a case-control study of spontaneous preterm birth (PTB) found that plasma concentrations above the median of α-carotene, β-carotene, α-cryptoxanthin, β-cryptoxanthin, and lycopene were associated with lower risk of PTB, while high γ-tocopherol was associated with higher risk of PTB [8]. These studies were conducted within medical care settings and for the most part on women unlikely to be actively malnourished, although the range of countries was considerable (U.S. [7], Canada [8], South Korea [6], Algeria [5], China [4], and Turkey [3]).
Although these observational studies demonstrated significant protective associations between certain antioxidants and birth outcomes, the results of randomized controlled trials of supplementation to reduce pregnancy complications have been almost uniformly disappointing. It should be noted that the randomized controlled trials primarily addressed supplement use, while the observational trials primarily examined measured levels in biological samples and selfreported dietary intake. Generally, randomized controlled trials have found no effect on smallfor-gestational-age [9]. Reviews concluded that the data are insufficient to assess the effects of vitamin C on birthweight, but there is a possibility of higher risk of PTB with supplementation [10], and the data did not support a positive effect of vitamin E on any outcome [11]. The interventions in these cases were conducted in a wide variety of populations and often focused on women at high risk for pre-eclampsia, with birthweight or PTB as a secondary outcome.
A possible explanation for these discordant results is that supplementation during pregnancy is too late to have a beneficial effect. Some studies indicate a stronger effect for periconceptional or preconceptional antioxidant or multivitamin intake than for intake during pregnancy. For instance, a study of 2064 pregnant women in North Carolina found that dietary intake of vitamin C below the 10th percentile was associated with preterm premature rupture of membranes [12], and this association was stronger for preconception intake than for the level of intake in the 2nd trimester. During the preconception period, general multivitamin use has been associated with a lower risk of pre-eclampsia [13], PTB, and preterm labor [13][14][15], but a higher risk of early fetal losses [16]. Multivitamin use was particularly protective in women with BMI <25, but not in overweight or obese women [14]. The periconceptional use of a multivitamin was associated with higher birthweight in black infants [17], fewer early preterm births [18], and a lower risk of SGA and PTB [14]. These studies were conducted in the U.S. or northern Europe. Preconception and continued supplementation of vitamin A and β-carotene has been associated with lower maternal mortality in Nepal [19]. Given these studies, it may be that the very early period is key for setting the stage for the pregnancy (e.g., for placentation), or that the woman's health and nutrition status before pregnancy are crucial for improving pregnancy health.
These studies share the limitations of relying on women's self-reported vitamin use during pregnancy and retrospective report of vitamin use pre-pregnancy, and no preconception studies use biomarkers to measure the antioxidant levels. Also, they generally focus on white women. In this analysis, we examine whether serum antioxidant levels, as well as diet and supplement use, prior to the pregnancy predict birthweight and gestational age. We hypothesized that higher antioxidant levels would be associated with higher birthweight and longer gestations, and that the associations may differ by maternal age at first birth and by race.
Methods
The Coronary Artery Risk Development in Young Adults (CARDIA) Study is a multi-center, longitudinal, observational study designed to describe the development of risk factors for cardiovascular disease in young black and white men and women. In 1985-86 (year 0), 2,787 women (52% black) aged 18-30 were enrolled at the baseline exam from four geographic areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota; and Oakland, California [20,21]. Participants were re-examined 2, 5, 7, 10, 15 and 20 years after baseline. The overall retention rate was 72 percent of the surviving cohort 20 years later. A variety of clinical, laboratory and lifestyle measures were obtained using standardized methods at baseline and followup exams [22,23]. Institutional Review Boards at each participating study center approved all study years. Written, informed consent was obtained from subjects for all study procedures. The IRB of Tulane University ruled this secondary data analysis exempt from review.
The subset for this analysis included women with antioxidant measures assessed at year 0 or 7 as part of the Young Adult Longitudinal Trends in Antioxidants ancillary study. One thousand five hundred sixty-nine women had both pregnancy and antioxidant data (flowchart in S1 Fig). Compared to women who did not report pregnancies at any time and who participated in at least one follow-up, women who reported a pregnancy had higher serum levels of β-cryptoxanthin and lower levels of γ-tocopherol (p<0.05); there was no difference in the levels of the other measures. Women who were pregnant or breastfeeding at the time of the interview/biospecimen collection (n = 15) were excluded, due to potential changes in diet, supplement use, and antioxidant and lipid metabolism during those times. The final analytic sample of 1,215 women was limited to those with valid birthweight or gestational age information on one or more singleton live births delivered after index exam; the most common reason for not being included in this analysis was not having had any post-baseline births (n = 339).
There were three sources of information about antioxidant levels: serum measurement, reported diet, and reported supplement intake. Details of the antioxidant serum analyses have been provided elsewhere [24]. Briefly, blood samples were drawn after an overnight fast. Serum obtained at the baseline exam and the year 7 exam was used to assay the carotenoids α- and β-carotene, lycopene, zeaxanthin/lutein, and β-cryptoxanthin and the tocopherols α-tocopherol and γ-tocopherol at the Molecular Epidemiology and Biomarker Research Laboratory, University of Minnesota, USA. The carotenoids and tocopherols were measured by an HPLC-based assay at years 0 and 7. The coefficients of variation were less than 10% for all analytes and control pools, and year 0 and 7 measurements were correlated at between r = 0.4 and r = 0.7.
The CARDIA Diet History, an interviewer-administered quantitative food-frequency questionnaire designed to be a comprehensive assessment tool for habitual intake [25], identified 1609 distinct food items at baseline and year 7. Overall, its reliability and validity have been shown to be good in whites though less consistent in blacks [26]; antioxidants were not specifically assessed in the validation study. Intake of antioxidants and total energy was based on calculations from the Diet History [26], a modified version of the Western Electric dietary history; the database from the Nutrition Coordinating Center for the Multiple Risk Factor Intervention Trial and the Lipid Research Clinics was used to calculate the nutrients from this questionnaire. The A Priori Diet Quality Score was used as a measure of diet quality [27]. This measure has been shown to be associated with health outcomes, including longevity, diabetes, myocardial infarction, and biomarkers for cardiovascular disease [28]. Dietary supplements were queried as part of the Diet History: "Do you take vitamin or mineral supplements, what kind, how many, and how often?" Answers were open-ended and amounts were added to the nutrient amounts for each participant (nutrients were recorded in the database both with and without supplements). One carotenoid (α-carotene) and three tocopherols (α-, γ-, δ-tocopherol) were examined in diet, and two vitamins (A and E) and one tocopherol (α-tocopherol) in supplements. β-carotene supplement use was not sufficiently common to be analyzed. Diet and supplement measures were analyzed separately. Serum and diet measurements were correlated at between 0.2 and 0.3 (p<0.01). Seven women in the sample were missing information on either diet or supplement use.
Pregnancy outcomes were based on the woman's self-report for each pregnancy. At each follow-up exam, women were asked whether they had been pregnant since the previous exam; how the pregnancy had ended; the baby's birthweight; and length of gestation. The outcomes examined were birthweight, gestational age, and birthweight-for-gestational-age (z-score, limited to gestational ages between 32 and 43 weeks). Each outcome was examined as a continuous variable and as a dichotomous variable: low birthweight (LBW) defined as birth weight <2500 g, and PTB defined as gestational age at birth <37 weeks. To estimate growth restriction, term LBW (LBW among babies delivered after 37 weeks' gestation) and birthweight were also examined. A validation study of maternal report of gestational age at delivery among a subset of 211 CARDIA women using medical record abstractions has been conducted [29]. Sensitivity for preterm birth <34 weeks was 100%; specificity was 99%. Sensitivity was 67% and specificity was 89% for preterm births delivered at 34-36 weeks. The overall sensitivity for maternal report of ever delivering preterm (<37 weeks) was 84% (16/19), and the specificity was 89% (170/192). We conducted additional analysis of term low birthweight (to minimize the effect of the correlation of gestational age and birthweight); the results were similar and did not appear to add more information beyond the birthweight and gestational age analyses. A supplementary analysis of macrosomia (birthweight >4000 g) was also conducted. The first pregnancy after the first available antioxidant measure was used in analyses.
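The validation figures quoted above are simple ratio computations. As a minimal check, using the counts taken directly from the sentence above (16/19 preterm deliveries recalled; 170/192 term deliveries correctly reported):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Maternal recall of ever delivering preterm (<37 weeks):
# 16 of 19 true preterm deliveries recalled (3 missed);
# 170 of 192 term deliveries correctly reported as term (22 misreported).
sens, spec = sens_spec(tp=16, fn=3, tn=170, fp=22)
print(round(100 * sens), round(100 * spec))  # prints: 84 89
```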
Analyses were conducted considering the predictors both as quartiles and as continuous variables. Supplement intake was highly right-skewed; it was examined both as a log-transformed continuous variable and as yes/no for any supplement intake (too few women reported supplement use for more detailed categories to be useful). Multiple linear and logistic modeling was used to adjust for the a priori selected potential confounders of smoking (current/former/never), race (black/white), age at included pregnancy, BMI (continuous), education (highest degree based on years of school at the follow-up prior to the pregnancy), parity (0, 1, 2+), physical activity (total physical activity intensity score, in quartiles [30]), diet quality, and marital status (married/not). Interactions with age, race, smoking, parity, and BMI were examined using a product term in the model; 3-way interactions were also checked. Age at included pregnancy interactions were examined both as continuous and dichotomous variables; for ease of presentation, dichotomous results are provided (age ≤30 and >30).
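For intuition about the odds ratios and confidence intervals reported in the Results, the unadjusted calculation for a single 2x2 table can be done by hand with the Woolf (log-normal) interval. The counts below are made up purely for illustration (they are not CARDIA data), and the paper's own estimates come from multivariable logistic models, not this shortcut:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: LBW vs. not, top vs. bottom lycopene quartile.
or_, lo, hi = odds_ratio_ci(30, 270, 14, 290)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # prints: 2.3 1.19 4.43
```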
We also examined an interaction with time (both continuous and dichotomized at 2 years), to determine whether any associations differed if the measures were taken closer in time to the pregnancy. No interactions were found. Standardizing antioxidant levels for lipids is recommended in some circumstances [24], but lipid levels could also be an intermediate between antioxidant intake and birth outcomes [31,32], in which case this adjustment could bias the results. However, adjustment for lipids made no difference to the results.
Results
Mean age at the included pregnancy was 30.3 years. Women were evenly split between black and white races. Most women (63.5%) had a pre-pregnancy BMI in the normal range; 62.2% were never-smokers, and 45.5% were nulliparous at their CARDIA index exam. Of the first infants born after the antioxidant measurement, 8.3% were low birthweight and 18.3% were born preterm (Table 1). Carotenoid levels were on average higher in white women than in black women, and in women older than 30, although no differences were found in lycopene levels.
Serum antioxidant levels and birth outcomes
Among the carotenoids (Table 2), in unadjusted analysis, no associations were found with low birthweight, except for lycopene (unadjusted odds ratio (OR) for highest vs. lowest quartile 2.40, 95% confidence interval (CI) 1.31, 4.37; adjusted OR (aOR) 2.15, 1.14, 3.92). β-carotene was associated with a higher risk of preterm birth (aOR 1.62, 1.01, 2.26); the third quartile of β-cryptoxanthin was also associated with a higher risk (aOR 1.73, 1.11, 2.71). In unadjusted analysis (Table 3), α-carotene was found to be associated with higher birthweight (beta coefficient 158 g, 95% CI 51, 265 for the highest quartile) and birthweight for gestational age, while lycopene was found to be associated with lower birthweight (-130 g, 95% CI -235, -24); both associations were attenuated after adjustment. Lycopene was found to be associated with shorter gestational age (adjusted beta coefficient -0.50 weeks for the highest quartile, -0.99, -0.01). Among the tocopherols, only α-tocopherol (119 g, 31, 225) was found to be associated with higher birthweight, in unadjusted analyses only. Detailed examination of the results suggested that race, smoking, and pre-pregnancy BMI were the strongest confounders (i.e., they changed the effect estimates the most).
Interaction analysis
Significant interactions were found between age at included pregnancy and many of the serum antioxidants. Among younger (<30 years) women, higher carotenoids were associated with higher risk of LBW, while these patterns were not seen in older women (Table 4); for PTB, this pattern was seen for β-carotene. Further examination of 3-way interactions (age, race, and antioxidants) found that the higher risk of LBW associated with carotenoids was largely limited to the younger black women, while among the older black women, these factors were neutral or protective (p for 3-way interaction ≤ 0.05; S1 Table). Similar patterns, though not as strong, held for gestational age (Table 5).
Supplement antioxidant intake and birth outcomes
Intake of vitamin C and α-tocopherol was associated with higher birthweight, but adjustment for confounders attenuated these associations (S2 Table). Gestational age was higher with estimated vitamin C intake (adjusted beta coefficient 0.06, 0.01-0.12) and with any supplement intake of vitamin C (0.41 weeks, 0.01-0.81). Any intake of α-tocopherol supplements was associated with lower odds of LBW (adjusted OR 0.38, 0.19-0.76). Unlike serum antioxidant levels, there were no patterns of interaction between age at first pregnancy and dietary or supplement intake, and once age and race were accounted for, no interactions were found with smoking.
All results were similar when exposures were considered as continuous variables, and/or adjusted for lipids (supplementary material). No strong or consistent associations were found with macrosomia (S3 Table).
Discussion
In this study, we did not find significant support for the idea that preconception antioxidant levels reduce the risk of preterm birth or low birthweight. Most protective associations with serum or dietary antioxidants that we found were eliminated by adjustment for confounders, while a few associations indicating that higher antioxidants were associated with worse birth outcomes were found. On the other hand, supplement use of vitamin C and tocopherols was associated with lower risk of low birthweight (in some analyses), perhaps suggesting confounding by health-consciousness, self-care, or other nutrients included in multivitamins. This is consistent with the randomized trials that have found that supplementation during pregnancy does not improve birth outcomes [9,33], and may even worsen them [34][35][36]. The most extensive previous study of antioxidant levels during pregnancy, a study of 812 white nulliparous English women, found higher serum lutein was associated with higher risk of preterm premature rupture of membranes, but α-carotene, β-carotene, cryptoxanthin, lycopene, retinol, α-tocopherol, and γ-tocopherol were not [37]. Higher plasma γ-tocopherol was also associated with higher preterm birth risk in one study [8], while serum β-cryptoxanthin was associated with history of preterm birth in NHANES [38], so our results are broadly consistent with previous studies incorporating biomarkers. One notable finding was the fairly consistent interaction with age at pregnancy. Generally, the negative effects were found most strongly in younger women, and protective effects were largely limited to older women. This might indicate that effects of aging were somewhat offset by antioxidant use, but among younger women, negative effects predominated. This trend was particularly pronounced in Black women. 
A few studies have found race differences in antioxidant relationships with metabolism: lower β-carotene has been associated with insulin resistance [39] and oxidative stress has been associated with lower insulin sensitivity in African-Americans [40], but these are health-protective, rather than the negative effect we found. Research also suggests that multivitamin use may be particularly protective against low birthweight in Black women [17]. Although we did not find an interaction with BMI, and all results are adjusted for BMI, it is possible that differences in BMI or patterns of weight gain are also affecting metabolism. Age and race are both strong predictors of BMI, and there may be residual confounding.
Antioxidants have been hypothesized to improve birth outcomes by protecting the placental membranes from damage due to reactive oxygen species [41]. Oxidative DNA damage has been associated with worse fetal growth [42], and oxidative stress with lower birthweight and gestational length [43]. However, our results do not support that hypothesis. The association of blood tocopherols and lycopene with birth outcomes may be a reflection of consuming diets relatively high in fat and low in other nutrients. Such a preconception dietary pattern could be maintained during pregnancy, influencing development of the fetus and contributing to low birthweight. The dietary pattern can be associated with blood lycopene and tocopherols due to an increase in their absorption with high-fat diets; lycopene and tocopherols are the most lipophilic compounds of the antioxidants examined. Lycopene and tocopherols are not always indicators of fruit and vegetable intakes as are other carotenoids, but have been associated with meat intake in some situations [44]. The most common sources of lycopene, in particular, in the U. S. diet, are tomato products such as pizza sauce, which may not be associated with other positive nutritional behaviors [45]. This association may be partially influenced by blood lipids and adjustment for blood lipids had a small effect on the relationship (supplementary materials). A study in London found that South Asian vegetarian women had shorter duration of gestation and lower birthweights [46], but a larger study of preconception diet found no relationship between birth outcomes and vegetarian diet [47]. Generally, both meat-eating and vegetarian diets are considered adequate for pregnant women [48]. Thus, the overall quality of the diet may have a larger effect than the antioxidant effect of the lycopene and tocopherols. 
Strengths of the study were the prospective design, including measures prior to pregnancy; the biracial sample; and the biomarkers of antioxidant status. Weaknesses include the variable time period between the antioxidant measurements and the pregnancy; both diet and supplement intake as well as serum levels could vary considerably over the time period studied, which would likely reduce study power. Another consideration is the self-report of gestational age, which may have biased the findings toward the null; the proportion of preterm births is on the high side, suggesting a degree of measurement error. Measurement of nutritional intake is notoriously prone to error, and although we attempted to adjust for overall diet quality, such measures can be no more than an estimate. In addition, nutritional intakes covary, and factors such as folic acid, known to be associated with pregnancy health and to affect antioxidant metabolism [49], were not considered. However, as such factors are generally associated with better health, they would be more likely to create a spurious positive association than mask one. The large number of statistical tests makes multiple comparisons an issue, although results were fairly consistent across outcomes and within classes of nutrient. In addition, our study was conducted in the United States, and the women were not likely to be actively malnourished. Results might be different in a developing country, in particularly deprived populations, or in other race/ethnic groups.
Conclusions
Higher preconception antioxidant levels were not associated with better birth outcomes, and results were more consistent with worsened outcomes for some indicators in this sample.
S1 Fig. (DOCX)
S1 Table. Relationship between antioxidant status (continuous z-score) and subsequent birth outcomes, interaction with age and race. (DOCX)
S2 Table. Relationship between antioxidant intake from supplements and subsequent birth outcome. (DOCX)
Discriminative vs. Generative Approaches in Semantic Role Labeling
This paper describes the two algorithms we developed for the CoNLL 2008 Shared Task "Joint learning of syntactic and semantic dependencies". Both algorithms start by parsing the sentence using the same syntactic parser. The first algorithm uses machine learning methods to identify the semantic dependencies in four stages: identification and labeling of predicates, and identification and labeling of arguments. The second algorithm uses a generative probabilistic model, choosing the semantic dependencies that maximize the probability with respect to the model. A hybrid algorithm combining the best stages of the two algorithms attains 86.62% labeled syntactic attachment accuracy, 73.24% labeled semantic dependency F1 and 79.93% labeled macro F1 score for the combined WSJ and Brown test sets. 1
Introduction
In this paper we describe the system we developed for the CoNLL 2008 Shared Task (Surdeanu et al., 2008). Section 2 describes our approach for identifying syntactic dependencies. For semantic role labeling (SRL), we pursued two independent approaches. Section 3 describes our first approach, where we treated predicate identification and labeling, and argument identification and labeling as four separate machine learning problems. The final program consists of four stages, each stage taking the answers from the previous stage as given and performing its own identification or labeling task based on a model generated from the training set. Section 4 describes our second approach where we used a generative model based on the joint distribution of the predicate, the arguments, their labels and the syntactic dependencies connecting them. Section 5 summarizes our results and suggests possible improvements.
1 These numbers are slightly higher than the official results due to a small bug in our submission.
Syntactic dependencies
We used a non-projective dependency parser based on spanning tree algorithms. The parameters were determined based on the experimental results of the English task in (McDonald et al., 2005), i.e. we used projective parsing and a first order feature set during training. Due to the new representation of hyphenated words in both the training and testing data of our shared task and the absence of the gold part of speech (GPOS) column in the test data, the format of the CoNLL08 shared task is slightly different from the format of the CoNLL05 shared task, which is supported by McDonald's parser. We reformatted the data accordingly. The resulting labeled attachment score on the test set is 87.39% for WSJ and 80.46% for Brown.
A discriminative machine learning algorithm is trained for each stage using the gold input and output values from the training set. The following sections describe the machine learning algorithm, the nature of its input/output, and the feature selection process for each stage. The performance of each stage is compared to a most frequent class baseline and analyzed separately for the two test sets and for nouns and verbs. In addition we look at the performance given the input from the gold data vs. the input from the previous stage.
Predicate identification
The task of this stage is to determine whether a given word is a nominal or a verb predicate using the dependency-parsed input. As potential predicates we only consider words that appear as a predicate in the training data or have a corresponding PropBank or NomBank XML file. The method constructs feature vectors for each occurrence of a target word in the training and test data. It assigns class labels to the target words in the training data depending on whether a target word is a predicate or not, and finally classifies the test data. We experimented with combinations of the following features for each word in a 2k + 1 word window around the target: (1) POS(W): the part of speech of the word, (2) DEP(W, HEAD(W)): the syntactic dependency of the word, (3) LEMMA(W): the lemma of the word, (4) POS(HEAD(W)): the part of speech of the syntactic head.
We empirically selected the combination that gives the highest accuracy in terms of the precision and recall scores on the development data. The method achieved its highest score when we used features 1-3 for the target word and features 1-2 for the neighbors in a [-3 +3] word window. TiMBL (Daelemans et al., 2004) was used as the learning algorithm. Table 1 (4-stage, All1) shows the results of our learning method on the WSJ and Brown test data. The noun and verb results are given separately (Verb1, Noun1). To distinguish the mistakes coming from parsing we also give the results of our method after the gold parse (4-stage-gold). Our results are significantly above the most frequent class baseline which gives 72.3% on WSJ and 65.3% on Brown.
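The windowed feature extraction described above can be sketched as follows. This is a minimal illustration, not the actual shared-task code: the Token fields and the padding symbol are our own choices, and it builds features 1-3 for the target word and features 1-2 for each neighbor in a [-k, +k] window, as in the best configuration reported.

```python
from collections import namedtuple

# Hypothetical token record; field names are illustrative, not the CoNLL-08 column names.
Token = namedtuple("Token", "lemma pos dep head")  # head = index of syntactic head, or -1

def predicate_features(sent, i, k=3):
    """Build the feature vector for target word i using a [-k, +k] word window.

    Features 1-3 (POS, dependency label, lemma) for the target itself,
    features 1-2 (POS, dependency label) for each neighbor.
    """
    target = sent[i]
    feats = [target.pos, target.dep, target.lemma]
    for offset in range(-k, k + 1):
        if offset == 0:
            continue
        j = i + offset
        if 0 <= j < len(sent):
            feats += [sent[j].pos, sent[j].dep]
        else:
            feats += ["_PAD_", "_PAD_"]  # placeholder outside sentence bounds
    return feats
```

For k=3 this yields 3 + 6*2 = 15 symbolic features per candidate, which would then be handed to a memory-based learner such as TiMBL.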
Predicate labeling
The task of the second stage is deciding the correct frame for a word given that the word is a predicate. The input of the stage is 11-column data, where the columns contain part of speech, lemma and syntactic dependency for each word. The first stage's decision for the frame is indicated by a string in the predicate column. The output of the stage is simply the replacement of that string with the chosen frame of the word. The chosen frame of the word may be word.X, where X is a valid number in PropBank or NomBank.
The statistics of the training data show that by picking the most frequent frame, the system can pick the correct frame in a large percentage of the cases. Thus we decided to use the most frequent frame baseline for this stage. If a word is never seen in training, the first frame of the word is picked as the default.
In the test phase, the results are as follows: in the Brown data, assuming that stage 1 is gold, the score is 80.8%, noting that 11% of the predicates are not seen in the training phase. In WSJ, the score based on gold input is 88.3%, and only 5% of the predicates are not seen in the training phase. Table 1 gives the full results for Stage 2 (4-stage, Verb2, Noun2, All2).
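The most-frequent-frame baseline amounts to a lookup table of lemma-to-frame counts. A minimal sketch (the function names and the ".01" default for unseen lemmas are our own reading of "first frame of the word is picked as default"):

```python
from collections import Counter, defaultdict

def train_frame_baseline(training_pairs):
    """training_pairs: iterable of (lemma, frame) observations, e.g. ("say", "say.01")."""
    counts = defaultdict(Counter)
    for lemma, frame in training_pairs:
        counts[lemma][frame] += 1
    # Keep only the most frequent frame per lemma.
    return {lemma: c.most_common(1)[0][0] for lemma, c in counts.items()}

def predict_frame(model, lemma):
    # Unseen lemma: fall back to the first sense as the default frame.
    return model.get(lemma, lemma + ".01")
```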
Argument identification
The input data at this stage contains the syntactic dependencies, predicates and their frames. We look at the whole sentence for each predicate and decide whether each word should be an argument of that predicate or not. We mark the words we choose as arguments indicating which predicate they belong to and leave the labeling of the argument type to the next stage. Thus, for each predicate-word pair we have a yes/no decision to make.
As input to the learning algorithm we experimented with representations of the syntactic dependency chain between the predicate and the argument at various levels of granularity. We identified the syntactic dependency chain between the predicate and each potential argument using breadth-first search on the dependency tree. We tried to represent the chain using various subsets of the following elements: the argument lemma and part-of-speech, the predicate frame and part-of-speech, and the parts-of-speech and syntactic dependencies of the intermediate words linking the argument to the predicate.
The syntactic dependencies leading from the argument to the predicate can be in the head-modifier or the modifier-head direction. We marked the direction associated with each dependency relation in the chain description. We also experimented with using fine-grained and coarse-grained parts of speech. The coarse-grained part of speech consists of the first two characters of the Penn Treebank part of speech given in the training set.
We used a simple learning algorithm: choose the answer that is correct for the majority of the instances with the same chain description from the training set. Not having enough detail in the chain description leaves crucial information out that would help with the decision process, whereas having too much detail results in bad classifications due to sparse data. In the end, neither the argument lemma, nor the predicate frame improved the performance. The best results were achieved with a chain description including the coarse parts of speech and syntactic dependencies of each word leading from the argument to the predicate. The results are summarized in Table 1 (4-stage, Verb3, Noun3, All3).
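The chain extraction above can be sketched as a breadth-first search over the undirected tree, recording the direction of each dependency edge. This is a rough illustration assuming the tree is given as head indices and labels; the exact string format of the chain description is our own invention, not the paper's:

```python
from collections import defaultdict, deque

def dependency_chain(heads, deps, cpos, pred, arg):
    """Describe the path from candidate argument `arg` to predicate `pred`.

    heads[i]: index of word i's head (-1 for root); deps[i]: dependency label;
    cpos[i]: coarse part of speech (first two chars of the Treebank tag).
    Each edge is annotated with its direction (">" up toward the head,
    "<" down toward a modifier).
    """
    # Undirected adjacency over the tree, remembering edge direction.
    adj = defaultdict(list)
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append((h, deps[i] + ">"))   # modifier -> head
            adj[h].append((i, "<" + deps[i]))   # head -> modifier
    # BFS from the argument toward the predicate.
    prev = {arg: None}
    queue = deque([arg])
    while queue:
        u = queue.popleft()
        if u == pred:
            break
        for v, label in adj[u]:
            if v not in prev:
                prev[v] = (u, label)
                queue.append(v)
    # Reconstruct the chain: coarse POS plus the directed dependency
    # of each word leading from the argument to the predicate.
    chain, node = [], pred
    while prev[node] is not None:
        u, label = prev[node]
        chain.append(cpos[node] + ":" + label)
        node = u
    chain.append(cpos[arg])
    return "|".join(reversed(chain))
```

The yes/no classifier then simply stores, for each distinct chain string seen in training, whether the majority of its instances were arguments.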
Argument labeling
The task of this stage is choosing the correct argument tag for a modifier given that it is modifying a particular predicate. Input data format has additional columns indicating which words are arguments for which predicates. There are 54 possible values for a labeled argument. As a baseline we take the most frequent argument label in the training data (All1) which gives 37.8% on the WSJ test set and 33.8% on the Brown test set.
The features used to determine the correct label of an argument are either lexical or syntactic; in a few cases they are combined. The following list gives the set we have used. Link is the type of the syntactic dependency. Direction is left or right, depending on the location of the head and the modifier in the sentence. LastLink is the type of the dependency at the end of the dependency chain and firstLink is the type of the dependency at the beginning of the dependency chain.
Feature1: modifierStem + headStem
Feature2: modifierStem + coarsePosModifier + headStem + coarsePosHead + direction
Feature3: coarsePosModifier + headPos + firstLink + lastLink + direction
Feature4: modifierStem + coarsePosModifier
The training phase includes building simple histograms based on the four features. Feature1 and Feature2 are sparser than the other two features and are better features as they include lexical information. The last two features are less sparse, covering most of the development data, i.e. their histograms give non-zero values in the development phase. In order to match all the instances in the development data and use the lexical information where available, a cascade of the features is implemented, similar to the one used by Gildea and Jurafsky (2002), although no weighting is used, only a kind of back-off smoothing. First, a match is searched in the histogram of the first feature; if not found, it is searched in the following histogram. After a match, the most frequent argument label with that match is returned. Table 1 gives the performance (4-stage, Verb4, Noun4, All4).
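The cascaded back-off lookup can be sketched as a list of feature templates tried in order, sparsest (most lexical) first. The field names and the default label below are illustrative assumptions, not the paper's actual identifiers:

```python
from collections import Counter, defaultdict

# Back-off order mirrors Feature1..Feature4 from the text.
TEMPLATES = [
    lambda x: (x["mod_stem"], x["head_stem"]),                                            # Feature1
    lambda x: (x["mod_stem"], x["mod_cpos"], x["head_stem"], x["head_cpos"], x["dir"]),   # Feature2
    lambda x: (x["mod_cpos"], x["head_pos"], x["first_link"], x["last_link"], x["dir"]),  # Feature3
    lambda x: (x["mod_stem"], x["mod_cpos"]),                                             # Feature4
]

def train(instances):
    """instances: iterable of (feature_dict, gold_label). One histogram per template."""
    hists = [defaultdict(Counter) for _ in TEMPLATES]
    for x, label in instances:
        for hist, tmpl in zip(hists, TEMPLATES):
            hist[tmpl(x)][label] += 1
    return hists

def predict(hists, x, default="A1"):
    # Cascade: return the most frequent label from the first histogram that matches.
    for hist, tmpl in zip(hists, TEMPLATES):
        counter = hist.get(tmpl(x))
        if counter:
            return counter.most_common(1)[0][0]
    return default
```

An instance whose exact lexical pair was never seen falls through to the syntactic templates, which is exactly the back-off behavior the cascade provides.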
The generative approach
One problem with the four-stage approach is that the later stages provide no feedback to the earlier ones. Thus, a frame chosen because of its high prior probability will not get corrected when we fail to find appropriate arguments for it. A generative model, on the other hand, does not suffer from this problem. The probability of the whole assignment, including predicates, arguments, and their labels, is evaluated together and the highest probability combination is chosen. Our generative model specifies the distribution of the following random variables: P is the lemma (stem+pos) of a candidate predicate. F is the frame chosen for the predicate (could be null). A_i is the argument label of word i with respect to a given predicate (could be null). W_i is the lemma (stem+pos) of word i. L_i is the syntactic dependency chain leading from word i to the given predicate (similar to Section 3.3).
The generative model
We consider each word in the sentence as a candidate predicate and use the joint distribution of the above variables to find the maximum probability F and A_i labels given P, W_i, and L_i. The graphical model in Figure 1 specifies the conditional independence assumptions we make. Equivalently, we take the following to be proportional to the joint probability of a particular assignment:

Pr(F | P) ∏_i Pr(A_i | F) Pr(L_i | F, A_i) Pr(W_i | F, A_i)

(Caption of Table 1: "Verb" in the column heading indicates verbal predicates, "Noun" indicates nominal predicates, "All" indicates all predicates. The numbers 1-4 in column headings indicate the 4 stages: (1) predicate identification, (2) predicate labeling, (3) argument identification, (4) argument labeling. The gold results assume perfect output from the previous stages. The highest number in each column is marked with boldface.)
Parameter estimation
To estimate the parameters of the generative model we used the following methodology: For Pr(F|P) we use the maximum likelihood estimate from the training data. As a consequence, frames that were never observed in the training data have zero probability. One exception is lemmas which have not been observed in the training data, for which each frame is considered equally likely.
For Pr(A_i|F) we also use the maximum likelihood estimate and normalize it using sentence length. For a given argument label we find the expected number of words in a sentence with that label for frame F. We divide this expected number by the length of the given sentence to find Pr(A_i|F) for a single word. Any leftover probability is given to the null label. If the sentence length is shorter than the expected number of arguments, all probabilities are scaled down proportionally.
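The sentence-length normalization of Pr(A_i|F) might look like the following sketch. The function name is our own, and the handling of the null label follows our reading of the text (leftover mass to null, proportional scaling when expectations exceed the sentence length):

```python
def per_word_arg_probs(expected_counts, sent_len):
    """expected_counts: {label: expected number of words with that label per
    sentence for a given frame F}. Returns per-word Pr(A_i = label | F),
    with the null label (None) receiving any leftover probability mass.
    """
    total = sum(expected_counts.values())
    # If the sentence is shorter than the expected number of arguments,
    # scale all probabilities down proportionally.
    scale = min(1.0, sent_len / total) if total > 0 else 0.0
    probs = {label: scale * e / sent_len for label, e in expected_counts.items()}
    probs[None] = 1.0 - sum(probs.values())  # null label gets the remainder
    return probs
```

For example, a frame expecting one A0 and one A1 in a 10-word sentence gives each word Pr(A0) = Pr(A1) = 0.1 and Pr(null) = 0.8.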
For the remaining two terms Pr(L_i|F, A_i) and Pr(W_i|F, A_i), using the maximum likelihood estimate is not effective because of data sparseness. The arguments in the million word training data contain about 16,000 unique words and 25,000 unique dependency chains. To handle the sparseness problem we smoothed these two estimates using the part-of-speech argument distribution, i.e. Pr(L_i|POS, A_i) and Pr(W_i|POS, A_i), where POS represents the coarse part of speech of the predicate.
Results and Analysis
Table 1 gives the F1 scores for the two models (4-stage and generative), presented separately for noun and verb predicates and the four stages of predicate identification/labeling and argument identification/labeling. In order to isolate the performance of each stage we also give their scores with gold input. The rest of this section analyzes these results and suggests possible improvements.
A hybrid algorithm: A comparison of the two algorithms shows that the 4-stage approach is superior in predicate and verbal-argument identification, and the generative algorithm is superior in the labeling of predicates and arguments and in nominal-argument identification. This suggests a hybrid algorithm where we restrict the generative model to take the answers for the better stages from the 4-stage algorithm (Noun1, Verb1, Verb3) as given. Tables 1 and 2 present the results for the hybrid algorithm compared to the 4-stage and generative models.
Parsing performance: In order to see the effect of syntactic parsing performance, we ran the hybrid algorithm starting with the gold parse. On the other hand, we find that the lexical features are essential for certain tasks. In labeling the arguments of nominal predicates, finding an exact match for the lexical pair guarantees a 90% accuracy. If there is no exact match, the 4-stage algorithm falls back on a syntactic match, which only gives a 75% accuracy.
Future work:
The hybrid algorithm shows the strengths and weaknesses of our two approaches. The generative algorithm allows feedback from the later stages to the earlier stages and the 4-stage machine learning approach allows the use of better features. One way to improve the system could be by adding feedback to the 4-stage algorithm (later stages can veto input coming from previous ones), or adding more features to the generative model (e.g. information about neighbor words when predicting F ). More importantly, there is no feedback between the syntactic parser and the semantic role labeling in our systems. Treating both problems under the same framework may lead to better results.
Another property of both models is the independence of the argument label assignments from each other. Even though we try to control the number of arguments of a particular type by adjusting the parameters, there are cases where we end up with no assignment for a mandatory argument, or multiple assignments where only one is allowed. A stricter enforcement of valence constraints needs to be studied. The use of smoothing in the generative model was critical; it added about 20% to our final F1 score. This raises the question of finding more effective smoothing techniques. In particular, the jump from specific frames to coarse parts of speech is probably not optimal. There may be intermediate groups of noun and verb predicates which share similar semantic or syntactic argument distributions. Identifying and using such groups will be considered in future work.
A cross sectional study to evaluate the relation between thyroid disorders and abnormal uterine bleeding in reproductive age group
Introduction: Abnormal uterine bleeding (AUB) is a common disorder occurring in reproductive age group females. It can be understood as bleeding from the uterus that falls outside the normal parameters, in the absence of structural defects in the genital tract. One of the most common associations with AUB is thyroid dysfunction. Hence this study aimed to determine the incidence of thyroid related disorders in AUB and also to assess the menstrual pattern. Material and Methods: 100 women suffering from AUB who presented to the OPD of the Gynecology department of SMS Medical College, Jaipur were recruited in the study. All females in the 19 to 45 years age group with abnormal uterine bleeding were included, excluding those with a previously known thyroid disorder, abortion history within 3 months, etc. Thyroid function tests were done in all, along with ultrasonography of the pelvic region. Statistical analysis was done. Results: The bleeding abnormality found in most of the women was heavy menstrual bleeding. Women who presented with thyroid dysfunction were 33%; 23% had subclinical hypothyroidism, 6% had hypothyroidism and 4% had hyperthyroidism. Conclusion: Abnormal uterine bleeding has a strong association with thyroid disorders. The most common type of disorder is subclinical hypothyroidism. Thus all patients with AUB must be evaluated for thyroid dysfunction.
Introduction
Abnormal Uterine Bleeding (AUB) means any bleeding that is not normal in amount, duration, frequency and cyclicity. 1 AUB is a very common as well as complicated presentation in the gynaecology outpatient department. It is seen in 15-20% of women from the commencement of menarche to menopause and has a great impact on the quality of life of these women. 2 Failure in finding the etiology, along with debilitating symptoms, mostly results in unnecessary surgical interventions, causing an increase in morbidity and mortality. Endocrinological dysfunctions, including thyroid disorders, play a major role in its etiopathogenesis. Thyroid hormones exert multiple effects on the human body, specifically on the development, metabolism, growth and functions of the major organ systems. 3 Looking to its effects on reproductive age group females, thyroid dysfunction leads to AUB, infertility, delayed puberty, recurrent miscarriages as well as premature menopause. 4 The mechanisms behind menstrual irregularities due to thyroid disorders are multiple. Some of these are alterations in TSH response, altered LH response and peripheral conversion of androgens to estrogen, TRH causing increased prolactin levels, altered SHBG and effects on the coagulation factors. 5,6 Irregularities in the menstrual cycle may accompany or precede clinically overt hypothyroidism or hyperthyroidism. 7 Many studies have shown that heavy menstrual bleeding is more commonly associated with hypothyroidism, whereas anovulation or oligomenorrhoea is common in those having hyperthyroidism. 8,9 This study was done to evaluate thyroid dysfunction in patients with AUB, to find out the incidence of thyroid disorders in AUB patients and also to study the menstrual pattern in thyroid disorders.
Material and Methods
This was a hospital based, cross sectional, descriptive observational study conducted in the department of Obstetrics and Gynaecology, SMS Medical College and attached hospitals, Jaipur, from April 2017 to April 2018. 100 women who presented to the gynaecology outpatient department with AUB were recruited in the study.
Inclusion Criteria
1. All females in 19 to 45 years of age group with abnormal uterine bleeding and those giving informed consent were included as subjects in proposed study.
Results
A total of 100 patients in the reproductive age group with AUB were recruited in the present study. Among the 100 women, the majority belonged to the age group of 30 to 39 years (39%) and the fewest belonged to the less than 20 years age group; 35% were in the age group of 20-29 years and 21% were above 40 years of age (Fig. 1). The commonest pattern of bleeding was heavy menstrual bleeding (45%), followed by heavy menstrual bleeding with frequent cycles (16%). Among the others, 15% presented with infrequent cycles, 11% had acyclical bleeding, 8% had frequent cycles and 5% had shortened cycles (Table 1). Among thyroid dysfunctions, the majority of the women belonged to the category of subclinical hypothyroidism (23%); 6% were hypothyroid and 4% were hyperthyroid (Fig. 2). Thyroid dysfunction was related to various types of bleeding abnormalities. It was commonest in patients with acyclical bleeding (63.63%), followed by women with infrequent cycles (40%). 37.5% of women with frequent cycles, 26.66% of women with heavy menstrual bleeding and 25% of women with heavy menstrual bleeding with frequent cycles had thyroid dysfunction (Table 2).
Discussion
Thyroid disorders are common in females, with subclinical hypothyroidism being the most common type. Menstrual irregularities are seen in both hyperthyroidism and hypothyroidism.
In this study most of the women were in the age group 30-39 years. A similar study by Mohapatra S et al 10 reported that the highest incidence of AUB was seen in the age group 30-39 years (39%). Parveen M et al 11 also observed that the majority of the women were in the age group 30-39 years (44%). Similar results were seen in the studies by Ali J et al, 12 Jinger SK et al 13 and George L et al. 14 In our study, 45% of women presented with the complaint of heavy menstrual bleeding, making it the most common pattern of AUB. It was followed by heavy menstrual bleeding with frequent cycles, seen in 16% of women. Parveen M et al 11 also found an incidence of 45% of heavy menstrual bleeding in their study, as did Ali J et al. 12 We observed that 33% of cases were diagnosed with thyroid dysfunction: 23% with subclinical hypothyroidism, 6% with hypothyroidism and 4% with hyperthyroidism. In the present study the majority of the women were euthyroid, followed by subclinical hypothyroid, hypothyroid and hyperthyroid in decreasing frequency. These results were consistent with the studies of Mohapatra S et al 10 and Parveen M et al. 11 In this study, among those with thyroid dysfunction, women with subclinical hypothyroidism (69.69%) outnumbered those with hypothyroidism (18.18%). This was similar to the results of the study by George L et al 14 and of another study 16 which concluded that 45.83% of euthyroid women and 41.4% of women with thyroid dysfunction complained of HMB. In our study the most common type of menstrual abnormality in women with subclinical hypothyroidism was heavy menstrual bleeding (39%); 25% of women with hyperthyroidism and 33% of women with hypothyroidism presented with heavy menstrual bleeding. Among the AUB patterns, 63.63% of women with acyclical bleeding and 26.66% of patients with HMB had thyroid dysfunction. These results were similar to the studies of Mohapatra S et al, 10 Parveen M et al 11 and Deshmukh PY et al. 18
Conclusion
Abnormal uterine bleeding is strongly associated with thyroid related disorders. Any abnormality in the menstrual cycle can be a presenting symptom of thyroid disorders; thus thyroid function tests must be evaluated in these patients. This can lead to early diagnosis as well as treatment, preventing unnecessary surgical interventions.
Motivation and information management as a tool of job satisfaction of employees in Nigeria
A human resources manager can succeed in motivating workers based on the information available to and used by him. To understand the critical importance of people in the organization is to recognize that the human element and the organization are synonymous: a well managed organization usually sees the average worker as the root source of quality and productivity gains. In order to make employees satisfied and committed to their jobs in the organization, there is a need for strong and effective motivation at the various levels. This paper attempts to explore the relationship between work motivation and job satisfaction. There is a positive relationship between information, adequate motivation and the satisfaction of workers. The paper aims to show the importance of information resources and the management of people at work as an integral part of the management process, and to reveal that information management is very important and is the life wire of the organization and of workers' job satisfaction.
INTRODUCTION
Motivation is an incentive given by the employer to employees to enable them to perform their jobs creditably. Nigeria has progressed from a basically agrarian society to a budding industrial society with a mixed level of educational and technological attainment; with the launching of its first satellite, the nation is on the way to boosting its technological status. The Nigerian satellite is the culmination of a long-held dream of a wholly owned earth-observing device capable of assisting the nation in dealing with its multifarious problems. Since the agrarian period the country has experienced many changes of leadership by way of coups and counter-coups, civil war and states creation, with all their attendant problems.
Conversely, motivation is a by-product of many factors. These factors can be galvanized internally or externally, depending on the disposition of the individual and the prevailing circumstances at any given time. In the present Nigerian situation, the rate of economic activity and the consequent high inflation rate have made money a relatively strong motivating factor.
Behaviour is both directed to, and results from, unsatisfied needs. The limited number of salary reviews in Nigeria, in corporate organisations as well, has brought about persistent expressions of dissatisfaction among workers. Private organisations and service industries depend largely on government budgets and prevailing policy to stay in the market, so salary reviews have invariably put considerable financial pressure on the management of these organisations. Incidentally, it is the lower category of workers that bears the burden of price increases, while the salaries of management staff are scarcely affected. Inflation ordinarily lowers people's standard of living, which in turn adversely affects productivity. Low productivity is in most cases a by-product of dissatisfaction, which can manifest itself in various ways, including indiscipline and general apathy. To contain these traits, it is important to understand both the individuals to be motivated and the society itself, because the factors that influence individuals differ from one societal set-up to another. A one-size-fits-all method could prove counterproductive, because adequate motivation, and consequently workers' productivity and job satisfaction, is a product of a person's set of needs, goals, drives and experience. By implication, the information management factors which govern motivation, job satisfaction, workers' productivity and attitude to work differ from one society to another. However, the entire issue of adequate motivation and workers' productivity is embedded in the various theories of motivation. People are motivated by various factors at different times; the first factor is the combination of individuals' perceptions of the expectations other people have of them and their own expectations of themselves.
People work together in large organizations, such as a bank or a factory, where they are expected to follow orders they may not approve of. In addition, they may have to obey instructions from supervisors they had no hand in selecting. In this type of situation the worker may have no opportunity for self-expression. The basic question that arises is how to create a situation in which workers can satisfy their individual needs while working towards organizational goals. How can workers in service organisations who feel that their salaries are low compared with their contemporaries in the private sector be motivated? The management of people at work is a very important aspect of attaining organizational objectives; with particular reference to Nigeria, this paper examines it in relation to information and job satisfaction.
REVIEW OF LITERATURE
Along with perception, personality, attitudes, and learning, motivation is a very important part of understanding behaviour. Luthans (1998) asserts that motivation should not be thought of as the only explanation of behaviour, since it interacts with and acts in conjunction with other mediating processes and with the environment. Luthans has stressed that, like other cognitive processes, motivation cannot be seen; all that can be seen is behaviour, and behaviour should not be equated with its causes. While recognizing the central role of motivation, Horenstein (1993) states that many recent theories of organizational behaviour find it important for the field to re-emphasize behaviour. Definitions of motivation abound; one thing they have in common is the inclusion of words such as "desire". Luthans (1998) defines motivation as "a process that starts with a physiological deficiency or need that activates behaviour or a drive that is aimed at a goal or incentive". Therefore, the key to understanding the process of motivation lies in the meaning of, and relationships among, needs, drives, and incentives. Relatedly, Misra (1991) states that, in a systems sense, motivation consists of these three interacting and interdependent elements: needs, drives, and incentives.
Management has long believed that organizational goals are unattainable without the enduring commitment of members of the organization. Motivation is a human psychological characteristic that contributes to a person's degree of commitment (Stoke, 1999). It includes the factors that cause, channel, and sustain human behaviour in a particular committed direction. Stoke, in Adeyemo (2000), goes on to say that there are basic assumptions about motivation practices by managers which must be understood. Motivation is commonly assumed to be a good thing: one cannot feel very good about oneself if one is not motivated. Motivation is one of several factors that go into a person's performance (e.g., as a librarian); factors such as ability, resources, and the conditions under which one performs are also important. Managers and researchers alike assume that motivation is in short supply and in need of periodic replenishment. Motivation is a tool which managers can use in organizations: if managers know what drives the people working for them, they can tailor job assignments and rewards to what makes these people "tick". Motivation can also be conceived of as whatever it takes to encourage workers to perform by fulfilling or appealing to their needs. To Olajide (2000), "it is goal-directed, and therefore cannot be outside the goals of any organization whether public, private, or nonprofit".
The relationship between job satisfaction and motivation at work has been one of the most widely researched areas in the field of information management across different professions, but in Pakistan very few studies have explored this concept, especially among personnel-sector employees. According to Khan (1997), in the current human resources environment, organizations in all industries are experiencing rapid change, which is accelerating at an enormous speed. Information management enables companies to recognize that the human factor is becoming much more important for organizational survival, and that business excellence will only be achieved when employees are excited and motivated by their work. In addition, difficult circumstances, such as violence, tragedy, fear, and job insecurity, create severe stress in employees and result in reduced workplace performance (Klein, 2002). Through information, business has come to realize that a motivated and satisfied workforce can contribute powerfully to the bottom line. Since employee performance is a joint function of ability and motivation, one of management's primary tasks is therefore to motivate employees to perform to the best of their ability (Nazir, 1998).
The motivation of workers in the organizational sector largely depends on the social, economic, and cultural circumstances of the country. If a worker does not receive a competitive salary, he will face problems maintaining his family's standard of living. The pressure of the family will not let this individual show his full potential; he will be stressed, and the organization's efficiency will suffer. It is therefore very important to find out the variables that contribute to motivation at work and job satisfaction. The job satisfaction of workers, who hold an important place as forerunners of the society, affects the quality of the service rendered.
According to Sempane et al. (2002), job satisfaction relates to people's own evaluation of their jobs, given adequate information, against the issues that are important to them. Job satisfaction is regarded as related to important employee and organizational outcomes, ranging from job performance to health and longevity (Spector, 2003). The nature of the environment outside the job directly influences a person's feelings and behaviour on the job (Ting, 1997). Judge and Watanabe (1993) reinforced this idea by stating that there is a positive and reciprocal relationship between job and life satisfaction in the short term, and that over time general life satisfaction becomes more influential in a person's life. Akintoye (2000) emphasized that people spend one third to one half of their waking hours at work for a period of 40 to 45 years, and that this is a very long time to be frustrated, dissatisfied and unhappy, especially since these feelings carry over to family and social life and affect physical and emotional health.
Motivation
Motivation is defined as the process that initiates, guides and maintains goal-oriented behaviours. Motivation is what causes us to act, whether it is getting a glass of water to reduce thirst or reading a book to gain knowledge. It involves the biological, emotional, social and cognitive forces that activate behaviour. In everyday usage, the term motivation is frequently used to describe why a person does something: motivation is the driving force by which humans achieve their goals.
Motivation is said to be intrinsic or extrinsic. The term is generally used for humans, but it can also be used to describe the causes of animal behaviour; this work refers to human motivation. According to various theories, motivation may be rooted in a basic need to minimize physical pain and maximize pleasure; it may include specific needs such as eating and resting, or a desired object, goal, state of being or ideal; or it may be attributed to less apparent reasons such as altruism, selfishness, morality, or avoiding mortality. Conceptually, motivation should not be confused with either volition or optimism.
Components of motivation
There are three major components of motivation: activation, persistence and intensity. Activation involves the decision to initiate behaviour at work, such as taking up a job task in the organization. Persistence is continued effort toward a goal even though obstacles may exist, such as pursuing an organizational goal that requires a significant investment of time, energy and resources. Finally, intensity can be seen in the concentration and vigour that go into pursuing a goal in the organizational set-up.
Intrinsic motivation
Intrinsic motivation refers to motivation that is driven by an interest in or enjoyment of the job task itself, and exists within the individual rather than relying on any external pressure. Research has found that it is usually associated with job achievement and enjoyment by workers. Workers are likely to be intrinsically motivated if they attribute their job performance to factors under their own control (e.g., the effort expended), believe that they can be effective agents in reaching desired goals (i.e., the results are not determined by luck), and are interested in mastering a specific job rather than just rote-learning enough to achieve good results.
Extrinsic motivation
Extrinsic motivation comes from outside the individual. Common extrinsic motivations are rewards such as money and satisfaction, as well as coercion and the threat of punishment. Competition is in general extrinsic because it encourages the performer to win and beat others, not to enjoy the intrinsic rewards of the activity.
Self-control
The self-control of motivation is increasingly understood as a subset of emotional intelligence; a person may be highly intelligent according to a conservative definition (as measured by many intelligence tests), yet unmotivated to dedicate this intelligence to certain tasks. Drives and desires can be described as a deficiency or need that activates behaviour aimed at a goal at work or at an incentive. They are thought to originate within the individual and may not require external stimuli to encourage the behaviour. Basic drives can be sparked by deficiencies such as hunger, which motivates a person to seek food, whereas more subtle drives might be the desire for praise and approval, which motivates a person to behave in a manner pleasing to others.
Information management
Information management (IM) emerged in the mid-1980s and has by now established itself; it got its name from the Association of Special Libraries and Information Bureaux, which changed its name to the Association for Information Management. Information management is the means by which human resources managers maximize the efficiency with which the organization plans, collects, processes, controls, disseminates and uses its information, and through which they ensure that the value of that information is identified and exploited to the fullest extent.
There is a concept called Total Quality Management (TQM) which has become popular, particularly in the workplace. What is TQM? It can be defined as a human resources management philosophy embracing all activities through which the needs and expectations of the workers and the community, and the objectives of the organization, are satisfied in an efficient and cost-effective way, maximizing the potential of employees in a continuing drive for work improvement. In this regard, IM and TQM have evolved from a common human resources management philosophy whose aim was to increase worker productivity or cut costs regardless of anything else. This approach is now changing to one which is more quality-of-work oriented; in the case of libraries and information centres, it is user-oriented quality service.
Information is vital for the sustainable productivity of Nigerian workers. It is very important for human resources managers in any organization when making decisions, making plans, controlling activities and forecasting, in order to motivate workers in the organization. Information, formal or informal, has to be managed by the human resources manager. Information is now seen as a valuable resource within many private and public organizations: an organizational resource for promoting motivation and workers' job satisfaction. It is a self-regenerative resource and a key economic element in achieving organizational objectives. Information can be accessed by anyone, from anywhere, at any time, yet remain unchanged and undiminished. This requires intensive use of information technologies by the organisation to motivate its workers towards job satisfaction. It is a resource that, if properly managed and utilized, can stimulate innovation, speed product development, raise levels of worker productivity, ensure consistent standards of quality and, through all of these means, raise the relative level of competitiveness and workers' job satisfaction.
Management of information by the human resources manager plays an important role in both the public and private sectors. In the private sector, information is particularly important for the productivity of Nigerian workers. In many organizations Decision Support Systems (DSS) are used as part of the human resources manager's toolkit to reduce risk. For instance, an application for a personal loan can be approved or disapproved even by a middle manager, provided the manager has access to a DSS to obtain a "credit score": applicants who score above a certain level receive the loan. The success of such a system depends entirely on the availability of information. Extensive use of information is also made in marketing; in particular, information is essential for promoting worker productivity. The long-term success of private-sector organizations is determined by their capacity to use and manage information to reduce costs, to extend their range of services, and to become more sensitive to customers' and workers' demands and to workers' job satisfaction.
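The threshold rule described above for a loan-approval DSS can be sketched in a few lines; the scoring function, its weights and the cut-off value below are illustrative assumptions, not the workings of any real system.

```python
# Minimal sketch of the credit-score threshold rule described above:
# the DSS approves a loan only when the applicant's score clears a cut-off.
# The linear weights and the 650 cut-off are invented for illustration.

def credit_score(income: float, years_employed: float, defaults: int) -> float:
    """Toy linear score; the weights are hypothetical."""
    return 0.01 * income + 20 * years_employed - 150 * defaults

def loan_decision(score: float, cutoff: float = 650.0) -> str:
    """Approve automatically above the cut-off, otherwise escalate."""
    return "approve" if score >= cutoff else "refer to manager"

score = credit_score(income=45_000, years_employed=10, defaults=0)
print(score, loan_decision(score))  # → 650.0 approve
```

The point the text makes holds in the sketch as well: the quality of the decision is only as good as the information (income, employment history, defaults) fed into the score.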
Information management makes a similar impact on the public sector. Public servants are now realizing that information can change the way they work, quite dramatically. Utilizing information at the appropriate time and in the right manner enables them to improve their efficiency and their work in ways similar to those used in private-sector organisations: through automation of their daily routines, through decision support systems and through electronic financial transactions. In other sectors, such as education, health and social security, public servants handle information to enhance job satisfaction, and their decisions are based on the information available in circulation.
Job characteristics
When dealing with job satisfaction, look at core job characteristics and factors. The most popular measure of job satisfaction assesses how employees feel about their jobs along five dimensions: the type of work itself, pay, promotional opportunities, supervision, and co-workers (Smith et al., 1969).
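The five-dimension measure referred to above (Smith et al., 1969) can be illustrated with a small scoring sketch; real instruments such as the Job Descriptive Index score keyed items, so the simple 1-5 averaging below is an assumption for illustration only.

```python
# Illustrative scoring of the five facets named above (Smith et al., 1969).
# Averaging 1-5 ratings per facet is a simplification of real instruments.

FACETS = ("work itself", "pay", "promotion", "supervision", "co-workers")

def facet_profile(ratings: dict) -> dict:
    """Return the mean rating per facet from lists of 1-5 item responses."""
    missing = set(FACETS) - set(ratings)
    if missing:
        raise ValueError(f"missing facets: {missing}")
    return {facet: sum(items) / len(items) for facet, items in ratings.items()}

profile = facet_profile({
    "work itself": [4, 5, 4],
    "pay": [2, 3],
    "promotion": [3, 3, 2],
    "supervision": [5, 4],
    "co-workers": [4, 4, 5],
})
print(profile["pay"])  # → 2.5
```

A profile like this makes the point of the facet approach concrete: an employee can be satisfied with the work itself and co-workers while dissatisfied with pay, and each facet can be managed separately.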
Social comparison
The social-information-processing approach to job satisfaction assumes that attitudes are determined, in part, by the attitudes of those around us (Jex and Spector, 1989). It relates attitude to how individuals compare themselves with others in the workplace: individuals can bring others down by whining, or motivate them, depending on their attitudes.
Disposition
The most recent explanation for job satisfaction is that some employees are more prone to be satisfied or dissatisfied, regardless of the nature of the job or the social environment. Disposition is the mood and temperament of an individual, and it tells us whether he or she is satisfied or dissatisfied with the job.
Performance
Information resources management looks at how individuals perform in their jobs. Job performance and its relation to satisfaction have been studied for over 40 years. Understanding employee performance enables an organization to find ways to keep performance at or above its standards rather than letting it fall below them.
Absenteeism
This looks at individuals and why they may be absent from their jobs. Organizations give employees a certain number of days off work. Under job satisfaction, it is important to know why employees are taking time off work: is it for vacation, or may it be the working conditions at work?

Turnover

This looks at the rate at which employees come and go from their jobs. It is an important indicator of whether employees are considering other options: if they are dissatisfied, they will likely seek similar or other jobs that make them satisfied.
Job characteristics
Research has shown that job satisfaction is determined by the nature and characteristics of jobs (Spector and Jex, 1991). Smith et al. (1969) developed the five facets of job satisfaction that assess how employees feel about their jobs.
Type of work itself
This includes the area of specialization, whether professional or technical work. The type of job determines the wages, while the conditions attached to the job determine the satisfaction the worker will derive from it. The worker's load must be specified, and it should be commensurate with the pay package.
Pay or wages attached
The amount of wages to be paid should be specified, along with the opportunities open to the employee. Pay and benefits should be satisfactory to workers. This includes whether the employee will receive, on retiring at the retirement age of 65, a pension based on his pay for life and full medical benefits.
Promotional opportunities
This is very crucial to the job satisfaction of any employee within an organization. Competition for promotion should not be extremely difficult. Promotion should be granted to employees who engage not merely in office politics and social networking but in hard work with their supervisors; such employees should also be recognized as best workers of the year.
Supervision
Supervision takes place in a work environment that consists of the office building and premises, furniture and the physical surroundings in which workers function. Both the office and the work environment affect the efficiency, morale, health and attitude of workers. The supervisory manager is a change agent who aims to control, guide and direct work activities. A supervisor ensures that performance meets the laid-down standard; he must guide and correct workers in order to ensure accurate and prompt performance. He should be able to detect wrong or unsatisfactory performance and instill the right, satisfactory approach to motivate workers in the organization.
Co-workers
Workers should be given equal opportunity to perform their tasks in order to maximize their value. People need to work and interact together within an organization to facilitate and improve production. They should desire to be liked, maintain pleasant social relationships, enjoy a sense of intimacy and understanding, be ready to help others in trouble, and enjoy friendly interaction with their co-workers.
CREATING JOB SATISFACTION
So, how is job satisfaction created? What are the elements of a job that create it? Information management can help to create job satisfaction by putting systems in place that ensure workers are challenged and then rewarded for being successful. Organizations that aspire to a work environment that enhances job satisfaction need to incorporate the following:

1. Flexible work arrangements, possibly including telecommuting
2. Training and other professional growth opportunities
3. Interesting work that offers variety and challenge and allows the worker to put his or her signature on the finished product, most especially in cataloguing
4. Opportunities to use one's talents and to be creative
5. Opportunities to take responsibility and direct one's own work
6. A stable, secure work environment that includes job security and continuity
7. An environment in which workers are supported by an accessible supervisor who provides timely feedback, as well as congenial team members
8. Flexible benefits, such as child-care and exercise facilities
9. Up-to-date technology
10. Competitive salary and opportunities for promotion

Probably the most important point to bear in mind when considering how information aids job satisfaction is that many factors affect job satisfaction, and that what makes workers happy with their jobs varies from one worker to another and from day to day. Apart from the factors mentioned above, job satisfaction is also influenced by the employee's personal characteristics, the manager's personal characteristics and management style, and the nature of the work itself. Managers who want to maintain a high level of job satisfaction in the workforce must try to understand the needs of each of its members. For example, when creating work teams, managers can enhance worker satisfaction by placing people with similar backgrounds, experiences, or needs in the same workgroup. They can also enhance job satisfaction by carefully matching workers with the type of work: a person who does not pay attention to detail would hardly make a good inspector, and a shy worker is unlikely to be a good salesperson. As much as possible, managers should match job tasks to employees' personalities.
Managers who are serious about the motivation and job satisfaction of workers can also take other deliberate steps to create a stimulating work environment. One such step is job enrichment: a deliberate upgrading of responsibility, scope, and challenge in the work itself. Job enrichment usually includes increased responsibility, recognition, and opportunities for growth, learning, and achievement, and large organizations have used job-enrichment programs to increase employee motivation and job satisfaction. Good management has the potential to create high morale, high productivity, and a sense of purpose and meaning for the information organization and its employees. Empirical findings by Ting (1997) show that job characteristics such as pay, promotional opportunity, task clarity and significance, and skills utilization, as well as organizational characteristics such as commitment and relationships with supervisors and co-workers, have significant effects on job satisfaction. These job characteristics can be carefully managed to enhance job satisfaction.
WORKERS' ROLES IN JOB SATISFACTION
If job satisfaction is a worker benefit, surely the worker must be able to contribute to his or her own satisfaction and well-being on the job. The following suggestions can help a worker find personal job satisfaction:

1. Seek opportunities to demonstrate skills and talents. This often leads to more challenging work and greater responsibilities, with attendant increases in pay and other recognition.
2. Develop excellent communication skills. Employers value and reward excellent reading, listening, writing, and speaking skills.
3. Acquire new job-related knowledge that helps you perform tasks more efficiently and effectively. This will relieve boredom and often gets one noticed.
4. Demonstrate creativity and initiative. Qualities like these are valued by most organizations and often result in recognition as well as increased responsibilities and rewards.
5. Develop teamwork and people skills. A large part of job success is the ability to work well with others to get the job done.
6. Accept diversity in people. Accept people with their differences and their imperfections, and learn how to give and receive criticism constructively.
7. See the value in your work. Appreciating the significance of what one does can lead to satisfaction with the work itself. This helps to give meaning to one's existence, thus playing a vital role in job satisfaction.
STRATEGIES OF MOTIVATING WORKERS
The hard work and loyalty of employees are key factors in the growth and progress of any organization or company. Before employing anyone, we need to check everything related to him, such as his personal and professional background, and conduct various interviews, oral as well as written, with the applicant. After doing all this, we will have an employee who has all the qualities and knowledge required by the organization. An employee may still want to leave a job because of a lack of motivation in the company, so it is very necessary to motivate and retain employees at regular intervals. Here are some effective strategies for motivating and retaining employees:
Working atmosphere
One effective strategy for motivating and retaining employees is a healthy office working atmosphere, because employees draw a lot of motivation from a good working environment. The working place should be attractively designed and maintained, and the atmosphere should be free from politics and ill will. Every employee should be happy about his colleagues' growth, and everyone must take the initiative in doing any type of work. The organization should always be clear in making any rule or policy.
Perfect compensation should be rewarded
Compensation, benefits and incentives should be designed around the performance of the employee and awarded yearly or after a certain number of months, as this encourages employees to give their best. An organization is set up to offer quality service to its customers, to gain profit and reputation in the market, and to make progress day by day; to keep the capable employees who contribute to all of this, an employer should always see to their needs and reward their performance with money or promotion. This strategy helps to encourage employees' honesty, efficiency, courtesy, and professional pride.
Employees should be independent
If employees can work in their own style, the result can be more beneficial than work done under imposed conditions, so it is very necessary that they be independent, as this makes the working environment healthy and light. They should be free to give their suggestions on any issue related to the work.
Work should be recognized
If an employee's work is recognized by human resources management and he receives appraisal or acknowledgment, he is motivated to do the work with more sincerity.

Instead of a monetary reward, even a few words of praise from the employer go a long way in motivating an employee. The employer should observe his employees regularly, and when he sees good work done by an employee he should praise that employee; only then will the employee be motivated to repeat the performance again and again.
Give support
Another effective strategy for motivating and retaining employees is for the organization to be ready to support employees whenever they have a query or problem. Support can be given by telephone, by email or on site. This keeps employees doing their work with full concentration.
Frequent communication
There should be a proper communication atmosphere in the organization, because a communication gap can prove to be a major problem for any company. If there is proper communication between an employee, his colleagues and his employer, he will be able to raise his problems and concerns on various issues. Communication can take place through meetings, training and daily dialogue. Nowadays email has become the easiest means of communication, but face-to-face communication from an employer has more impact on the employee.
Provide little fun
Another step in effective strategies for motivating and retaining employees is to keep the work atmosphere light and full of fun, because people like to work in an enjoyable atmosphere. This kind of environment keeps employees motivated and doing their work with full enthusiasm.
Give respect to the workers
Do not forget to give respect to the employees of the organization. Always remember that, when given respect, workers are motivated to work beyond their limits.
Sense of responsibility
Everyone is wholly responsible for his own actions. Giving an employee the power to take decisions makes him sincere towards his work. This is another good way to motivate and retain employees.
Conclusion
The study shows that adequate work motivation improves job satisfaction. When employers are caring and supportive and focus their attention on motivating factors, the outcome is more positive and committed employees. Motivation is a basic psychological process, along with perception, personality, attitudes and learning. If employers have adequate information at their disposal and utilize it, the result of motivating their workers will be positive.
Hence, a good working atmosphere and proper training are very necessary. There should be support, and timely benefits should be awarded to employees to encourage them to give their best to the organization's development and progress. These benefits can be given in the form of money or promotion; even a few words of praise will motivate an employee to work enthusiastically. The working atmosphere should be light and full of fun, as this helps workers do their work with enjoyment. An employer should have full faith in his employees, and they must be free to take decisions on various issues of the company. The employer's management and leadership skills create a working atmosphere in which employees feel comfortable, confident and motivated. By these techniques, an employer can not only motivate his employees but also easily retain them for a long time.
|
2019-01-03T04:16:05.497Z
|
2015-10-14T00:00:00.000
|
{
"year": 2015,
"sha1": "ce5b57539414d0cae31d233f9bc0a5d9c2917bf0",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJBM/article-full-text-pdf/D612AD355624.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ce5b57539414d0cae31d233f9bc0a5d9c2917bf0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
119427947
|
pes2o/s2orc
|
v3-fos-license
|
Tunneling of Macroscopic Universes
The meaning of 'tunneling' in a timeless theory such as quantum cosmology is discussed. A recent suggestion of 'tunneling' of the macroscopic universe at the classical turning point is analyzed in an anisotropic and inhomogeneous toy model. This 'inhomogeneous tunneling' is a local process which cannot be interpreted as a tunneling of the universe.
Timelessness and tunneling
Quantum gravity and quantum cosmology have the reputation of not describing any accessible experiments and of not having any consequences after the Planck era. However, it has been suggested that quantum cosmology could be responsible for some phenomena of everyday physics, such as the arrow of time and the classical appearance of our universe [1,2], the smallness of the cosmological constant [3,4], or the onset of inflation [5,6].
As a prominent subject over the years, tunneling processes have been considered for different purposes in quantum cosmology. In a recent paper, Dabrowski and Larsen discussed the tunneling of a recollapsing universe at the classical turning point into an expanding state with increased scale factor [7]. Similar models have been studied e.g. also by [8] and, for the early stage of the universe, by [5,6]. In this letter I comment on the meaning of 'tunneling' in quantum cosmology, and the model of [7] is substantially generalized.
The Wheeler-DeWitt equation HΨ = 0 which governs quantum cosmology may be regarded as analogous to the stationary Schrödinger equation. The similarity is particularly striking for the popular one-dimensional models in which the universe is described by the scale factor a only. The universe then resembles a particle in one dimension and the Wheeler-DeWitt equation reads [−∂_a² + V(a)]Ψ = 0, with a ∈ ℝ⁺. However, this similarity is misleading since, among other differences, there exists an external time parameter in ordinary quantum mechanics, in contrast to quantum cosmology.
The quantum mechanical time parameter is crucial even for the most simple examples of tunneling. Consider the reflection and tunneling of a wave function at a square potential barrier. This situation is described by superposing an ingoing wave with a reflected wave in one free region, while in the free region on the other side of the potential barrier the wave function consists solely of an outgoing wave, the amplitude of which determines the tunneling probability. The concepts of in- or outgoing waves are justified either by considering a wave packet, superposing different energy eigenstates the sum of which resembles a particle moving in time.
(Due to time-reparametrization invariance there is only the 'zero energy solution' to deal with in quantum cosmology. In one dimension it is thus impossible to form wave packets.) Or, more simply, by observing that the crests of the solitary waves move with ±kx − ωt = 2πn. The necessity of a time parameter is, however, clear from the outset, since the very notion of tunneling assumes a state changing in time: while initially there is no particle in some region (and classically there never will be one), there is a finite probability to find one subsequently.
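The role of the square-barrier example above can be made concrete with a small numerical sketch. The following uses the standard textbook transmission coefficient of ordinary quantum mechanics (not quantum cosmology) for a particle of energy E < V0 hitting a barrier of height V0 and width L, in units ħ = m = 1; parameter values are illustrative only:

```python
import math

def transmission(E, V0, L):
    """Transmission coefficient T for a square barrier with E < V0 (units hbar = m = 1).

    Standard result: T = [1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))]^(-1),
    with kappa = sqrt(2 (V0 - E)).
    """
    kappa = math.sqrt(2.0 * (V0 - E))
    return 1.0 / (1.0 + V0**2 * math.sinh(kappa * L)**2 / (4.0 * E * (V0 - E)))

# The tunneling probability is exponentially suppressed with barrier width:
for L in (1.0, 2.0, 4.0):
    print(L, transmission(E=1.0, V0=2.0, L=L))
```

This exhibits the exponential suppression with barrier width that underlies all WKB-type estimates used later in the letter; as the text stresses, interpreting such a number as a probability of an 'event' already presupposes an external time and a theory of measurement.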
It is worthwhile to point out that a tunneling probability means the probability of a tunneling 'event' of a particle and thus implies a measurement. The concept of tunneling thus presupposes both a time parameter and a theory of measurement. Concerning the first point, it does not help that there is a conserved Klein-Gordon current in quantum cosmology in more than one dimension, since its sign cannot be fixed in the absence of an external time. Denoting something as outgoing, as e.g. in the definition of the 'tunneling wave function' [9], would fix a direction on the configuration space completely arbitrarily.
In the case of a high and broad potential barrier the situation simplifies, since the wave function in the forbidden region can be approximated by the exponentially suppressed solution only. In quantum mechanics the ratio of the (squared) wave function at the beginning and at the end of the forbidden region gives the tunneling probability. The same procedure is usually adopted in quantum cosmology as e.g. in [5] in order to calculate the tunneling probability of a bubble of false vacuum between two classically allowed regions in a 'free lunch' process. Accepting for the moment this interpretation, one arrives at similar conclusions in the semiclassical limit of quantum cosmology as in ordinary quantum mechanics.
Usually, the situation is more complicated than in the examples above, as e.g. in alpha decay, when one of the two classically allowed regions is restricted to a finite region in space. But while in that case it is still possible to have a purely outgoing wave outside the nucleus, this breaks down if both classically allowed regions are finite. (Dabrowski and Larsen used this kind of potential for analyzing the tunneling probability at the classical turning point of a FRW universe with several matter sources.) In Fig. 1 the first situation is depicted by the dashed line while the case with two bounded regions is shown by the solid line. In order to calculate the tunneling probability it is thus not sufficient to consider stationary waves only. The calculation is much more involved, as can be seen e.g. by the deviation from the exponential decay law for nuclear decay in a box [10]. Without the aid of the external time parameter the situation in quantum cosmology is quite unclear. One cannot even use the above mentioned comparison of the wave function at both ends of the potential barrier as a formal tool, since there will be no purely exponentially suppressed solutions.

[Fig. 1 caption: The solid line shows the potential of the calculation of Dabrowski and Larsen. The second classically allowed region is due to domain walls and a negative cosmological constant. In order to get a 'tunneling universe', obviously, the ascending part of the potential is not necessary. In Sec. 2 a potential is used which is depicted by the dashed line; its decrease is due to a positive cosmological constant.]
In view of the aforementioned problems I will use the notion 'tunneling' only as a formal notion, provided there exists a purely exponentially decaying solution: the ratio of the squared wave function at the beginning and at the end of the exponential region is defined as the 'tunneling probability'. (Note that the exponentially increasing wave function will also be a solution ('reversed tunneling'), as will arbitrary superpositions of these two basic solutions.) This formal concept may only be regarded as corresponding to a 'real process' if there is (at least) a time parameter and a theory of measurement in quantum cosmology. In more complicated examples, such as the one of Fig. 1, one furthermore has to be rather careful with the calculation of the probabilities. One may try to circumvent some of these difficulties by introducing a semiclassical time parameter, e.g. due to a decoherence process or to a Born-Oppenheimer type of approximation. Usually this point of view is put forward by mentioning that the tunneling occurs in a region where the semiclassical approximation should hold, although Kiefer and Zeh have argued that this argument is not sufficient [11]. If, however, decoherence could be used to define a semiclassical time, it is expected to simultaneously suppress the tunneling process by a Zeno-type effect [12]. I will come back to these issues in Sec. 3 when the results of the calculations in the toy model are discussed. As frequently stressed above, the notion of tunneling makes sense only if there is a time parameter. Since this is certainly not the case in the Planck era, one cannot sensibly speak of a 'tunneling from nothing' of the universe, which is frequently invoked as a quantum alternative to the big bang.
The preceding considerations implicitly assume the so-called naïve interpretation of the wave function, which itself relies on the similarity of the Wheeler-DeWitt equation with the Schrödinger equation. That is, I assume the wave functions to be normalizable on the whole (unconstrained) configuration space, or that at least a conditional probability can be used. Any other interpretation in quantum cosmology which works on the reduced phase space or isolates a time parameter is meaningless in the example of the FRW universe with phenomenological matter, because in these interpretations there is but one physical state. But even for more realistic examples in which the time parameter is given by some function of the volume, it does not make sense to compare volumes. As already mentioned above, even if a physical time has been introduced, one has to establish a theory of measurement in quantum cosmology before one could sensibly speak of tunneling.
An anisotropic and inhomogeneous model
The Kantowski-Sachs model, as the most simple anisotropic and even inhomogeneous model, has the advantage that it can be solved exactly. The homogeneous version of the model combines spherical symmetry with a translational symmetry in the 'radial' direction. (First the homogeneous model is discussed before the translational symmetry is relaxed, see below.) The spacelike hypersurfaces of constant time are therefore cylinders. Here b(t), z(t) ∈ ℝ⁺; b(t) is the surface measure of the two-spheres with metric dΩ², and z(t) measures the spacelike distance between them; r is the radial coordinate.
The model with positive cosmological constant Λ and pressureless dust is considered here. It is well known that it is only the presence of matter which turns the disklike singularity (z → 0) from a mere coordinate singularity, indicating the incompleteness of the model, into a curvature singularity. The dust is described by the parameter z_m = ρzb² = const (analogous to a_m = ρa³ in the FRW model). For homogeneous models this approach is essentially equivalent to a more sophisticated one which starts from a Lagrangian for the dust degrees of freedom, see e.g. [13] and the literature cited therein. Although dust as a matter source in quantum cosmology is unsatisfactory, it does here lead to a toy model which is calculable and complete. In addition, the present context of a macroscopic universe justifies this phenomenological description.
The following account of the classical dynamics of this well known model is rather cursory. More details can be found e.g. in [14,15] and the literature cited therein. The time parameter used is the one equivalent to conformal time in the FRW model (N = b). The dynamics is determined by the Hamiltonian constraint (3) (written in the configuration variables and their velocities) and by one of the equations of motion, Eq. (4) for ḃ; there the equation of motion for b(t) has already been integrated, with b_m as an arbitrary constant of motion. The equation of motion for the scale factor of the closed FRW model is identical to Eq. (4) (with a(t) in place of b(t)), but the corresponding constant a_m is fixed by the matter content. In the Kantowski-Sachs model the matter content instead fixes the constant of integration of z(t), as indicated by the notation.
The classically forbidden regions in this model are determined by Ṽ_bm(b) ≤ 0. Note that there is no forbidden region on configuration space due to the Hamiltonian (3), since its kinetic term is indefinite. Consequently, the above defined forbidden region is defined for each classical solution separately, because each solution is uniquely represented by the 'effective mass' b_m. There are two classically allowed regions only if the cosmological constant is positive and smaller than the Einstein value: 0 < Λ < 4/(9b_m²). This potential, which is depicted by the dashed line in Fig. 1, is similar to the situation in [7] (see the solid line in the same figure), except that it is not increasing for large values of b.
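The two-allowed-regions condition can be checked with a small numerical sketch. Since the explicit form of Ṽ_bm(b) is not reproduced in this extraction, the sketch assumes (my reconstruction, chosen to be consistent with the quoted bound 0 < Λ < 4/(9b_m²) and with the closed-FRW analogue) Ṽ_bm(b) = b_m b − b² + (Λ/3)b⁴, so that the turning points are the roots of f(b) = (Λ/3)b³ − b + b_m:

```python
import math

def turning_points(Lam, bm, lo=-50.0, hi=50.0, n=20000):
    """Real roots of f(b) = (Lam/3) b^3 - b + bm via grid scan plus bisection.

    Hypothetical effective potential: V(b) = bm*b - b**2 + (Lam/3)*b**4 = b * f(b),
    so the nonzero turning points are exactly the roots of f.
    """
    f = lambda b: (Lam / 3.0) * b**3 - b + bm
    roots = []
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    for a, c in zip(xs, xs[1:]):
        if f(a) * f(c) < 0:          # sign change brackets a root
            for _ in range(80):      # bisection refinement
                m = 0.5 * (a + c)
                if f(a) * f(m) <= 0:
                    c = m
                else:
                    a = m
            roots.append(0.5 * (a + c))
    return roots

bm = 1.0
Lam = 0.3                            # satisfies 0 < Lam < 4/(9*bm**2) = 0.444...
roots = turning_points(Lam, bm)
print(sorted(roots))                 # three simple roots: one negative, two positive
```

For Λ inside the quoted window the cubic indeed has three distinct simple real roots, two of them positive, matching the later statement that the third zero is negative and physically irrelevant; the exact coefficients of the potential are an assumption here.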
The one-dimensionality of Eq. (4) suggests, furthermore, the possibility of an inhomogeneous z(t, r). Geometrically, this looks reasonable because a cylinder is defined only by the homogeneity of the surface measure b, while an inhomogeneous spacelike distance z between the two-spheres would not deform it. It turns out that an inhomogeneous z(t, r) is even dynamically consistent for pressureless dust as a matter content, as was first noticed by Ellis [16]. A rigorous proof is possible by using the equations of motion for the general spherically symmetric model, as is performed in the appendix. There it follows directly, since z′ is always tied to b′ and thus does not appear in the Kantowski-Sachs model. Due to this absence of any spatial derivatives, states at different points do not interact: the Hamiltonian (3) thus remains essentially unchanged. One thus has a one-parameter set of homogeneous solutions b = b(t), which then uniquely determines the solution z = z(t, r), the inhomogeneity of which is due to the inhomogeneity of the dust.
In the recollapsing region one gets a qualitatively correct picture of the dynamics by considering the exact solutions for Λ = 0, with t ∈ [0, 2π], where K is a physically meaningless 'constant' of time-integration (different functions of K are identified by a redefinition of the radial coordinate and by simultaneously changing the dust potential). The other classically allowed region, for large b, can be approximated by considering only the cosmological term in the potential. Obviously, the inhomogeneity of z(t, r) is due to the inhomogeneous dust z_m(r).
As usual, Dirac's quantization scheme is used by turning the variables b, z and their conjugate momenta into operators which satisfy the standard commutation relations. The Hamiltonian constraint (3) is turned into an operator which annihilates the physical states, ĤΨ = 0, the so-called Wheeler-DeWitt equation. In the configuration space representation it reads as Eq. (7), where ∂_z and ∂_b are the partial derivatives with respect to z and b, respectively. Factor ordering is partially left open, as indicated by the parameter k; the Laplace-Beltrami ordering is given by k = 1. This equation can be solved exactly for a more general potential which contains arbitrary functions of b multiplied by z and z² [14].
Although z(r) is a field, it suffices to consider the minisuperspace Wheeler-DeWitt equation (7) at each point r, since there are no partial derivatives with respect to r (and because of the simple structure of the solutions). Quantum fluctuations are expected to result in interactions between different points, but this effect can only be considered in the context of a more general model, as is common to this kind of problem.
In order to solve the Wheeler-DeWitt equation it is convenient to introduce the operator b̂_m with [b̂_m, Ĥ] = 0. As indicated by the notation, the eigenvalue of this operator is the effective mass for b. The eigenvalue equation b̂_m Ψ_bm = b_m Ψ_bm can easily be solved, and one finally ends up with the set of exact solutions (8) for the Wheeler-DeWitt equation [14]. Alternatively, one can consider the solutions which are given by a superposition of the exponentials with plus and minus sign to form e.g. the 'cos' and the 'sin' (the 'cosh' and 'sinh' in the forbidden region). The integral in the exponential can be expressed in terms of elementary functions for b_m = 0 (that is, for large values of b) and for vanishing Λ (that is, for small values of b). Neither case is appropriate here.
In order to analyze the formal tunneling probability one has to calculate the ratio prob := |Ψ(b₁)|²/|Ψ(b₂)|², where b₁ and b₂ denote the beginning and the end of the classically forbidden region, respectively. While the prefactor of the wave function is divergent at the borderline of the classically allowed region, it cancels in prob, since Ṽ_bm(b) possesses three distinct, simple zeros (the third one is negative and has no physical significance). The first term in the exponent vanishes for b = b₁, b₂. There remains the second term in the exponential, which is a definite integral between b₁ and b₂. This integral remains finite since at both limits of integration the integrand behaves approximately as 1/x and the infinities at the two boundaries cancel each other.
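The finiteness of the WKB-type exponent between the turning points can also be checked numerically. The sketch below again assumes the reconstructed form Ṽ_bm(b) = b_m b − b² + (Λ/3)b⁴ (a hypothetical stand-in for the potential stripped from this extraction, consistent with the quoted bound on Λ); the suppression exponent S = ∫_{b₁}^{b₂} √(−Ṽ_bm(b)) db is finite because the integrand vanishes like a square root at both turning points:

```python
import math

Lam, bm = 0.3, 1.0         # satisfies 0 < Lam < 4/(9*bm**2)

# Assumed effective potential (hypothetical reconstruction, see text);
# the classically forbidden region is where V(b) <= 0.
V = lambda b: bm * b - b**2 + (Lam / 3.0) * b**4

def bisect(f, a, c, it=80):
    """Root of f in [a, c] by bisection (sign change assumed)."""
    for _ in range(it):
        m = 0.5 * (a + c)
        if f(a) * f(m) <= 0:
            c = m
        else:
            a = m
    return 0.5 * (a + c)

b1 = bisect(V, 1.0, 1.5)   # inner turning point
b2 = bisect(V, 2.0, 3.0)   # outer turning point

# Midpoint rule for S = integral_{b1}^{b2} sqrt(-V(b)) db; the integrand
# vanishes at both endpoints, so the integral is manifestly finite.
n = 20000
h = (b2 - b1) / n
S = sum(math.sqrt(-V(b1 + (i + 0.5) * h)) for i in range(n)) * h

print(b1, b2, S)           # S is finite and positive
print(math.exp(-2.0 * S))  # schematic suppression factor exp(-2 S)
```

Only the finiteness and positivity of S are the point here; in the letter's actual result (9) the exponent additionally scales with the dust parameter z_m, so ln(prob) is linear in the matter content.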
If one is interested in a 'tunneling' from the recollapsing region into the region to the right of the potential barrier, one has to choose the minus sign in the exponent of the wave function. (Note that the Hartle-Hawking wave function has the opposite sign [15] and thus describes a 'tunneling' into the recollapsing region.) The final result is then given by Eq. (9). The simple structure of this result relies on the semiclassical structure of the exact wave function plus the canceling of the prefactor in prob.
Results and remarks
A remarkable feature of the above 'tunneling probability' is the way it depends on the matter content: the logarithm of prob depends linearly on z_m. Apart from factor ordering effects, the tunneling probability equals one for the vacuum model.¹ Since the matter does not influence the tunneling barrier, one has the analogue of a particle with mass z_m (squared) running against a potential barrier. However, different points r in the Kantowski-Sachs model will 'tunnel' individually since they do not interact. This situation is analogous to a cloud of non-interacting particles. In more realistic models one might think of weakly coupled points (perhaps galaxies), which nevertheless will behave mainly independently in the 'tunneling process'. This local process clearly cannot be interpreted as a tunneling of a whole universe.
How is this process to be interpreted? Due to the timeless nature of quantum cosmology, and in particular since a semiclassical time might not be defined at the turning point, both 'universes' (that is, both classically allowed regions) exist 'simultaneously'. In this case one might interpret the tunneling as a quantum wormhole.² However, the same argument of timelessness tells one that there is no classical observer at the turning point, no incoming and no outgoing wave. The world is completely quantum and consequently one cannot speak of a tunneling process or of wormholes. If, on the contrary, the tunneling occurs as a change 'in time', e.g. defined by decoherence, the variable b in some space regions changes from b_max of the recollapsing solution to b_min of the solution to the right of the potential barrier, with b_min > b_max. This is similar to the 'free lunch' process in the very early universe [5,6]. These tunneling regions behave dynamically differently from their environment, since they are now expanding instead of recollapsing, and because of the sudden change of the b-variable. This results in a destruction of the cylinder geometry of the universe.
It has been emphasized by Dabrowski and Larsen that this tunneling process does not require any change in the matter content as e.g. a change of the vacuum of the involved fields. However, Rubakov has suggested that the matter content may be changed due to the very tunneling process [8]. He considered a scalar field, conformally coupled to the FRW model, and observed that the tunneling probability is enhanced with growing particle content. Probably, this remark will apply too when only small parts of the universe are involved. However, the calculation for the Kantowski-Sachs model is much more involved than that for the FRW model.
A comparison of prob for the Kantowski-Sachs and for the FRW model shows a further difference besides that of inhomogeneity. Starting from the Wheeler-DeWitt equation for the FRW model, [∂_a² + (1/3)Λa⁶ + a⁴ − a_m a³]Ψ = 0, one gets (in WKB approximation) an expression for prob in which Ṽ is defined as in the Kantowski-Sachs model. This result is quite different from the expression (9) for the Kantowski-Sachs model, since in the FRW model a_m is determined by the matter content. The analogue for the tunneling in the FRW model is that of a particle running against a barrier the width of which is fixed by the matter content.
It was shown in [14] that one can insert a Kantowski-Sachs cylinder between two FRW half-spheres. The 'tunneling probability' in this compact model changes drastically at the borderline between the different parts. However, since the dust content in the FRW model fixes the constant of integration b m of the Kantowski-Sachs region, the tunneling probability in the Kantowski-Sachs region is a function of the FRW dust, too. The functional dependence is even similar in both regions.
It has been argued in the last section that the calculation of prob is not spoiled by the divergences of the wave function (8). One can even get rid of these divergences by considering superpositions. (In contrast to the delta-functional-like solutions (8) it is e.g. possible to consider 'free waves', the form of which reveals nothing of classically forbidden regions [17].) This is to be done in any case, since usually a universe is represented not by one eigenfunction but by a wave packet. The above calculation of prob is nonetheless necessary, since the forbidden regions are defined for fixed values of b_m only. In other words, every single component of the superposition 'tunnels' independently. Moreover, in more complicated models there might not be a sharply peaked wave packet at the classical turning point, due to interference effects between the 'incoming' and the 'outgoing' part [18]. This is in agreement with the point of view that there is no tunneling because the wave function at this point is completely quantum.
There might thus be two effects of quantum cosmology at the classical turning point of the universe: the breakdown of the semiclassical approximation and a local change into an expanding state. One might try to circumvent the breakdown of the semiclassical approximation by considering decoherence. If this worked (Kiefer and Zeh have expressed their doubts that it does [11]), the increased classicality of the universe would further suppress the tunneling probability [12]. Both quantum effects would then be lost simultaneously, by the same token.

Appendix

A gauge fixing N = R similar to the one in the main text is chosen, but the homogeneity of N, R is not yet enforced.
The gravitational Hamiltonian and the equations of motion then follow. The homogeneity requirement R(t, r) → R(t) = b(t) obviously leads to an equation of motion for b(t) which is independent of any other variable. One can thus insert a solution of it into one of the other equations in order to determine L(t, r). Furthermore, since L′ only appears in combination with R′, all partial derivatives cancel and the equations reduce to those of the inhomogeneous Kantowski-Sachs spacetime.
A non-vanishing pressure of a matter source would lead to an interaction of neighboring points. Thus, only incoherent or pressureless dust may serve as a matter source in this model. Since according to the Bianchi identities the dust flow is geodesic, one may consider radially moving dust. Furthermore, by the other part of the Bianchi identities one obtains matter conservation, ρ(t, r)L(t, r)R²(t, r) = c_m(r). This general case is known as the Tolman model, and in the case R(t, r) → b(t) one gets the inhomogeneous Kantowski-Sachs model. In the above equations there will be an additional term in the Hamiltonian (the potential is supplemented by −c_m(r)), but the equations of motion remain unchanged by the dust. Obviously, there are no further complications to the inhomogeneity argument due to dust.
|
2019-04-14T02:24:46.796Z
|
1996-04-18T00:00:00.000
|
{
"year": 1996,
"sha1": "8908d5bd569c288f11d97ae311a001bf4cd9a656",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/9604036",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a0d126a270ecc44ff0db2f2c5c8d9441eb29c259",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
265222585
|
pes2o/s2orc
|
v3-fos-license
|
Crisis management experience in Hungary
The experience of managing the COVID-19 and the migration crises in Hungary has been highly criticized in academic literature. The article delves deeper into the matter by scrutinizing the dual challenge of managing the given crises while facing disciplinary measures from the EU. The study uses the system approach to explain and predict the interactions between the affected nation states and the EU institutions in times of turbulent crises. The article’s affirmations are inspired by the system approach and are substantiated by comparative findings of empirical studies. The article finds that disciplinary interventions are likely to increase autopoietic tendencies in the targeted member states. Disciplinary measures apparently add to the member states’ challenges inevitably increasing self-reliance and autonomous decision-making.
Introduction
This article aims to research the 'dual challenge' of small- and medium-size states being caught up between an actual challenge and external players who pursue their own priorities. Neither the EU nor other global or regional governance institutions have been analyzed as being part of the external challenges the nation state faces. Countries of scarce domestic resources, like Hungary, tend to find themselves between a rock and a hard place, fighting against the crisis of the day while being surrounded by external expectations of international actors. Such situations create an "either-or" dilemma for the nation state: either to focus on the matter of the given challenge or on the expectations of (relatively powerful) external actors. The article embeds the examination of this dilemma in the "general system theory" developed by Bertalanffy (1968) and Laszlo (1972), using two concrete crises as examples, namely, the responses to COVID-19 and the migration crisis of 2015. The original biology-inspired system model offers an explanatory view of the nation state, which struggles to maintain its decision-making sovereignty and views all external forces through the lens of survival. Hence, the study has the following structure:
• briefly introducing the main facts of the COVID-19 response policies, followed by the discussion of the external expectations represented by international players (mostly the EU);
• a similar pattern is applied regarding the migration crisis (isomorphism);
• the facts on responses and expectations regarding both recent challenges are analyzed in the context of general system theory.
The article's summary of the COVID-19 response measures is based on the comparative study of Szabó and Horváth (2021) for the countries of the Visegrád Group. What still makes the Hungarian example different is the plethora of critical remarks regarding its alleged non-alignment with external expectations. A similar but less detailed analysis is given regarding the migration crisis as well, to underpin that the 'dual challenge' is far from being unique or isolated to only one challenge or crisis. The reason for the selection of the two crises for research is that both belong to the present or the immediate past, are extensively politicized, and have significance to the decision-makers of the EU institutions.
The discussion offers two explanations for the fact that such small and medium-size countries may be caught up in the 'dual challenge'. (1) One is the decades-long development path of democracy transfers versus the organic approaches to the origins of democracy, and (2) the other is the struggle for systemness, in which one system (the nation state) is muddling through the given crisis to maintain itself (autopoiesis; Bertalanffy, 1968) while certain external players (the EU and other international actors) make attempts to deny its systemness or holism, which inevitably leads to a conflict.
Using the logical patterns of general system theory for public administration and public policy responses to various crises
The research design is built on historical examples of recent challenges which are examined using secondary research. The relevant empirical results taken from the scientific literature provide isomorphic examples of handling the same external challenges at the same time. The decisive similarities, despite different geographical and/or cultural circumstances, validate the main affirmation of general system theory that systems (such as nation states) tend to maintain their existence by means of adaptation. In the case of Hungary, the adaptation pressure was dual, for the institutionalized pressure from the EU was eventually contrary to system self-interest. The academic realms of political science, public administration, and public policy have discerned and internalized that social and historical developments may be viewed as manifestations of complexity (Farazmand, 2009; Morçöl, 2014). Within the vast realm of literature on complexity in public administration and public policy, there are at least two main currents: chaos theory and system theory. (1) The former embraces 'uncertainty' or 'hyper-uncertainty', emphasizing a general pattern whereby seemingly minor inputs may have powerful effects (Capra, 1982; Farazmand, 2003, 2009; Galbraith, 1977, 2006). Farazmand (2009) refers to an ample collection of propositions on complexity, with remarks on public sector change, globalization, and adaptation. He elaborates the idea that the latter is a profound system characteristic aiming to preserve 'holism' (Bertalanffy, 1968; Morçöl, 2014). (2) Authors emphasizing system theory are mostly interested in the question of how public organizations can resist external turbulences. The tendency for systemness relies largely on system adaptation (Morgan, 2006) and proactive, future-oriented, and flexible structures, which are recommended by a number of authors (Argyris, 2004; Farazmand, 2006; Stacey, 2001). System reactions to an external stressor may take the form of callous resilience (Wee & Asmah-Andoh, 2022), may manifest in heuristic decision-making (Drack & Pouvreau, 2015, p. 546), and may also rely on system anticipation, with mistakes and occasional miscalculations (Leydesdorff, 2005). Mistakes in public policy may be pre-empted by contingency planning for developments that are rationally anticipated (Scott, 2001). Furthermore, the increasing pressure for adaptation and adaptability may enable organizational strategies of better human development and organizational learning (Farazmand, 2003), gaining agility through a higher level of flexibility and responsiveness (Mergel et al., 2021), collaborative partnerships (Ansell et al., 2021), intensified collaboration through networks (Krogh, 2022), or open government as a composite notion embracing the dimensions of information availability, transparency, participation, collaboration, and information technology (Gil-Garcia et al., 2020). Moreover, Farazmand (2009) calls attention to the necessity of increasing organizational and administrative capacities as well as policy-making and political capacity-building for the sake of creating more adaptable public administration systems. The referred propositions fit into the thought of system adaptation: maintaining internal manageability (Ansell et al., 2021) and holism, which appear to be the underlying factors of systemic self-correcting tendencies, system learning, system memory, system anticipation, and innovation, in line with general system theory (Bertalanffy, 1968). It also needs to be mentioned that Pollitt was largely skeptical about the scientific validity of using system theory or complexity theory (Pollitt, 2009). According to him, complexity theory is too vague, does not really have an edge, and represents a descriptive rather than an explanatory approach; furthermore, it lacks a specific scientific method and overemphasizes structure over dynamics.
The practical applicability of the system method is highlighted by Meek and Marshall (2018) using the illustration of the Southern California metropolitan water management system. The authors emphasize that stressors and shocks get absorbed in a process of system learning and system transformation, which are emerging properties (Dahlberg, 2015) and create new ways of being (McMillan, 2004, p. 32) while leading to a higher level of resiliency by elevating the level of thinking from reductionism, facilitating new structures, new levels of self-organization, differentiation including new feedback loops, and new time- and path-dependencies (Koliba et al., 2019). A pattern of similar thinking has emerged due to the outbreak of the COVID-19 pandemic, as representatives of the social sciences took the initiative to utilize the research potential in the pandemic (e.g., Gkiotsalitis & Cats, 2020, or Brodeur et al., 2021).
3 The COVID-19 pandemic challenge as a conflict between reality and expectations
The descriptive approach to the COVID-19 response of Hungary
To properly contextualize public administration analyses related to the COVID-19 pandemic in Hungary, one ought to pay attention to the legalistic public administration culture inherited from the Austro-Hungarian Monarchy (Drechsler, 2005; Hajnal, 2003; Hajnal, 2008, p. 132; Hajnal, 2014; Hintea-Ringsmuth-Mora, 2006), entailing that policies, policy measures, and public administration decisions are deeply embedded in the Rechtsstaat concept, according to which the state is the main guardian of the public good and hence the primary creator and enforcer of law.
In the case of Hungary, the realm of COVID-19 containment policies has proven to be a dual battlefield: primarily the pandemic itself and secondarily the expectations of external polities, mostly of the EU institutions. This distinction has split scientific inquiry into descriptive and normative realms, whereas 'normative' refers not to how the pandemic itself should have been managed to be effective or efficient but to the academic reflections on the external expectations and the extent to which they were matched by Hungary. The following accounts used comparative methods which grant a sound overview of the developments as they unfolded.
Articles 48-54 of the Constitution (Cardinal Law) of Hungary offer a variety of sorts of emergency law: (1) exceptional state, (2) state of emergency, (3) preemptive defense state, (4) terror emergency, (5) suffering unexpected attack, and (6) constitutional emergency. The latter option was activated for COVID-containment policies, which enabled the Government to suspend or bypass the enforcement of certain laws or to take other exceptional measures. The definition of a human epidemic had already been given by Act No. CXXVIII of 2011, which the Government used to create Decree No. 40 of 2020 (March 11); but, according to section (3) of Art. 53 of the Constitution, such decrees may remain in force only up to 15 days. This made it essential that the Parliament enacted the text of the Decree as an Act (No. XII of 2020), which made it possible for the decrees issued under the term of the constitutional emergency to prolong the effect of the decrees issued under sections (1) and (2) of Art. 53 of the Constitution in an arbitrary fashion. While the original (15-day) Decree was accepted by the opposition parties, the latter possibility of prolongation faced their criticism. The Hungarian example is far from exceptional and has been compared with other countries in the region: Slovakia, the Czech Republic, and Poland, which, together with Hungary, form the so-called Visegrád (V4) countries.2 Horvat et al. (2021) argue that the countries they compared (Poland, the Czech Republic, Slovakia, and Hungary) intensified their regulatory efforts and their public service digitalization in order to contain and manage the pandemic in a rather similar fashion. Furthermore, Szabó and Horváth (2021) applied a descriptive public policy approach (instead of a traditional legalistic public administration analysis) as follows.
On the verge of the COVID-19 outbreak, the Government established a highly centralized Operational Body already on January 31st, 2020, the first case of infection having been officially recorded on March 4th and the first fatality being reported on March 15th. On March 11th, a State of Emergency was declared based on Art. 53 of the Constitution,3 while on March 30th, the Act on Containing Coronavirus was adopted by the Parliament.4 The Act granted the right to the Government to take any measures necessary to contain and handle the pandemic, including the suspension of certain laws without any specific deadline. The wide authorization had certain limitations though: the authorization was to be ended upon the decision of the Parliament; furthermore, the Government had to observe the principles of necessity and proportionality in its measures. In fact, the first State of Emergency was called off by the Government on June 18th and was replaced by the more specific and much less restrictive state of 'epidemiological preparedness'. Free and voluntary inoculation programs commenced in early February 2021, enhanced by a large-scale communication campaign. The peak of the pandemic in Hungary was April 13th, 2021 with 272,974 registered active cases, while by the 1st of September (the first day of school) there were only 4,826 active cases. Until that time there had been 30,059 fatalities, 777,646 people had pulled through, and 5,772,010 people had received at least one dose of vaccine5 (ca. 59.7% of the population). The descriptive remarks on the Czech, Slovak, and Polish accounts are displayed in the following table. It appears that, given their geographical and cultural proximity, the COVID-19 containment patterns had significant similarities (isomorphism) in the four countries (Table 1).
The apparent pattern of events in the V4 countries was first to interpret the new emergency constitutionally, then to react with drastic measures such as certain forms of curfews, the closing of public places, and mask mandates to slow down the pandemic for as long as it seemed necessary, while boosting public health capacities until vaccination became available. The differences within the V4 group, which made for instance Slovakia successful, were bound more to the practical details of implementation and citizen cooperation than to the concrete government measures, which were largely similar.
The second and the third waves were controlled in the same fashion; mass vaccinations were started in Spring 2021.
As another remark regarding morphological similarities, Grzebalska and Madarová (2021) argue that the V4 countries have undergone a certain level of remilitarization during their COVID-19 containment policies.
Applying the interventionist school of democracy: the normative approach of academic discussion regarding Hungary's counter-COVID-19 measures
Critical authors on the Hungarian handling of the pandemic tend to add political and legal aspects, borrowed from or inspired by the interventionist approach of democracy, to their inquiries, upon which they establish their criticism.
Christensen and Ma (2021) put the US, China, Israel, and Hungary into the group of countries in which governments used the pandemic for political purposes in one way or another. Similarly, the concern of a political power grab (Cormacain & Bar-Siman-Tov, 2020), the concern that crisis management means may threaten the rule of law by not complying with its liberal interpretation (Drinóczi & Bień-Kacała, 2020), and the concern of drifting toward authoritarianism (Landman & Splendore, 2020, p. 1063) are amply represented in the relevant literature. Fear of curtailing parliamentary powers by executive means under the pretext of pandemic control was expressed by Bolleyer and Salát (2021). Similarly, Moise et al. (2021) embed their concern into the pre-existing narrative that Hungary is no longer a democracy and COVID-19 just enabled the government to take even more power. Bohle and Eihmanis (2022, p. 497) argue that Hungary is a populist regime, as are such regimes in the region and around the world, because it cannot afford unpopular policy measures; therefore, Hungary's policies are less scientific or anti-scientific. Concerns for civil society were put forth claiming that Hungary's civil society has "considerably shrunk because of repressive policies" of fighting against the pandemic (Feischmidt & Neumann, 2022, p. 17). Sedláková (2021, pp. 79-80) refers to the fact that Hungary was the only country in the EU that used Chinese (and Russian) vaccines and that this policy decision was criticized as "anti-democratic", while Goodwin et al. (2022) found that political behaviors and vaccine preferences may be connected.
The remarks on the descriptive and the normative schools of democracy and COVID-19 containment policies throw light on the duality of challenges: the first being the matter itself and the second being the challenge of expectations stemming from interventionist legacies and tendencies discussed later in this article.
What general system theory teaches us about fighting illegal migration in Hungary
Hungary witnessed a steady inflow of migrants of 20,000 per annum in the early 2000s, which increased to more than 25,000 in 2005. From 2006 to 2013, the number of immigrants floated between 20,000 and 25,000, while the internal composition of migrants also changed. The proportion of migrants coming from Asian countries grew considerably, partly because ethnic Hungarians from the surrounding countries were granted citizenship under simplified rules and thus were not included in the migration statistics. After a brief correction in 2016, the 2017-2020 period brought a new wave of immigration of 49,312 in 2018 and 55,297 in 2019, which was over 64,000 together with the immigration of citizens of the surrounding countries in 2019. Due to the COVID-19 pandemic, the numbers decreased in 2020 to a gross 51,000, which included 43,785 migrants from countries other than the neighboring ones (figures from Gödi and Horváth, 2021). Contrary to legal migration, illegal migration shows a totally different pattern: after a modest figure of 6,903 in 2011, there was a steep growth until 2014 with 50,065 illegal border crossings. An unexpected leap took place in 2015 with 414,237 annual new entries (Kui, 2016), with daily peaks occasionally exceeding 10,000 in August and early September, until the Government decided to close the borders with law enforcement personnel and later with physical installations as well. At that time the country's population was 9,778,000, which gives a rough estimation that if a proportionate occurrence had happened in the US (with a population of 320,878,000 in 2015), it would have seen the arrival of 13,595,735 illegal entrants, most of whom would have arrived within a two-month timeframe. Even given the fact that almost all migrants were heading for Western Europe through the Austrian-Hungarian border, this was an utterly unstable situation threatening the entire population, especially domestic minorities such as the Roma, who faced the outlook of losing their relative positions in public attention to a new populace: there was a realistic threat that if either Austria or Germany had intended to close its borders, a mass of exponentially growing, frustrated, and traumatized people would have remained in the country. The comprehensive presentation of Hungary-critical academic writings would be beyond the limits of this article, but to give a hint of the content of the criticism, the following accounts are mentioned.
Cantat and Rajaram (2019) and Majtényi et al. (2019) take the stance that what had happened was a consequence of Hungarian backsliding in the rule of law and democracy, largely represented by the grievances of NGOs. Others put the emphasis on political developments such as populism (Etl, 2022) or intolerant, xenophobic, islamophobic, and antisemitic6 (Kalmar, 2019) tendencies, or even 'Caesarian' rhetoric (Sata & Karolewski, 2020). Further authors (Pap et al., 2019) use the migration crisis to put forth counterfactual remarks as if the extreme right had any influence in the government (as a matter of fact, the extreme right joined the unified opposition in 2020, which lost the elections in 2022.7) Further accounts mention racism and welfare chauvinism (Andits, 2022) and de-democratization and politicization (Beger, 2023). Legal scholars tend to emphasize that Hungary's actions are against human rights (Hoffmann, 2022); moreover, that the rule of law failed in Hungary en bloc as a concept, though not without the latent participation of the EU (Halmai, 2020).
The enlisted affirmations indicate that much of the criticism, however aired in the academic realm, is overtly politicized and highly resembles the cited critiques regarding the Hungarian anti-COVID-19 policy measures. On the other hand, after having analyzed more than 160 corresponding official documents, Canveren and Durcaçay (2017) came to the conclusion that the handling of the migrant crisis in Hungary should be seen as a series of efforts of securitization and Euroscepticism, an approach which possesses significant resemblance to the self-preservation aspect of system theory. Still, the Hungary-critical authors are not mistaken that the migrant crisis was highly politicized (Cantat & Rajaram, 2019), but there is no example of any country where a similar occurrence has not become so.
System theory hints that any system, including a nation state, has the 'telos' to maintain its integrity within its means. Apart from the cited accounts, there are relatively scarce remarks on this potential conclusion, although the inflow of migrants necessarily brought the importance of system boundaries or the membrane effect into the public realm (Bailey, 2008). Luša (2019) develops an explanatory view on the migration phenomenon applying a small-country perspective, coming to the conclusion that the countries analyzed (Croatia, Slovenia, Austria, Denmark, and Sweden) pursued policies that were not aimed at satisfying pan-European policies; instead, small countries pressed forward "to reduce migratory pressure and maximize national leeway" (Slominski & Trauner, 2017, p. 101). One can add Hungary to this group of small countries which are following their own course while considering the EU an external hindrance in pursuing their own system-driven objectives.
The country-critical normative school
The description of the academic debates on the origins of democracy and the rule of law as decisive Western values and organizing principles is beyond the scope of the current paper but has been extensively discussed already (Gellén, 2021). The current chapter continues with the expectation-laden remarks of country-critical authors, emphasizing that similar criticisms had existed before the two major crisis events (COVID and migration). Reference to this train of thought is necessary to illustrate that the authors' expectations prioritize democracy and the rule of law in crisis management.
According to Ágh (2013), backsliding in democracy could be discerned from 2010. Soyaltin-Colella (2020) and Huber and Pisciotta (2022) uphold the already existing backsliding theory and promulgate the idea that Hungary and Poland should be sanctioned by EU interventions. Closa (2019) hints that there is a rule of law crisis in Hungary and in Poland, citing Pech and Scheppele (2017a). After having interviewed Commission officials, Closa (2019) found that (interventionist) scholarly criticism by Kelemen (2017), Pech and Scheppele (2017a, 2017b), and Kochenov (2016) influenced the Commission officials to justify their actions regarding why they had restrained themselves only to infringement procedures against Hungary and Poland. Appel (2019) urges joint international effort combined with street demonstrations to overturn Hungarian policies. Kazai (2021) criticizes the Hungarian legislative process, hinting that it does not fit into the frameworks of the rule of law.
Not every scholar accepted such views. Ovádek (2018) found, after having analyzed 80 relevant publications, that academic publications concerned about democracy and the rule of law in Poland and Hungary tend to lack proper methodology and thus must be rendered ungrounded.
Remarks on transboundary challenges
Both the migrant crisis of 2015 onwards and the COVID-19 crisis fit the category of transboundary crises (Boin, 2019). The term "transboundary" does not only refer to geographical borders but also to potentially all kinds of boundaries in the cognitive, political, and physical realms. Such transboundary crises push states toward centralization (t'Hart, 2023) as well as stronger internal and international coordination, which inevitably has aspects of politicization, for collective actions of resilience need to be "sold" to the public. Building transboundary crisis management institutions is also inevitable (Boin, 2019, p. 98). It is common in many COVID-19 examples that the top leader assumed in-person command and responsibility, which had significant effects on political unity. This can be viewed as politicization, but transboundary crisis management theory shows that strengthening command and control structures and centralization are, to a certain extent, inevitable or necessary (Boin, 2019). Transboundary crisis management capacity can be built through learning and technological development (Farazmand, 2003) as well as national and transnational information exchange and coordination (Parker et al., 2020; t'Hart, 2023), which entail drastic policy measures within the frameworks of the laws of emergency (Horvat et al., 2021).
From the briefly summarized remarks on transboundary challenges, it appears that the Hungarian case is not special or unique in its content. What makes it different is the realm of expectations.
Inquiry into the system clash problems in managing crises
Should system theory be a valid realm of scientific explanations, it ought to have similar explanatory power in system-to-system or system-to-subsystem interactions, depending on how we define EU-member state relations. There are at least three possibilities to classify EU-member state relations applying the notions of general system theory. (1) The EU is a non-system but the member state is a system. (2) The EU is a system as well as the member state; therefore, there is a system-to-system cooperation or a system-to-system struggle between them. (3) The EU is a super-system, while the member states are sub-systems. In the latter setting, the EU is a top-down structure with its decision-making center in Brussels being represented locally by the member states. In the following brief discussion, Hungary is taken as an example of a recalcitrant member state and the three possibilities are examined.
1. If the EU is a non-system and Hungary is a system which chose to cooperate with other similar systems (other member states), then EU-member state relations must be governed by mutual interests; otherwise, any nation state under the institutional pressure of the EU is likely to use its systemic powers to maintain callous resistance (Wee & Asmah-Andoh, 2022) or any other form of resilience toward external stressors of any sort, implicitly the EU. Countries of strong internal cohesion, culture, language, and a common legacy of past experience like pain and suffering (system memory) are definitely more system-like than the newly created EU.
2. If the EU is a system, Hungary as a member state is a system as well. Hence, cooperation, competition, or struggle are possible between them while neither denies the other entity's systemness. Based on the given realities, this possibility appears the least plausible.
3. The EU-member state relations may also be modeled as system-subsystem relations, which assumes the liaison between lower-level components of the system (the member states) that are controlled and those (higher-level) ones which exercise control (Laszlo, 1972, p. 68). Based on this definition we get the formula for a top-down relation between the EU and a member state as a system-subsystem interaction, whereas the member states might benefit from being a subsystem in the EU by being able to interact and mutually have access to each other's resources at significantly lower transaction costs (Garoupa, 2012), but at the cost of maintaining the super-system (Laszlo, 1972).
Based on research on EU documents, Rech (2018) developed a classification of the legitimization of disciplinary efforts by EU institutions toward Hungary as a member state, which hints that the EU views itself as a super-system. Rech (2018) offers the following overview of various grounds of democratization by top-down intervention within the EU:
• Agreement by all member states to yield to EU law and be democratized alike (minimalist-positivist argument: super-system regulation).
• Upholding constitutional values is necessary to maintain a supranational entity (existential argument for the EU as a super-system).
• Upholding the rule of law and democracy contributes to the stability, peace, and prosperity of the EU (teleological argument for the EU as a super-system).
Especially the latter two intervention grounds bear theoretical weight regarding the system-driven aspect of the EU's exercising disciplinary power over member states based on the existential interest (systemness) and the telos of the super-system. Both point at the EU's viewing itself as a super-system with its own existential arguments and/or telos. The minimalist-positivist argument can also be better understood in the light of the EU's ambition of acting as a super-system and applying internal disciplinary powers accordingly, even at the cost of the member states' adequate crisis-management efforts. Rech's (2018) findings corroborate the validity of system theory from the angle of the EU's self-definition.
Which one to manage: the matter of the crisis or external expectations?
The term 'managing external expectations' refers to the challenge that the EU super-system's disciplinary power poses to recalcitrant member states. Probably the best illustration of the dilemma of either managing the actual crisis or the external expectations is the non-disbursement of the COVID-19 recovery funds,8 which illuminates the following sequence:
1. Crisis penetrating the system's barriers.
2. Internal response: managing the crisis by applying hierarchies, regulations, and special (crisis-specific) measures.
3. External response: criticism and denying access to recovery funds.
4. Internal consequences: aggravated/lengthened crisis.
At step 4, a real policy challenge occurs: whether to concentrate on the internal materiality of the crisis or rather to turn to mitigating external pressures by managing expectations so as to have the COVID-19 recovery funds disbursed and thus mitigate the secondary effects of the crisis. The fundamental question related to system theory in the democracy/rule of law debate is whether democracy and the rule of law are system-bound or system-neutral phenomena. If the former position is true, then democracy and the rule of law necessarily stem from the organic processes of the given system. On the other hand, if they are system-neutral, they can be transferred by external forces from system 'A' to system 'B'. This is a question of fundamental importance because if we accept the principle of democracy as an inherently system-bound phenomenon, then all attempts to transfer democracy from 'A' to 'B' are per definition un-democratic or even anti-democratic.
The question whether the rule of law is system dependent or system neutral can be answered by simply referring to sound legal methodology.Traditional dogmatic legal inquiry offers grammatical, logical, historical, and systemic methods of interpretation in civil law thinking while case law research also offers extensive use of analogies of previous cases.Both legal realms (civil law and case law) require sound contextualization embedded in the legal culture of the given jurisdiction (Möllers, 2017) without which the concept of the rule of law lacks methodological anchorage which undermines the validity of the findings drawn from it.
The following Chart 1 displays a simplified model of the 'dual challenge': the pressing question of managing the materiality of external crises or managing external expectations.
The chart above summarizes the approach of this article in a simplified model. External challenges such as the pandemic and the migrant crisis tend to appear according to a stochastic function in time, and the nation states have developed their adaptive-resilient approaches adequately throughout history. The EU and potentially other international (global or regional) governance entities tend to use the crises as

8 https://www.politico.eu/article/brussels-turns-down-hungarys-recovery-plan/ Retrieved: 01.30.2023.
opportunities to leverage their own priorities regardless of whether they align or collide with the organic autopoietic tendencies of the nation states.
Findings and concluding remarks
The current article elaborates on the dual challenge of small EU member states which face crises such as COVID-19 and the migrant crisis of 2015 while having to manage the challenge of the EU institutions' expectations. The study found that such countries, primarily but in certain aspects not exclusively Hungary and Poland, found themselves in a sequence of events as follows:
1. External challenge.
2. Policy responses.
3. New challenge of unmet external expectations posed mostly by the EU.
4. Policy dilemma to choose between managing the external challenge or the challenge of the expectations.
The research found that system theory offers a robust explanation as well as predictive remarks about the behavior of small and medium member states in times of crises. Small countries tend to demonstrate more system-likeness in their self-protective tendencies (Luša, 2019; Slominski & Trauner, 2017). The relatively small size and a sense of isolation alike appear to enhance system thinking in decision-makers' mindsets and systemness (holism) in general.
The article emphasizes that tendencies putting forth expectations represented in public policy and in academia alike are not new, but they gained momentum with the EU's becoming a disciplinary "casual Behemoth" (Vachudova, 2005). The EU's drive to stay in control also has certain implications in system thinking, according to the findings of Rech (2018), but this creates system-to-system tensions or the recalcitrant member states' struggle for systemness, depending on whether we identify the EU as a system or as a non-system. Rech's findings are in contrast to the EU's explanations for applying disciplinary measures against Poland and Hungary, namely, the pro-democracy and pro-rule-of-law arguments, which, according to the findings of the article, are decisively more system-bound than system-neutral values.
The question follows whether either the EU or its recalcitrant member states follow the right pattern.Regarding this question, the following remarks can be made based on the findings of this work.
1. It can be affirmed with high certainty that the EU did not support the targeted member states' (Hungary, Poland, or potentially other countries) efforts to contain or manage the crisis of the day, by overemphasizing opportunities for discipline at the cost of potentials for help. Discipline, in the legal sense, refers to norms issued in the past, while new emergences require system creativity and developing new properties (Dahlberg, 2015); therefore, disciplinary efforts lean toward being non-crisis-responsive and ultimately contrary to the autopoietic tendencies of the member states.
2. The small/medium member state, especially with a sense of relative isolation, apparently seeks refuge in enhanced system thinking by extensive reliance on its own resources and initiatives. This was discernible as a general pattern in the entire EU (see "coronationalism" by Bouckaert et al., 2020), but smallness and the sense of isolation generate a certain awareness of vulnerability which leads to a higher level of self-reliance (autopoiesis).
Based on these two remarks, it appears substantiated that the EU's disciplinary efforts, using the crisis for enhanced impact, entail a higher sense of vulnerability in the targeted nation states, which leads to a higher reliance on the member states' own initiatives, leading ultimately to a tendency of system-driven decoupling from the EU in times of crises.
The concluding remarks follow that a new epoch of crises is expected to bring a vicious circle of EU-member state interactions. If the EU enhances its disciplinary efforts, it strengthens autopoietic efforts from the member states, to which new disciplinary measures are to come in the following sequence (Table 2).
Having scrutinized a significant chunk of the relevant literature pertaining to the realm of public administration and public policy, it appears to be a substantiated finding that the EU uses crisis situations to leverage its power position against

Table 2 The vicious circle of the EU-member state system interaction in the epoch of crises
1. Higher level of crises, more challenges.
2. Higher system-reliance (autopoiesis) from the member states.
3. Higher autonomy, higher deviation from EU policies.
4. More efforts to discipline by the EU.
5. More challenges for the member state.
members. The inference follows from this observation that, from the member states' point of view, the crisis becomes more complicated because of crossing a new boundary (Boin, 2019), namely, entering the EU-member state debate. To put it in the language of system theory: the EU has certain tendencies to utilize external stressors to articulate its own systemness (Bertalanffy, 1968) at the expense of its members' holism. The article found that the COVID-19 management approaches of Poland, Hungary, the Czech Republic, and Slovakia were apparently very close to each other, with only slight differences. What differentiated the Hungarian and partly the Polish cases from the other examples was the EU's enhanced disciplinary actions toward them. In addition, timely crisis management appears to have been hindered by the EU's tendency to adhere to its previous positions regardless of their gradually becoming asynchronous with reality. This phenomenon by contrast underlines the nation states' being organic systems with an inherent drive for autopoiesis, being engaged in solving, avoiding, or mitigating challenges or crises where and when they emerge. This behavior contradicts the EU's clinging to its own interests, which inclines the EU to put pressure on the member states to comply with its own regulatory, existential, and/or teleological initiatives (Rech, 2018). One must admit that the nation states' crisis responses are far from being flawless. The currently experienced turbulent pattern of public policy and public administration crises and challenges is expected to enhance predictive thinking, in connection with system thinking, despite predictions' being occasionally vague and erroneous (Drack & Pouvreau, 2015, p. 546; Leydesdorff, 2005). Still, the timeliness and ownership of decisions tilt the balance toward autonomous, system-driven crisis management in the near future.
Limitations
The paper uses secondary data sources, all of which are referenced. However, the data collection methods used by the cited authors differ from each other; therefore, they are not entirely comparable.
The study uses the general system model as an explanatory framework to unify the findings of the following: theories regarding state responses to the COVID-19 pandemic and the 2015 migration crisis, theories of the nature and transferability of democracy, country-critical theoretical accounts of political science and law, and cross-boundary challenge theory. Thus, the use of multiple theories may affect theoretical clarity, but the author's conviction is that even different scientific approaches may come to similar conclusions regarding the same element of reality. Contradicting scientific affirmations require further clarification, to which general system theory offers one possibility.
Another source of limitation is that the paper focuses on one country, while other countries are discussed unevenly. Further research will be required to clarify whether the enhanced autopoietic tendencies described in this paper apply to other countries that are more or less isolated for any reason.
Funding Open access funding provided by National University of Public Service.
[Figure: Simplified model of the 'dual challenge' situation]
Table 1. Legal steps and policy measures by the Czech Republic, Slovakia, and Poland
Slovakia: The Constitution invests the Constitutional Court with control over norms enacted in a state of emergency, of which there are 3 categories. The Constitution allows a law to be issued on the basis of emergency, which was issued in 2002; according to this, emergencies can be announced for up to 90 days. Act No. 42/1994 allowed the declaration of a "crisis situation".
Czech Republic: Constitutional Act of 1998, Arts. 5-8. The Government may enact a max. 30-day state of emergency in case of natural, industrial, or ecological disaster, of which the Government informs the House of Representatives without delay. The House may annul the decision or extend the period. During a state of emergency, the House must discuss bills within 72 h and submit them within 25 h to the Senate.
Looking around the corner: COVID‐19 shocks and market dynamics in US medical tourism
Abstract Patients have historically travelled from across the world to the United States for medical care that is not accessible locally or not available at the same perceived quality. The COVID‐19 pandemic has nearly frozen the cross‐border buying and selling of healthcare services, referred to as medical tourism. Future medical travel to the United States may also be deterred by the combination of an initially uncoordinated public health response to the pandemic, an overall troubled atmosphere arising from widely publicized racial tensions and pandemic‐related disruptions among medical services providers. American hospitals have shifted attention to domestic healthcare needs and risk mitigation to reduce and recover from financial losses. While both reforms to the US healthcare system under the Biden Presidency and expansion to the Affordable Care Act will influence inbound and outbound medical tourism for the country, new international competitors are also likely to have impacts on the medical tourism markets. In response to the COVID‐19 pandemic, US‐based providers are forging new and innovative collaborations for delivering care to patients abroad that promise more efficient and higher quality of care which do not necessitate travel.
INTRODUCTION
Pandemic-imposed travel restrictions and the redirection of hospital resources to treat COVID-19 patients have created both demand and supply side shocks around the world, freezing the cross-border buying and selling of medical services referred to as "medical tourism." The marketplace dynamics are further complicated in the United States by the recent change in administration and the country's troubled atmosphere that has created an inhospitable environment for medical visitors from other countries. This COVID-19 cocktail of confusion is transforming the landscape for US hospitals and clinics that serve international patients, as well as opening the door to new forms of competition.
MEDICAL TOURISM: CROSS-BORDER TRADE IN MEDICAL SERVICES
The General Agreement on Trade in Services (GATS) of the World Trade Organization deconstructs cross-border trade in health services into four modes. 1,2 While medical tourism has historically focused on patients travelling out of their home country for medical care (Mode 2, Consumption Abroad) and hospitals establishing operations in another country (Mode 3, Commercial Presence), the COVID-19 pandemic has been a tipping point for providers to develop more sophisticated telehealth strategies to reach patients abroad (Mode 1, Cross-Border Supply of Services). This rapid shift to telehealth is leading to broader access to services, without patients needing to leave their home, and it is unlikely that the efficiencies and market access gained by the increased utilization of telehealth will be rolled back after travel restrictions are relaxed or eliminated. The commercial presence of hospitals abroad (Mode 3) will also be impacted in the near term. US companies, including academic medical centers (AMCs), have been establishing operations in foreign markets for many years.
FACTORS DETERRING MEDICAL TOURISM
The plummet of international travel has had a dramatic impact on medical tourism. Globally, international travel decreased by 87%, comparing January 2021 to January 2020. 3 Total travel and tourism spending for inbound travel to the United States was down 79% relative to 2019 (Figure 1), with January 2021 inbound spending on passenger airfares only 10% of spending in January 2019. 4 The pattern of lockdown/release/lockdown/release is expected to continue until vaccines are widely distributed, and uncertainty about future travel restrictions will continue to deter foreign patients from planning cross-border medical travel.
The domestic and international media have described the national public health response to the pandemic before Biden took office as a failure, although recent vaccination distribution has been more favorably reviewed.
The variability in social distancing guidelines, mask requirements and other mitigation efforts among states and local communities has created confusion for residents and failed to stop the spread of the virus. The United States has the largest number of deaths due to COVID-19 of any country in the world, with more than 550,000 deaths. The overall image of the United States may be a deterrent that healthcare providers must overcome to attract international patients to again travel to the country for medical care, although the impact of the pandemic may be temporary unless the negative effects are prolonged and affect the identity or culture of the country. 6 The violence and protests ignited by George Floyd's death, rising anti-Asian violence, gun violence, anti-mask activities and violence against healthcare workers, anti-immigration policies, and restrictions against travel to the United States from some of its closest allies 7 have presented the country as a less than welcoming destination.
Short-term impact of COVID-19 for US hospitals & clinics
Unlike other developed economies, medical services in the United States are considered the responsibility of private citizens, rather than a government obligation. Therefore, hospitals and clinics are considered, and see themselves as consumer-based enterprises. The dramatic increase in the global burden of non-communicable diseases (NCDs), coupled with increases in wealth among emerging economies create a robust and sophisticated market from which US providers attracted foreign patients seeking treatments. 8 Elective surgeries generated $147 billion in revenue for US hospitals in 2019, 9 and the short-term halting of and subsequent restrictions on elective procedures has resulted in deep financial losses. In the early phase of COVID-19, non-COVID inpatient admissions were approximately 60% of pre-pandemic levels, 10 and hospital losses were estimated in excess of $50 billion per month between March and May 2020. 11 These unprecedented financial challenges have strained the US healthcare system overall.
The pent-up demand in the immediate post-pandemic period (late 2021 through 2022) will create a surge, challenging hospitals' capacities to provide timely access to care, although pre-pandemic US hospital occupancy was 65.5%, suggesting that there will be physical inpatient capacity. 12 However, US hospitals are apt to focus on the domestic rather than the international patient market. Since the prices for US medical procedures are the highest in the world, some price-sensitive international patients seeking elective procedures may be attracted by lower cost options offered by hospitals or specialty clinics in other countries. Other patients may opt to forego services altogether or delay care. An estimated 40% of Americans may have delayed or avoided care due to the COVID-19 pandemic, 13 suggesting that there may be a substantial number of people globally who have delayed care, exacerbating underlying health conditions. The short-term impact is a loss of global market share for US providers and opportunities for others.
Close to home
US hospitals and clinics will refocus on short-term revenue sources domestically and away from international markets in response to international travel restrictions and mounting financial losses. Established international patient programs in US hospitals generate approximately $15.7 million in gross revenue less direct international program expenses on average, 14 which is less than 1% of the $1192 billion spent on hospitals within the $3.8 trillion national healthcare expenditure. 15 The costs of international marketing and the complexity of managing foreign consumers have increased dramatically with the onset of COVID-19 as fewer patients are able to travel to the country. In response to these challenges, AMCs will re-examine their foreign market strategies, 16,17 and those with significant investments in offshore locations will "hedge their bets" by lowering their exposure in those foreign operations and locations. AMCs that have established foreign beachheads may try to sell them, establish them as regional hubs, or reformulate them as license-only agreements, and each of those strategies has implications for international patient flow to the United States.
With a heightened focus on the domestic market in the post-pandemic period as economic pressures mount, US hospitals may coordinate referrals and pool risks. For example, Hospital A specializes in musculoskeletal disorders, while Hospital B specializes in cardiovascular disease. Hospital A and B may exchange referrals to improve outcomes and efficiencies while reducing overhead, especially for expensive technology and hard-to-find specialty personnel. The two hospitals may then jointly contract with self-funded employers to offer employees specific services at lower cost and better outcomes. These direct contracts already occur and augur major changes in healthcare insurance and risk management markets, which in turn will impact medical tourism. 18,19
HEALTHCARE INSURANCE AND MEDICAL TRAVEL
Many AMCs and other hospitals are consumers of, and active market actors in, the US healthcare insurance marketplace. As large-scale employers in many places, AMCs are trying to manage the costs of health insurance and medical care for their employees. The recent unsuccessful, high profile collaboration among Amazon, JP Morgan Chase, and Berkshire Hathaway ("Haven") is evidence of the ambitious attempts to rein in spending by US employers. In the immediate post-pandemic period, AMCs will become more heavily involved in risk, like insurance companies. Value-based healthcare is an example of the efforts to engage healthcare providers to reduce costs and improve outcomes. As US hospitals become more risk-sensitive, their interest in riskier foreign patients may wane.
SELF-INSURED EMPLOYERS
Historically, companies that manage their own health insurance plans (ERISA-compliant, self-insured plans) were considered reasonable targets for those trying to "sell" medical tourism (medical services imports), since self-insured employers directly bear the cost of medical care for their employees, and paying for employees to obtain lower cost care abroad may be a strategy to reduce overall healthcare costs. While this has been modestly successful, this market has not scaled. As a result of supply disruptions resulting from COVID-19 and more risk being shifted to hospitals due to direct contracting with employers, there will be a consolidation among benefits managers handling self-insured plans. Simple bundles or packages for episodes of care will be more transparent and available to more small and mid-sized employers. While the overall number of employer-based self-insured plans will increase, their focus will be on the competitive offers from nearby hospitals and clinics that offer lower cost, high quality medical care domestically. Here too, COVID-19 shocks will have ripple effects on foreign markets.
THE POLITICAL UNKNOWNS
The 2020 Presidential and Senate elections had a profound impact on the US healthcare system. The political landscape has changed with a Democratic President, Joseph R. Biden, and a 50-50 split in the Senate with a deciding vote cast by Vice President Kamala Harris. Initial signs indicate that the harsh divide between the two political parties, as well as in American society overall, has not abated. While still in the early days of the Biden Presidency, it is clear that healthcare policy will be redirected to protect and expand the provisions of the Affordable Care Act (ACA or Obamacare) that had been under attack from the Republican-controlled Senate. The number of uninsured Americans increased from 26.7 million in 2016 to 28.9 million in 2019 (a 4-year high) even before the pandemic hit the United States, 20 and is estimated at 31.5 million Americans in 2021. 21 The pandemic has had a substantial impact on unemployment (Figure 2), and many individuals who lost jobs are also at risk of losing health insurance coverage, although recent legislation made temporary coverage more affordable. 22 Average premiums and out-of-pocket costs have risen steadily, with the average premium for employer-sponsored health insurance at $21,342 in 2020. 23 There are also hidden costs to uninsured individuals. Various studies estimate that between 20,000 and 35,000 Americans die each year due to lack of health insurance. 24,25 The impact of COVID-19 continues to create havoc globally as borders open and close depending on spikes in the numbers of cases and the portion of the population that has been vaccinated. The initial responses to COVID-19 in 2020 were slow and haphazard due to a lack of a coordinated national plan. Each state was and is still left to its own discretion as to the types of measures to take to reduce the risk of transmission, though the mortality rate is dropping.
While deaths are decreasing, the total number of COVID cases is remaining steady or increasing as the spread of the disease progresses faster than the vaccine roll-out.
Efforts to aggressively roll out vaccines to the US public are gathering steam but are hampered by vaccine hesitancy as well as widespread inequities within the society that are leaving out large swaths of the country. A recent study estimates that more than one-third of Americans between 18 and 88 are vaccine hesitant for various reasons. 26 Vaccine hesitancy prevents herd immunity, as experts estimate 60%-70% of the population must be vaccinated for herd immunity to take effect, although it may be impossible to reach in the near future. 27 Substantial logistical and behavioral challenges remain. 28 The SARS-CoV-2 coronavirus continues to mutate at a faster pace than vaccines can be administered around the world. The possibility of a new variant evolving against which vaccines are ineffective is a reality. 29 South Korea, for example, is utilizing digital health to expand access to Korean healthcare services. 32 Malaysia has adjusted its medical tourism strategy to include aggressive publicity and branding campaigns that showcase Malaysia's healthcare quality, build confidence in Malaysia as a healthcare travel destination, and facilitate end-to-end infrastructure including digital adoption. 33 In January 2021, the Malaysia Healthcare Travel Council, the organization responsible for promoting medical travel to the country, launched a partnership with DoctorOnCall to accelerate the adoption of telemedicine and other technologies. 34 A surprise contender for medical tourists is Israel. As of 6 February 2021, the country had fully vaccinated more than 80% of its adult population over the age of 60. The Abraham Accords peace agreement signed between Israel and the UAE ushers in a new dynamic in the Gulf. 35 Bahrain has signed a similar deal with Israel. 36 Travel between Israel and Gulf Cooperation Council (GCC) countries is likely to increase, including travel to Israel for medical care.
NEW INTERNATIONAL COMPETITION
Once borders open and international travel re-emerges as a viable option for medical tourism, patients will begin to travel again. Pent-up demand for dental services, for example, will most likely cause a surge of traffic at the US-Mexico border as patients seek treatment as soon as it is seen to be safe. While some medical tourists have continued to cross the borders into other countries during periods when borders opened, or have ventured into other countries despite travel warnings, the numbers have been modest compared to pre-pandemic numbers. 37 Competition by existing providers as well as new entrants into the market is already taking place for the high dollar value, low volume complex medical services like specialized surgeries and cancer treatments, which had been sought by foreign patients in the United States before COVID-19. Healthcare providers in South Korea, Singapore and Malaysia, among others, are actively attempting to serve complex patients with urgent medical needs who cannot or do not want to travel to the United States. This market shift is supported by the foreign governments in these countries, which are actively promoting their countries as destinations for foreign patients, unlike the United States government, which takes no active role in abetting US hospitals' competitive position abroad. 38 Once new pathways are established or existing pathways of care are re-established, it will be difficult to shift those patterns of travel and trust. The US healthcare system experienced a similar shock post-9/11, when visitors from the GCC Region were prevented from entering the country or felt unwelcome because of the post-event anti-Muslim sentiment portrayed widely in the media. Some of these patients went to other destinations like Germany, and the region invested heavily in its own healthcare infrastructure.
International patient travel was slow to recover, but did rebound over the past 19 years, despite a global recession impacting both the United States and many countries that had residents seeking care abroad. The GCC Region is investing in healthcare workforce and infrastructure, [39][40][41] and consultants report an estimated 161 healthcare projects were underway at the end of 2020, with a combined value of $53.2 billion and will add more than 40,000 beds to the current capacity. 42 Furthermore, GCC countries continue to promote the region as a hub for medical tourism as part of their economic diversification plans.
LOOKING AROUND THE CORNER
As the world continues to struggle with the pandemic and many countries again implement measures to restrict movement within and across borders, the COVID-19 pandemic is compelling new ways of delivering healthcare services, leaving behind many of the traditional models. For the foreseeable future, borders will continue to open-close-open-close, discouraging people from making international travel plans. In the short term, US hospitals will look to specialization, improved services for their domestic markets, aggressive pricing, and innovative offers to insurance companies and self-insured employers to survive the pandemic before rebuilding international patient services to pre-COVID-19 levels. The future of international patient services across the globe will integrate and expand telemediated service delivery, particularly at the diagnostic and rehabilitation phases of the care continuum for patients who will travel. New cross-border relationships and collaborations are already being created to share knowledge and expertise, especially in the remote delivery of healthcare services. Robotic surgery, for example, reduces the need for surgeons to travel, opening the possibility for delivery of care beyond national borders.
Clinical trials among international partners will improve personalized medicine by diversifying pooled genomic information to combat disease. 43 Changes in cross-border exchange in health services in response to the pandemic may create opportunities for emerging economies, such as Russia, China, India and Brazil, which may recover more quickly than developed economies, to further develop their medical tourism markets. 44,45 Like other times in history, change has emerged from times of crisis. International patient travel will return at some point in the future. As a short-term solution, providers are forging new and innovative collaborations for delivering care to patients abroad that do not necessitate travel.
CONFLICTS OF INTEREST
Irving Stackpole is the President of Stackpole & Associates, which provides consultation related to medical tourism.
Elizabeth Ziemba is President and Founder of Medical Tourism Training, which provides training and consultation related to medical tourism. Tricia Johnson provides consultation to the US Cooperative for International Patient Programs, a non-profit association of international patient programs of US hospitals.
Beware More than Just the Yellow Snow! A Norovirus Outbreak Associated with a Ski Resort
A laboratory-confirmed Norovirus outbreak occurred at an Arizona ski resort February 2013. A case-control study was performed by the University of Arizona’s student outbreak response team (SAFER) in collaboration with county and state health departments. 46 cases and 25 controls were interviewed quickly. Having eaten at a restaurant at the ski resort was significantly associated with being a case (odds ratio [OR]=9.7) and in particular, the base restaurant (OR=24.9). While the outbreak was likely the result of environmental contamination in the restaurant, the consumption of French fries may have played a significant role. This investigation highlights a successful collaboration between academic and applied public health.
Introduction
Each year norovirus accounts for 90% of all non-bacterial gastrointestinal illnesses and 50% of acute outbreaks worldwide [1][2][3]. Norovirus is spread through the fecal-oral route [4], and transmission can occur person-to-person, through consumption of contaminated food or beverages, or from contact with fomites [5].
Transmission commonly occurs in closed environments such as cruise ships, day care centers, hospitals, nursing homes, and restaurants [2,6]. Symptoms generally last one to three days, and include nausea, vomiting, diarrhea, and headaches [4,[7][8][9]. Annually in the United States (U.S.), norovirus causes an estimated 21 million cases of gastroenteritis, more than 70,000 hospitalizations, and 800 deaths across all age groups [2,10].
The estimated annual cost in the U.S. is $2 billion, with a quarter of those costs being attributed to hospitalizations [2].
Each February, schools in Tucson, Arizona close to observe "Rodeo break". During this time, many families opt to spend the five-day break (Wed-Sun) at winter resorts upstate. In February 2013, there was a laboratory-confirmed norovirus outbreak linked to eateries at a ski resort.
Stool specimens collected from five people tested positive for norovirus GII + at the Arizona State Public Health Laboratory. Interviews for this outbreak investigation were conducted, in large part, by the Student Aid for Field Epidemiology Response (SAFER) team at the Mel and Enid Zuckerman College of Public Health [11], led by Pima County Health Department (PCHD), the primary jurisdiction of the cases.
This outbreak is an example of the successful partnership between academic and applied public health and highlights the type of surge capacity, both for interviewing and data analysis, available to health departments when these relationships exist.
Student-run outbreak investigation
The SAFER team was contacted by PCHD on Wednesday, February 27 after they received numerous calls from citizens self-reporting acute symptoms of vomiting, diarrhea and nausea. All cases reported travel history to a local ski resort the previous week. PCHD provided SAFER with an initial line list and outbreak questionnaire.
Within four hours of the request for assistance, the team had identified and interviewed 13 cases and 8 controls. By Monday, March 4, 46 cases and 25 controls had been interviewed. On Friday, March 8, the Arizona Department of Health Services (ADHS) and PCHD closed the investigation.
Statistical analyses
To determine whether secondary transmission was taking place, indicated by an onset time longer than the initial 24-48 hour incubation period for norovirus, case onset times were converted to hours since the onset of the earliest case. This was necessary because initial analyses did not indicate a particular point source of exposure (such as one common shared meal) and it was uncertain when during the five days the first exposure occurred. Summary statistics were calculated for both cases and controls, including: onset date; restaurant exposure; lodging type and occupancy; illness duration; onset time relative to the first case; and age.
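The onset-time conversion described above can be sketched in a few lines of Python; the timestamps and case labels below are hypothetical stand-ins for illustration, not the actual outbreak data:

```python
from datetime import datetime

# Hypothetical onset timestamps (illustrative only, not the study's data).
onsets = {
    "case_A": datetime(2013, 2, 21, 18, 0),
    "case_B": datetime(2013, 2, 22, 6, 0),
    "case_C": datetime(2013, 2, 23, 18, 0),
}

def hours_since_first(onsets):
    """Express each case's onset as hours after the earliest onset."""
    first = min(onsets.values())
    return {case: (t - first).total_seconds() / 3600.0
            for case, t in onsets.items()}

rel = hours_since_first(onsets)
# case_A -> 0.0 h, case_B -> 12.0 h, case_C -> 48.0 h
```

An onset much later than the 24-48 hour incubation window after the earliest case (case_C here) would be consistent with secondary rather than point-source transmission.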
Basic univariate analyses were performed, including Student's t-test or ANOVA (or their non-parametric equivalents) or simple logistic or linear regressions. For the multivariate analyses, logistic models were built using the case definition as the dependent variable, while for the onset time and illness duration dependent variables, multiple linear regression was used.
Due to the communicability of norovirus, all multivariate models were adjusted for clustering by shared lodging. When selecting the multivariate models, potential interaction variables and confounders were considered and tested for using standard techniques. Diagnostics and assumptions were tested on all models using goodness-of-fit tests. All analyses were performed using Stata versions 11 and 12 (StataCorp, College Station, Texas). For analyses of food items, combination foods, such as chili fries, were analyzed for each separate component as well as combined. All food analyses were also adjusted for age and shared lodging.
Results
Among the subjects interviewed, 46 (64.8%) were cases, and 25 (35.2%) were controls; 55 (77.5%) had eaten at one of the restaurants while 16 (22.5%) had not. There were 14 different lodging sites, with the number of people staying at each ranging from 1 to 13.
More than half (59.2%) of people reported having eaten at the base restaurant, while fewer than ten people reported eating at each of the other restaurants. The mean onset time following the first case was 47.3 hours, with an average illness duration of 47.8 hours (range 5-72). The mean age was 27.3 years; the median age of all people interviewed was 17 years (Table 1).
The epidemic curve followed a standard shape of a point source outbreak, with only a few cases occurring on the first and last days (4 and 5, respectively), and the bulk of cases occurring on the middle two days (n=21 and n=15 respectively). The majority of the cases had exposure to the base restaurant, as opposed to the other options ( Figure 1). Based on the univariate analyses, whether someone ate at a restaurant (OR=9.7, 95% confidence interval [CI]: 2.3-46.7) and lodging location (Fisher's exact probability=0.02) were significantly associated with reporting symptoms. Among the individual restaurants, the base and "Mid-Mountain 2" restaurants were significantly associated with whether someone became ill. None of the factors analyzed were strongly significantly associated with illness duration, although whether someone ate at any restaurant showed borderline significance (Wilcoxon p-value=0.0794). Age was significantly associated with onset time (coefficient=-0.352, 95% CI: -0.656 to -0.0477), meaning that younger people tended to get sick before those who were older.
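The univariate odds ratios reported above can be reproduced for any 2x2 exposure table with the standard Woolf (log-normal) confidence interval; the counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-normal) 95% CI.

    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical restaurant-exposure counts (illustrative only):
or_, lo, hi = odds_ratio_ci(a=40, b=15, c=6, d=10)
# OR is about 4.44 with a CI that excludes 1
```

A confidence interval whose lower bound stays above 1, as in the restaurant exposure reported above (OR=9.7, 95% CI: 2.3-46.7), is what marks the association as statistically significant.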
Food analyses
There were seven major food items reported by patrons of these restaurants (hamburgers, cheeseburgers, chili, French fries, cheesesteak, fry bread, and pizza). Analyses of foods found only one food to be statistically significant. Consumption of French fries was found to be significant in the univariate analysis (OR=4.36; 95% CI: 1.27-17.1) and when adjusted for both lodging and age (OR=4.25; 95% CI: 1.15-15.8). Only cases reported eating hot dogs (n=8), but the proportion of cases with this exposure was low (17%).
Model diagnostics
When the assumptions for logistic regression were assessed, we found the linearity of the log-odds assumption to be questionable, as the plot of the log-odds against age had a parabolic shape. The required sample size to include a quadratic term was not available. The small sample size made it difficult to judge the other diagnostics normally used for logistic regression. Linear models related to illness duration and onset time were found to meet the linearity assumption. Neither the constant variance nor the normality assumptions were met, but this may have been a product of the small sample size.
Discussion

Cases peaked on Friday and Saturday, with 80 percent of cases occurring on those two days. Eating at a restaurant was found to be a statistically significant risk factor for illness. Age was negatively and significantly associated with onset time. Eating at a restaurant was significantly associated with shorter illness durations; however, this may only be an anomaly in the data. It could also be that people who ate at restaurants developed symptoms and transmitted the illness at higher dosages (through environmental contamination of a shared bathroom/living space) to people staying in the same places. As mentioned in the methods section, due to the nature of pathogens such as norovirus, the multivariate models adjusted for clustering by lodging type, which improves the robustness of the standard errors of the parameters without including lodging type as a covariate.
Limitations
The small sample size limited the scope of the analysis, including: making it difficult to test individual foods; affecting the statistical power, including by forcing the use of non-parametric alternatives to commonly used tests; and making the assumptions for regression models difficult to check.
Unfortunately, as is common in many foodborne outbreaks, food items were not available for laboratory testing. The analyses found high odds of illness associated with consumption of French fries, although this could be due to handling practices or an environmental contaminant near the food area, rather than the food itself, given its cooking procedures.
This investigation, which covered multiple jurisdictions at the state and county levels, was performed in conjunction with the environmental health investigation. The environmental health portion of the investigation did not note any violations, but it was conducted approximately a week after the potential exposure period.
Conclusions
This outbreak investigation is an example of a timely response of a health department working with a student response team. The time from notification to the completion of the interviews was five days from Wednesday, February 27, to Monday, March 4 with the vast majority of interviews being conducted by the students on a voluntary basis. Students also volunteered to clean and analyze the data, submitting their results to the health department such that action could be taken as deemed appropriate by environmental services.
The analyses found the source of the outbreak was likely the base restaurant at the ski resort, followed by secondary transmission. French fries were identified as a possible source, but could not be confirmed through laboratory analyses. While at least one previous study [12] has found an outbreak of Staphylococcus aureus associated with fried chicken, it is more likely that the association with French fries found in our analysis was the result of environmental contamination, e.g., a sick food worker.
Circulating miR-15a and miR-222 as Potential Biomarkers of Type 2 Diabetes
Background In recent years, considerable attention has been paid to the role of microRNAs (miRs) as biomarkers in type 2 diabetes (T2D). The aim of the study was to evaluate the expression levels of miR-15a and miR-222 in diabetic, pre-diabetic, and healthy individuals. Materials and Methods Ninety individuals, who were referred to the Yazd diabetic center, were enrolled in this study and then classified into three groups as healthy, pre-T2D, and diabetic based on the clinical manifestations. Real-time PCR was performed to explore miR expression in the plasma samples of the studied population. The correlation between the biochemical characteristics and the expression of these miRs, as well as the specificity and sensitivity of different clinical markers in the healthy and pre-diabetic groups, was evaluated. Results miR-222 expression was significantly upregulated in the pre-T2D cases compared to the control subjects (P<0.001), while no significant difference was found between the pre-T2D and T2D groups (P>0.05). The expression of miR-15a was statistically downregulated in the pre-T2D and T2D subjects (P<0.05). The receiver operating characteristic (ROC) curve analysis of miR-15a expression with a cutoff point of 1.12 resulted in an area under the curve (AUC) of 85% (95% CI 0.865–0.912; P<0.001) with 84% and 85% sensitivity and specificity, respectively. Similarly, for miR-222, the cutoff point of 4.03 and an AUC of 86% (95% CI 0.875–0.943; P<0.001) discriminated between the pre-T2D and control subjects with a sensitivity and specificity of 86% and 87%, respectively. Moreover, miR-15a values showed a negative correlation with FG (R=−0.32, P=0.005), whereas miR-222 values were positively correlated with FG (R=0.25, P=0.03) in the pre-T2D group. Furthermore, miR-222 values were correlated with OGTT in the pre-T2D group (R=0.27, P=0.001). In addition, LDL-C had a negative correlation with miR-222 values in the pre-T2D group (R=−0.23, P=0.02).
Conclusion This study indicated that the plasma expression levels of miR-222 and miR-15a can be considered non-invasive, fast tools for separating pre-T2D individuals from their healthy counterparts. Accordingly, this information could be used to predict the development of the disease as well as to direct optimal therapy, thus improving outcomes in patients with diabetes.
Introduction
Type 2 diabetes (T2D) is a common disease whose prevalence has increased over the past years as a result of genetic predisposition and lifestyle alterations such as reduced physical activity and increased consumption of high-calorie foods. 1 Currently, T2D affects approximately 380 million individuals, a number expected to reach 592 million by 2035. 2,3 Based on the International Diabetes Federation atlas, the Middle East and North Africa have the highest prevalence rates of diabetes (10.9%). 4 In 2015, about 85% of 30,202 diabetic patients referred to the university-affiliated adult diabetes clinics were diagnosed with T2D. Also, despite the broad availability of medications in Iran, the frequencies of some diabetes complications, like chronic vascular problems, are relatively high among T2D individuals. 5 miRs belong to a class of small non-coding RNAs of 21-25 nucleotides that regulate the expression of various genes and play significant roles in several biological and pathological processes. 6 Numerous studies have demonstrated the expression of miRs in various tissues and cell types with tissue-specific expression patterns. 7 Furthermore, the roles of miRs in the regulation of metabolic pathways have been well documented: they play crucial roles in adipocyte differentiation, energy homeostasis, lipid metabolism, glucose-induced insulin secretion, and inflammation. 8 Therefore, any dysregulation in miR expression could affect several important cellular functions such as cell cycle regulation, apoptosis, differentiation, and maintenance of immune system cells, which consequently affect health and disease development. 9 Alterations in miRs are connected to the development, diagnosis, and prognosis of different diseases such as various cancers, 10,11 metabolic, 12 and cardiovascular diseases.
13,14 Nowadays, the individuals at risk of developing T2D are recognized through testing readily accessible serum factors such as their levels of glucose, cholesterol, lipoproteins, triacylglycerol, and HbA1c. Furthermore, several physical and lifestyle characteristics, including BMI, waist-to-hip ratio, blood pressure, sex, food consumption, physical inactivity, and smoking, can also be utilized to assess the risk of developing T2D. 15 Although biomolecules such as cytokines, adipokines, ferritin, and C-reactive protein have been addressed as potentially beneficial novel biomarkers, their predictive values are similar to the classic ones. 16,17 Besides, neither classic nor novel biomarkers can efficiently predict the emergence of T2D. 18 The application of miRs as biomarkers was first proposed for recognizing different types of cancers 19,20 and autoimmune diseases. 21 In the case of T2D, different population-based studies have found close associations between different miRs and T2D and its complications. [22][23][24] The first study suggesting a blood miR signature was conducted by Zampetaki et al through evaluating plasma samples. In this study, they identified a subset of five miRs (miR-29b, miR-28-3p, miR-223, miR-15a, and miR-126) that were dysregulated in 80 pre-diabetic participants or participants with type 2 diabetes. 25 Subsequently, miR profiling was performed by Kong et al, who discovered an elevation in the expression of seven miRs (miR-146a, miR-30d, miR-29a, miR-124a, miR-34a, miR-9, and miR-375) in T2D cases compared to those who were susceptible to T2D. 26 Karolina et al assessed the miRNAs of the whole blood of T2D patients and detected an increase in their miR-192, miR-150, miR-27a, miR-375, and miR-320a levels. Moreover, they also observed a strong association between elevated fasting glucose levels and increments in the levels of miR-320a and miR-27a.
27 These pioneering studies further supported the prospect of miRs as biomarkers for T2D. Afterward, different experiments suggested various miRs, such as miR-126 and miR-23a, as potential biomarkers of T2D in the general population. 17,28 In this context, the current study aimed to assess the expression levels of miR-222 and miR-15a in healthy, pre-diabetic, and diabetic individuals to examine the applicability of the circulating levels of the aforementioned miRs as biomarkers for estimating the risk of T2D development.
Materials and Methods Patients
Ninety individuals aged between 35 and 80 years old were enrolled by the Yazd Diabetes Research Center. Afterward, they were equally classified into 3 groups as follows: healthy (fasting glucose (FG) ≤5.4 mmol/L; oral glucose tolerance test (OGTT) as 2-h post-load glucose (2hPG) <7.8 mmol/L); pre-T2D (FG 5.4-6.9 mmol/L; 2hPG 7.8-11.0 mmol/L); and diabetes (FG ≥7.0 mmol/L; 2hPG >11.1 mmol/L). 29 Written informed consent was obtained from all the individuals before participating in this research. The study procedure was approved by the ethics committee of the Shahid Sadoughi Medical University (Ethics code: IR.SSU.SPH.REC.1397.028) in terms of the Declaration of Helsinki. 30 Control subjects had a body mass index (BMI) <30 kg/m2, had no background of diabetes, and were not on medications disturbing glucose metabolism. Moreover, the exclusion criteria were as follows: consumption of anti-diabetic medication; systemic acute or chronic inflammatory diseases; acute respiratory infection; being under physical treatment; history of malignancy or liver cirrhosis; BMI higher than 40 kg/m2; and diabetic complications including retinopathy, nephropathy, and cardiovascular diseases. Standardized methods were applied for anthropometric values including weight (kg) and height (m). BMI was then calculated as weight (kg)/[height (m)]2.
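The three glycemic categories above can be written as a small decision rule. This sketch assumes a subject is placed in the most severe category that either measurement supports, which the quoted overlapping ranges do not spell out explicitly; the function name is ours:

```python
def classify(fg_mmol_l, two_h_pg_mmol_l):
    """Assign a glycemic category from fasting glucose (FG) and 2-h post-load
    glucose (2hPG) in mmol/L, using the cutoffs quoted in the text. The most
    severe category supported by either measurement wins (an assumption)."""
    if fg_mmol_l >= 7.0 or two_h_pg_mmol_l >= 11.1:
        return "diabetes"
    if fg_mmol_l > 5.4 or two_h_pg_mmol_l >= 7.8:
        return "pre-T2D"
    return "healthy"

print(classify(5.0, 6.5))   # -> healthy
print(classify(6.2, 9.0))   # -> pre-T2D
print(classify(7.5, 12.0))  # -> diabetes
```

Note that a subject with normal FG but elevated 2hPG (e.g. 5.2 and 8.0) is classified pre-T2D under this rule, consistent with OGTT being an independent diagnostic criterion.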
Biochemical Analysis
Venous blood samples were collected from all the studied subjects after 8 h of fasting, which were then handled for biochemical and molecular investigations. For plasma preparation, 2 mL blood was acquired on EDTA-containing tubes followed by centrifugation at 1000 × g for 5 min. Fasting and plasma glucose levels were measured using routine laboratory tests. Subsequently, glycated hemoglobin (HbAlc), total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), and lowdensity lipoprotein cholesterol (LDL-C) were measured as previously reported. 31
Plasma Preparation and RNA Isolation
Circulating miRs were purified from plasma samples acquired in EDTA-containing blood tubes. Venous blood samples of the subjects were taken and then stored on ice until centrifugation. To separate the plasma from the buffy coat fraction and the erythrocytes, the samples were immediately centrifuged at 2,000 × g for 10 min at 4°C. Subsequently, the supernatant phase (the plasma) was carefully removed and stored at −80°C until further analyses. The plasma samples were then thawed on ice, and 500 μL of each plasma sample was removed for RNA extraction. Next, a GeneAll kit (General Biosystems, Seoul, Korea) was utilized to extract total RNA, including long ncRNA and small ncRNA, in terms of the manufacturer's instructions.
cDNA Synthesis and Real-Time qPCR Assay
RNA purity was assessed by the A260/A280 ratio using a Nanodrop spectrophotometer (Thermo Scientific, Wilmington, USA). Afterward, 10 μL of RNA was added to each reaction. miRs were reverse transcribed using the TaqMan MicroRNA Reverse Transcription Kit (Thermo Scientific) in terms of the manufacturer's manual. Relative expression was calculated by the 2^-ΔΔCt method, where SNORD47 was utilized as the housekeeping gene. Primer sequences of miR-15a, miR-222, and SNORD47 were 5-TAGCAGCACATAATGGTTTGTG-3, 5-CTCAGTAGCCAGTGTAGATCCT-3, and 5-ATCACTGTAAAACCGTTCCA-3, respectively. Also, universal reverse primers were purchased from the Bonyakhte company (Bonyakhte, Tehran, Iran). In addition, RT-PCR was accomplished using 7 μL of SYBR Green PCR Master Mix (Applied Biosystems), 0.5 μL of primers, 5 μL of cDNA (100 ng/μL), and 7 μL of RNase-free water for each reaction, with a final volume of 20 μL. RT-PCR reactions included an initial denaturation for 10 min at 94°C, followed by 40 cycles of denaturation for 30 s at 94°C and annealing and extension for 30 s at 60°C. The melting curve was then considered by increasing the temperature from 75°C to 95°C to ensure that no primer dimers or undesirable genomic DNA were amplified. All the reactions were conducted in triplicate on 48-well plates (Applied Biosystems, Step One Plus, USA).
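The 2^-ΔΔCt calculation used for relative expression is simple to sketch; the Ct values below are illustrative only, and the function name is ours:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^-ddCt (Livak) method, with the housekeeping gene
    (SNORD47 in this study) as the reference.
    dCt  = Ct(target) - Ct(reference), computed per condition;
    ddCt = dCt(sample) - dCt(control)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target amplifies 2 cycles later in the patient sample than in
# the control, with the reference gene unchanged -> 4-fold downregulation.
print(relative_expression(28.0, 20.0, 26.0, 20.0))  # -> 0.25
```

A value below 1 indicates downregulation relative to the control (as reported here for miR-15a), and a value above 1 indicates upregulation (as for miR-222).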
Statistical Analysis
Characteristics of the studied subjects were reported as mean ± standard deviation (SD). Pearson's correlation analysis was used to compare the differences among logarithmically transformed clinical values (FG, OGTT, TG, HDL, LDL, and HbA1c) and miR expressions. miR expression values showed a normal distribution as established by the Shapiro-Wilk test. Moreover, a one-way ANCOVA test was applied to compare the adjusted means of miR expressions among the three groups. Statistical analysis was conducted using IBM SPSS Statistics, version 19 (IBM Corp, Armonk, NY, USA) at a statistical significance level of P-value <0.05. ROC curves and the area under the ROC curve (AUC) were constructed to assess the stratification capability of plasma miRs. Based on the ROC analysis, the best statistical cutoff values of plasma miRs were computed, and the sensitivity and specificity were then measured for the particular cutoff points. AUCs were also calculated for other clinical features, as shown in Table 1.
Results
Table 2 shows the clinical and biochemical parameters of the studied individuals. No significant differences were found in the age, sex, BMI, and lipid profiles of the studied participants, while HbA1c and FG were significantly higher in the T2D patients compared to the healthy and pre-T2D individuals (P<0.001). As shown in Table 3 and Figure 1, miR-15a expression was markedly downregulated in the plasma samples obtained from the pre-T2D and T2D cases compared to the control subjects (P<0.001). However, no significant difference was found between the pre-T2D and T2D groups (P>0.05). Likewise, the difference in the expression of miR-222 was statistically significant among these three groups: miR-222 expression was higher in the T2D and pre-T2D groups in comparison with the healthy group (P<0.001), while there was no statistically significant difference between the T2D and pre-T2D groups (P>0.05).
The diagnostic assessment of miR-15a and miR-222 expressions was conducted based on the AUC, specificity, and sensitivity by drawing the ROC curve. To discriminate the healthy individuals from the pre-T2D subjects, the optimal cutoff point for miR-15a was 1.12, with an AUC of 85% and sensitivity and specificity of 84% and 85%, respectively; for miR-222, the cutoff point was 4.03, with an AUC of 86% and sensitivity and specificity of 86% and 87%, respectively (Figure 2). As indicated in Table 1, the AUC for the combination of the miRs was 73% (95% CI 0.822-0.903), with a sensitivity and specificity of 73% and 70%, respectively, when comparing the healthy and pre-diabetic groups. In the same comparison, the AUCs for FG, BMI, TG, and HbA1c were 94%, 62%, 67%, and 85%, respectively.
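The quantities reported here (AUC, optimal cutoff, sensitivity, specificity) can be computed directly from raw expression scores. The sketch below uses the Mann-Whitney relation for the empirical AUC and Youden's J for the cutoff; the paper does not state which cutoff criterion was used, so Youden's J is our assumption, and the scores are made up:

```python
def roc_auc_and_best_cutoff(scores_pos, scores_neg):
    """Empirical ROC summary: AUC via the Mann-Whitney relation, and the
    cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Assumes higher scores indicate the positive (e.g. pre-T2D) class."""
    # AUC = P(score_pos > score_neg) + 0.5 * P(tie) over all pairs.
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    auc = (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(p >= cut for p in scores_pos) / len(scores_pos)
        spec = sum(n < cut for n in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return auc, best_cut

# Illustrative miR-222-like scores (higher in pre-T2D than in controls):
pre_t2d = [4.5, 5.1, 3.9, 6.0]
healthy = [1.2, 2.0, 3.5, 0.8]
print(roc_auc_and_best_cutoff(pre_t2d, healthy))  # -> (1.0, 3.9)
```

For a score that separated the groups as well as miR-222 did here, this procedure would recover a cutoff near the reported 4.03 with the corresponding sensitivity/specificity pair.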
Discussion
This study is the first experiment to evaluate the expression of miR-15a and miR-222 in pre-T2D and T2D subjects among the Iranian population. Altogether, these results provide an association between miR-15a and miR-222 expression levels in diabetic subjects. Levels of miR expression were also found to be significantly altered in the pre-T2D subjects. Besides, a significant association was also observed between various biochemical parameters and the expression levels of miR-15a and miR-222. In a study performed by Zampetaki et al, conducted in a Caucasian Italian population, miR-15a was also significantly decreased in plasma before the manifestation of T2D. 25 In that study, 92% of diabetic cases were correctly diagnosed using expression levels of miR-15a as well as the levels of the other miRs (miR-126, miR-320, miR-223, and miR-28-3p). In another study by Flowers et al on an Asian Indian population, a negative relationship was found between plasma miR-15a levels and glycemic progression. 32 Also, an in vitro study revealed a significantly higher miR-15a level after 1 h of glucose exposure, which declined upon long-term exposure. 33 In a study by Houshmand et al in Denmark, the expression of miR-15a was increased in the skeletal muscle of gestational diabetic and type 1 diabetic subjects compared to healthy individuals, and maternal 2-h post-OGTT glucose levels were significantly associated with miR-15a levels. 34 This study also indicated that hyperglycemia can increase offspring miR-15a expression. Moreover, a decreased miR-15a level was also reported in the skeletal muscle of hyperglycemic T2D patients in the Dutch population. 34 Conversely, Wang et al reported that upregulation of miR-15a was associated with the odds of T2D independent of other risk factors in native Swedish samples.
35 Concerning miR-222 expression, Shi et al found an increase in miR-222 expression in omental adipose tissues from patients with gestational diabetes mellitus in a Chinese population. 36 Regarding the contribution of obesity to diabetes development, Ortega et al conducted a study on an obese Spanish population and found a significant overexpression of miR-222. 37 In that study, a discriminant function of miR-222 along with three other miRs (miR-15a, miR-520c-3p, and miR-423-5p) was specific for morbidly obese patients, with a 94% diagnostic accuracy. Furthermore, Li et al showed an upregulation of miR-222 serum levels in T2D Chinese women. 38 On the contrary, an investigation of the miR expression pattern in the Italian population performed by De Candia et al revealed a decline in the miR-222 expression of T2D subjects, which was inversely correlated with glycated hemoglobin and modulated in the plasma samples of diabetic subjects. 39 The inconsistent results concerning miR-15a and miR-222 expression in previously performed studies can be attributed to different factors. Different population backgrounds (i.e., genetic and environmental factors) can affect miR expression; for example, in a study by Wang et al, several miRs, including miR-15a, showed different expression patterns in Iraqis and Swedes living in a similar environment. 35,40 On the other hand, the high sequence similarity of miR family members and the accuracy of various miR measurement platforms are other factors accounting for some of the disparities. Additional challenges could also result in more inconsistency, such as the definition of disease stages and the specific tissue in which miR quantification is conducted.
As an example, different miR expression results could be obtained by investigating plasma, serum, and whole blood in a group of T2D subjects of the same ethnicity. 41,42 Overall, despite the high number of studies aiming to identify miR involvement in diabetes, only a few have repeatedly discovered a subset of promising miRNAs. Indeed, most of the reported studies cannot be reproduced using other sets of specimens. Thus, further studies are required to specify the diagnostic and prognostic potential of miRs as T2D biomarkers. So far, several studies have proposed various miRs, such as miR-23a, miR-15a, and miR-375, as potential biomarkers for detecting T2D in the general population. 28,43,44 Only miR-126 has been repeatedly reported in different studies as a potential predictor of both pre-T2D and T2D individuals. 45,46 Herein, we reported a remarkable difference between the expression values of miR-15a and miR-222 in pre-T2D subjects compared to healthy individuals. ROC analysis demonstrated considerable sensitivity and specificity of plasma miR-15a and miR-222 in distinguishing pre-T2D subjects from healthy individuals. The AUC pertinent to the combination of miR-15a and miR-222 was 73%, which shows a lower stratification power compared to miR-15a and miR-222 individually. The AUC of HbA1c is comparable with those of miR-15a and miR-222, while the AUCs for BMI and TG are lower than those of the miRs. We also observed associations between miR-15a and miR-222 and FG, OGTT, and LDL-C, which implicates these miRs in diabetes development, although more studies are needed to further clarify their roles.
Concerning the role of miR-222 in the pathophysiology of diabetes, it has been identified that miR-222 regulates estrogen receptor (ER) and glucose transporter 4 (GLUT4) protein levels in 3T3-L1 adipocytes. Moreover, using antisense oligonucleotides to silence miR-222 can lead to an increment in ER expression, GLUT4 translocation from the cytoplasm to the cell membrane, glucose uptake, and an improvement in insulin sensitivity. 36 Furthermore, Tsukita et al showed that bone marrow-secreted miR-222 following bone marrow transplantation in streptozotocin-induced diabetic rats led to β-cell regeneration as well as amelioration of hyperglycemia. 47 Additionally, overexpression of miR-222 in a normal human liver cell line inhibited Coenzyme A oxidase 1, the key enzyme in fatty acid β-oxidation, preventing β-oxidation of fatty acids and remarkably augmenting the triglyceride content. 48 Alteration in miR-15a expression is associated with various biological processes. In this regard, some studies suggested a critical role for miR-15a in endothelial cell function and angiogenesis in peripheral vascular and cerebrovascular tissues. 49 Upregulation of miR-15a reduces angiogenesis in pro-angiogenic cells in vitro by targeting vascular endothelial growth factor A (VEGFA), which is associated with myocardial ischemia/reperfusion injury in mice. Moreover, the inhibition of miR-15 in cerebral vascular endothelial cells boosts pro-angiogenic activity in animal models and in vitro studies. 50 Besides, the downregulation of miR-15a has been found to be correlated with genomic instability and postnatal mitotic arrest of cardiomyocytes, as reported in several studies. 51 Long-term glucose exposure results in decreased miR-15a expression, which is associated with decreased insulin expression and biosynthesis. In the same study, miR-15a upregulation led to the promotion of insulin gene expression in mouse insulinoma cells.
33 Notably, the conventional biochemical indicators for T2D, including levels of glucose and HbA1c, may predict the initiation of T2D a few years prior to disease emergence. However, these biomarkers are not specific for T2D, which makes them impractical for evaluating disease predisposition in the general population. Hence, the discovery of new biomarkers could help identify those at risk, leading to appropriate management of T2D when necessary. The present clinical biomarkers can predict the development of diverse medical disorders such as myocardial infarction, diabetes, and cancer. 46 The most recent trend in biomarker discovery is searching for sensitive biomarkers applicable for differentiating affected individuals from their healthy counterparts as well as specifying different stages of the disease. Another anticipated merit of these biomarkers is their availability, as they can be readily obtained from body fluids such as saliva, urine, or blood. Moreover, the stability of circulating miRs is remarkable in different situations, such as tolerance to ribonucleases and freezing/thawing cycles. 52 For example, plasma or serum specimens can be kept at −80°C for several months with no remarkable degradation. 53
Conclusions
Overall, this study investigated the expression patterns of miR-15a and miR-222 for possible discernment among healthy, pre-T2D, and T2D subjects. The results revealed different expression patterns for miR-15a and miR-222 in the Iranian population, which can be considered a valuable cornerstone for upcoming studies on the biomarker capability of these miRs for T2D.
Thrombospondin expression in myofibers stabilizes muscle membranes
Skeletal muscle is highly sensitive to mutations in genes that participate in membrane stability and cellular attachment, which often leads to muscular dystrophy. Here we show that Thrombospondin-4 (Thbs4) regulates skeletal muscle integrity and its susceptibility to muscular dystrophy through organization of membrane attachment complexes. Loss of the Thbs4 gene causes spontaneous dystrophic changes with aging and accelerates disease in 2 mouse models of muscular dystrophy, while overexpression of mouse Thbs4 is protective and mitigates dystrophic disease. In the myofiber, Thbs4 selectively enhances vesicular trafficking of dystrophin-glycoprotein and integrin attachment complexes to stabilize the sarcolemma. In agreement, muscle-specific overexpression of Drosophila Tsp or mouse Thbs4 rescues a Drosophila model of muscular dystrophy with augmented membrane residence of βPS integrin. This functional conservation emphasizes the fundamental importance of Thbs proteins as regulators of cellular attachment and membrane stability and identifies Thbs4 as a potential therapeutic target for muscular dystrophy. DOI: http://dx.doi.org/10.7554/eLife.17589.001
Introduction
Muscle degenerative diseases such as muscular dystrophy (MD) are most commonly caused by mutations in genes that are part of the dystrophin-glycoprotein (DGC) complex or the integrin complex of proteins (Grounds et al., 2005;McNally and Pytel, 2007). In addition, proper post-translational processing and trafficking of these complexes to the sarcolemma are essential to form a molecular attachment network between the myofilament proteins within the myofibers and the basal lamina and extracellular matrix (ECM) outside the cell (Goddeeris et al., 2013;Liu et al., 2012;Xu et al., 2009). This attachment network provides critical structural support to the plasma membrane (sarcolemma) to withstand contractile forces (Burr and Molkentin, 2015;Grounds et al., 2005;Gumerson and Michele, 2011;Lapidos et al., 2004). When this attachment network is deficient in MD, membrane ruptures occur leading to intracellular calcium influx that causes myofiber necrosis, an inflammatory response, fibrosis and fatty tissue replacement, and ultimately muscle functional loss and death (Burr and Molkentin, 2015;Gumerson and Michele, 2011;Lapidos et al., 2004). Skeletal muscle is perhaps the most sensitive of all tissues to genetic mutations in genes that impact cellular attachment complexes or membrane repair capacity, in part because of the dynamic changes in length that occurs in each myofiber during contraction (Burr and Molkentin, 2015;Grounds et al., 2005;McNally and Pytel, 2007).
Thrombospondins (Thbs) comprise a family of 5 genes in mammals that encode secreted matricellular proteins involved in diverse biologic processes (Adams and Lawler, 2011;Schellings et al., 2009). The thrombospondin family consists of two subgroups based on their sequence conservation and oligomeric structure. Thbs3, Thbs4 and Thbs5 form pentamers and are the most similar to Thbs genes found in lower organisms (Adams and Lawler, 2011). Thbs1 and Thbs2 form trimers and have evolved additional domains such as a type 1 repeat important for transforming growth factor-b binding and a region that affects angiogenesis (Adams and Lawler, 2011;Schellings et al., 2009). Drosophila contains a single thrombospondin gene (Tsp) that forms pentamers, and when deficient causes developmental lethality due to disruption in muscle and tendon attachment within the body wall segments of the embryo (Adams and Lawler, 2011;Subramanian et al., 2007).
While traditionally characterized as a secreted ECM or matricellular protein over the past 3 decades (Adams and Lawler, 2011;Schellings et al., 2009), Thbs can also function within the cell, and in some systems this appears to be their primary role (Ambily et al., 2014;Baek et al., 2013;Brody et al., 2016;Duquette et al., 2014;Lynch et al., 2012;McKeown-Longo et al., 1984;Posey et al., 2014). For example, Thbs4 was recently shown to have a critical cardioprotective function from within the endoplasmic reticulum (ER), where it mediates an adaptive ER stress response (Brody et al., 2016;Lynch et al., 2012). The traditional ER stress response involves sensing of calcium and unfolded or damaged proteins within the ER through the calcium-binding chaperone protein BiP (GRP78), which binds/regulates at least 3 distinct stress response pathways initiated by either PKR-like ER kinase (PERK), inositol-requiring enzyme 1a (IRE1a) or activating transcription factor 6 (ATF6), each resident within the ER membrane (Glembotski, 2007;Mori, 2009). These 3 ER stress response mediators initiate a cascade of signaling that alters protein synthesis and other features of cellular adaptation to stress or protein unfolding and aggregation (Glembotski, 2007;Mori, 2009). Here, Thbs4 directly binds the ER luminal domain of ATF6a to promote its shuttling to the Golgi and then the nucleus, thereby inducing genes underlying adaptive aspects of the ER stress response (Brody et al., 2016;Lynch et al., 2012).
eLife digest
Muscle cells, also known as myofibers, need to be robust in order to withstand the physical stresses of contracting and relaxing. As a result, the cell surface membrane that surrounds myofibers is more strongly anchored to its surroundings than that of other cells. Muscular dystrophies are a group of muscle-wasting disorders that usually arise when this surface membrane becomes less stable. For example, mutations that affect a protein called dystrophin-glycoprotein or integrin protein complexes can cause muscular dystrophy since these proteins normally keep the membrane anchored and stable when the muscle contracts and relaxes.
When myofibers in mammals become injured, as is the case during muscular dystrophy, they produce more proteins called thrombospondins -with thrombospondin-4 being the most common. However, until now it was not clear what these proteins did in muscle cells.
Vanhoutte et al. hypothesized that thrombospondin-4 may protect injured myofibers and tested their theory by first deleting the gene for thrombospondin-4 from mutant mice that were predisposed to develop muscular dystrophy. This worsened the muscle wasting in the mutant mice, and furthermore, deleting the gene for thrombospondin-4 also caused otherwise normal mice to develop muscular dystrophy in their old age. Conversely, when Vanhoutte et al. artificially increased the levels of thrombospondin-4 in the myofibers, it protected the mice against muscular dystrophy. Additional experiments conducted in fruit flies demonstrated that the protective effects of thrombospondin are conserved or similar in insects too. Lastly, biochemical experiments in mouse and rat cells showed that thrombospondin-4 aids dystrophin-glycoproteins and integrins in getting to the cell surface membrane, increasing its stability.
Overall these findings provide a clearer picture of the molecular underpinnings of muscular dystrophies. In the future, more experiments will have to focus on exactly how thrombospondins stabilize and direct dystrophin-glycoproteins and integrins to the cell surface membrane.
Thbs proteins move through the secretory pathway where they appear to facilitate secretion of ECM proteins or perhaps chaperone protein complexes to the cell membrane (Adams and Lawler, 2011). Once secreted, Thbs proteins transiently or permanently reside in the ECM where they interact with fibronectin, collagens and proteoglycans (Adams and Lawler, 2011;Frolova et al., 2014, 2012;Hauser et al., 1995;Södersten et al., 2006). Thbs proteins are also recycled back into the cell through the low-density lipoprotein receptor-related protein (LRP) (Wang et al., 2004). One critical feature of the Thbs family is that each member is induced following injury events or in response to processes requiring tissue growth, healing and remodeling. Interestingly, Thbs4 is largely restricted to cardiac and skeletal muscle where its expression is induced with injury or disease (Adams and Lawler, 2011;Chen et al., 2000;Frolova et al., 2014, 2012;Hauser et al., 1995;Lynch et al., 2012;Schellings et al., 2009;Södersten et al., 2006). In addition, markers of ER stress are upregulated during progression of skeletal muscle disease and MD (Lavery et al., 2008;Moorwood and Barton, 2014).
Here we observed that in response to MD in skeletal muscle, Thbs4 mRNA and protein are induced. Overexpression of Thbs4 in skeletal muscle of transgenic (Tg) mice protected against MD, while mice lacking Thbs4 (Thbs4-/-) showed signs of spontaneous MD with aging. Mechanistically, Thbs4 directs an intracellular vesicular trafficking network that promotes greater stability of the DGC and integrin membrane attachment complexes at the sarcolemma of skeletal muscle fibers. This function of Thbs is conserved in Drosophila, as overexpression of either mouse Thbs4 or Drosophila Tsp in muscle rescues the MD that occurs in Drosophila deficient in its δ-sarcoglycan-related gene (Allikian et al., 2007).
Thbs4 augments adaptive ER stress signaling in skeletal muscle and mitigates MD
In agreement with previous findings, Thbs4 mRNA is induced in muscle biopsies from human patients with Becker MD, Duchenne MD, and limb-girdle MD (LGMD) (Figure 1A; Figure 1-figure supplement 1A) (Chen et al., 2000). We next turned to 2 different mouse models of MD: one due to deletion of the δ-sarcoglycan (Sgcd) gene to model LGMD2F, and a second due to defective dystrophin expression resulting from the mdx mutation that models Duchenne MD (Durbeej and Campbell, 2002). Thbs4 protein is induced in skeletal muscle of each mouse model at six weeks and three months of age, along with induction of an ER stress response associated with greater cleaved ATF6α-N (nuclear form) and increased total BiP levels (Figure 1B,C). To model the known increase in Thbs4 protein that occurs in dystrophic skeletal muscle we generated Tg mice with Thbs4 protein overexpression specific to skeletal muscle (Figure 1D). High levels of Thbs4 protein overexpression were observed in fast-twitch containing muscles such as the quadriceps and gastrocnemius, with intermediate levels in the diaphragm and very low levels in the soleus, while the heart lacked expression (Figure 1D). Immunofluorescent analysis of tissue revealed that Thbs4 protein was undetectable in uninjured skeletal muscle, while the transgene produced abundant expression that co-localized with calreticulin to a vesicular network on the periphery of the myofibers of paraffin-embedded quadriceps and was also clearly inside of collagen I staining that marks the ECM of cryo-embedded quadriceps (Figure 1E). Furthermore, although Thbs4 protein localization appeared slightly different between paraffin- and cryo-embedded skeletal muscle of the Sgcd-/- mouse, induction of endogenous Thbs4 again showed localization within the vesicular network inside the myofibers, and only within limited regions outside of myofibers where fibrotic tissue deposition was prominent (Figure 1E).
In agreement with the above observations of Thbs subcellular localization, biochemical analysis revealed that Thbs4 was high in the type of glycosylation that typifies ER resident proteins, but it also contains glycosylation that is observed on proteins that transit through the Golgi en route to being deposited in the ECM (Figure 1-figure supplement 2A).
Figure 1. Thbs4 is induced in dystrophic skeletal muscle and its overexpression augments ER activity and vesicle content. (A) Thbs4 mRNA levels in human skeletal muscle biopsies from normal subjects or patients with Becker MD (BMD; n = 5), Duchenne MD (DMD; n = 10) or 2 different types of limb-girdle MD (LGMD; n = 10 for both). *p<0.05 vs. normal (n = 18) by Student's t test. Data are presented as mean ± SEM. Full analysis including all 11 human muscle diseases is shown in Figure 1-figure supplement 1A. (B,C) Western blot for the expression of Thbs4, ATF6α-N (50 kDa, nuclear) and BiP in the quadriceps of WT, Sgcd-/- and mdx mice at six weeks (w) and three months (mo) of age (n = 4 biological replicates). (D) Schematic diagram showing the skeletal muscle-specific transgene to overexpress Thbs4 and (lower) Western blots for Thbs4 or gapdh control from WT and Tg mice at 6 w of age from Quad, quadriceps; Gas, gastrocnemius; Sol, soleus; Diaph, diaphragm; and heart (n = 2 biological replicates). (E) Upper micrographs represent co-immunofluorescent labeling of intracellular Thbs4 (green) with calreticulin (red) on paraffin-embedded quadriceps (Quad.) of WT, Thbs4-Tg and Sgcd-/- mice at 3 mo of age (scale bar = 20 μm). Arrowheads indicate co-localization of Thbs4 with calreticulin in intracellular vesicles in the myofibers. Lower micrographs represent co-immunofluorescent labeling of Thbs4 (green) with collagen I (red) on cryo-embedded Quad of WT, Thbs4-Tg and Sgcd-/- mice at 3 mo of age (scale bar = 50 μm). Arrowheads indicate co-localization of Thbs4 with collagen I in the extracellular milieu; the star marks a myofiber with both intra- and extracellular Thbs4 labeling from a diseased muscle. Nuclei are visualized in blue. Representative images of 4 mice per genotype are shown.
(F) Western blot analysis of Thbs4 and the ER-stress proteins ATF6α-N (50 kDa, nuclear), BiP, PDI, calreticulin (Calret.), and Armet in 6 w old WT, Thbs4-Tg and Sgcd-/- quadriceps (n = 4 biological replicates). (G) Transmission electron microscopy in WT versus Thbs4 Tg quadriceps at 3 mo of age showing a massive expansion of intramyofibrillar and subsarcolemmal ER and associated vesicles with Thbs4 overexpression (arrowheads, scale bar = 2 μm). Representative images of 2 mice per genotype are shown. (H) Immunogold electron microscopy shows that Thbs4 (6 nm gold particles; yellow arrows) robustly localizes to the expanded sub-sarcolemmal vesicular compartment in Thbs4-Tg quadriceps, compared to endogenously expressed Thbs4 in WT quadriceps. Representative images of 2 mice are shown. Scale bar = 50 nm. DOI: 10.7554/eLife.17589.003 The following figure supplements are available for figure 1:

Exogenously applied recombinant Thbs4 is rapidly taken up by cultured C2C12 myoblasts and myotubes, and at least in the case of myoblasts is transported to rab7-positive late endosomes, potentially explaining why Thbs4 protein is low or undetectable in the ECM of healthy Thbs4 Tg muscles (Figure 1-figure supplement 3). Similar to the results observed with collagen I, co-labeling with another ECM/matricellular protein, periostin, again showed that Thbs4 could co-localize to the ECM region in Sgcd-/- diseased myofibers, although under non-diseased conditions overexpressed Thbs4 was again only appreciably observed intracellularly under the sarcolemma within a peripheral vesicular pattern (Figure 1E; Figure 1-figure supplement 2C). Hence, although our observations do not exclude the possibility that non-muscle cells might also express Thbs4, our data collectively identify the myofiber as an important cellular source of Thbs4 expression, secretion and re-uptake.
Careful analysis of other markers of the ER compartment and ER stress showed that Thbs4 overexpression in skeletal muscle induced a profile very similar to diseased skeletal muscle in Sgcd-/- mice, with increased levels of nuclear ATF6α, BiP, protein disulfide isomerase (PDI), calreticulin and Armet, as compared to WT muscle (Figure 1F; Figure 1-figure supplement 1D). Remarkably, transmission electron microscopy and immunogold detection revealed that Thbs4 overexpression in skeletal muscle caused a dramatic induction of sub-sarcolemmal and intramyofibrillar ER and post-ER vesicles that contained Thbs4 protein (Figure 1G,H; Figure 1-figure supplement 2D). These Thbs4-dependent vesicles were highly uniform in size and more electron dense compared with similar vesicles in subsarcolemmal regions from WT muscle. Future studies will investigate the nature of these Thbs4-expanded vesicles and their composition based on known variables (Malhotra and Erlmann, 2015; Paczkowski et al., 2015).
To determine if Thbs4 induction in MD was adaptive or maladaptive we first crossed the Thbs4 Tg into both the Sgcd-/- and mdx backgrounds. Importantly, Thbs4 overexpression itself in skeletal muscle caused no histopathology or functional defects compared to WT mice at three and 12 months of age. More importantly, Thbs4 overexpression significantly reduced multiple histopathological hallmarks of dystrophic disease, including elevated serum creatine kinase (CK) levels, reduced myofiber degeneration/regeneration cycles as marked by reduced centrally nucleated myofibers, reduced fibrotic remodeling and less functional decline in skeletal muscle at both three and 12 months of age in both Sgcd-/- and mdx mice, compared with each dystrophic model alone. The sarcolemma of dystrophic myofibers is weak and frequently ruptures, which can be assessed in vivo by Evans blue dye (EBD) uptake into muscle fibers after systemic injection (Goonasekera et al., 2011; Lapidos et al., 2004). Here, the percentage of myofibers with ruptured membranes after forced treadmill running was significantly reduced in Sgcd-/- and mdx mice that contained the Thbs4 Tg, compared with Sgcd-/- and mdx mice alone, while no EBD uptake was observed in WT or Tg muscle. This dramatic protection from MD observed in 2 mouse models of this disease with Tg-mediated overexpression of Thbs4 led us to investigate whether Thbs4 overexpression delivered by a gene therapy-related approach, which bypasses possible developmental effects of Tg overexpression, would be sufficient to reduce acute dystrophic disease. Hence, we performed a study in Sgcd-/- mice with an adeno-associated virus serotype-9 (AAV9)-Thbs4 vector, which was injected into the gastrocnemius of three-day-old neonates, followed by harvesting at six weeks of age to assess histopathology (Figure 2-figure supplement 4A). Littermates injected with an eGFP-expressing AAV9 were used as a control.
In agreement with our previous findings, this approach also resulted in abundant expression of either the control eGFP protein or a 5-fold increase in Thbs4 in the muscle.
Loss of Thbs4 predisposes to MD
The induction of Thbs4 that normally occurs in skeletal muscle with dystrophic disease was further shown to be an adaptive and physiologic mechanism through the analysis of mice lacking the Thbs4 gene. Here we crossed Sgcd-/- and mdx mice into the Thbs4 null background and performed a full analysis of pathogenesis at three months of age. Thbs4-/- mice alone at three months of age showed minimal or no pathological changes in skeletal muscle, although combinatorial Thbs4-/- Sgcd-/- or Thbs4-/- mdx mice showed a significant worsening of MD, including greater histopathology, greater serum CK levels and reduced treadmill running performance, compared with single null Sgcd and mdx mice (Figure 3A-C; Figure 3-figure supplement 1A-C). Sgcd-/- Thbs4-/- mice also showed significantly greater EBD uptake in skeletal muscle after forced treadmill running versus Sgcd-/- mice alone (Figure 3D,E).
The observation that loss of the Thbs4 gene makes dystrophic pathology significantly worse in both Sgcd-/- and mdx mice suggests that induction of this gene product plays an important protective role, and we reasoned that with aging loss of Thbs4 might eventually become pathologic to muscle, given that low levels of continuous expression are present. Indeed, we observed that by six months and one year of age, Thbs4-/- mice showed a significant reduction in treadmill running capacity and increased serum CK levels compared to WT mice (Figure 3F,G). Furthermore, by one year of age Thbs4-/- muscle had increased signs of ongoing myofiber degeneration/regeneration, as marked by centrally nucleated myofibers, as well as noticeable histopathological and ultrastructural changes and greater EBD uptake compared to WT control muscle (Figure 3H-J). Collectively, these results suggest that Thbs4 induction with dystrophic disease and low levels of expression during aging produce an adaptive physiologic response that protects skeletal muscle.
Thbs4 directly impacts sarcolemma stability of myofibers
To directly evaluate the structural integrity of the sarcolemma we first employed a model of 3 successive lengthening-contraction injury cycles to the tibialis anterior (TA) muscle in a whole-leg immobilization preparation (Figure 4A). Remarkably, overexpression of Thbs4 significantly protected against lengthening-contraction injuries over all 3 bouts compared to WT mice (Figure 4B). As anticipated, the TA from Sgcd-/- mice showed much greater loss of functional recovery compared to WT after lengthening-contraction injury, but the presence of the Thbs4 Tg provided significant protection, achieving a recovery response now similar to WT levels (Figure 4B). Moreover, loss of Thbs4 resulted in greater injury with all 3 cycles of lengthening-contraction injury, similar to Sgcd-/- (Figure 4C). Interestingly, the passive force of the muscle after simple stretching was greater with the Thbs4 Tg, while it was reduced in Thbs4-/- muscle, although the twitch force itself was not significantly different between any of the groups (Figure 4-figure supplement 1A,B). We also noticed that the tendons were weaker in Thbs4-/- TA muscle, and were more likely to rupture in the isolated lengthening-contraction injury assay (Figure 4-figure supplement 1C).
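The force deficit reported for these injury cycles (see the Figure 4 legend) reduces to a simple percentage drop in isometric force relative to baseline. A minimal sketch of that arithmetic, using hypothetical force values rather than the authors' measurements:

```python
def force_deficit(baseline_force_n, post_injury_force_n):
    """Force deficit as a percentage of the baseline isometric force."""
    if baseline_force_n <= 0:
        raise ValueError("baseline force must be positive")
    return 100.0 * (baseline_force_n - post_injury_force_n) / baseline_force_n

# Hypothetical example: a TA muscle producing 1.8 N at baseline that
# generates only 1.35 N on the isometric contraction performed after
# the two lengthening contractions of one injury cycle.
print(f"force deficit: {force_deficit(1.8, 1.35):.1f}%")  # force deficit: 25.0%
```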
Individual myofibers were isolated from the flexor digitorum brevis (FDB) muscle and subjected to laser injury with subsequent measurement of FM1-43 fluorescent dye uptake as a direct measure of membrane stability and repair (Figure 4D). FDB myofibers from Thbs4 Tg mice showed less dye uptake compared with WT myofibers, suggesting that Thbs4 overexpression was inherently protective to the membrane (Figure 4D,E). As expected, FDB myofibers from Sgcd-/- mice showed greater dye uptake, suggesting greater injury with less efficient repair, while the presence of the Thbs4 Tg was protective in Sgcd-/- myofibers, bringing dye uptake levels back to WT (Figure 4D,E). FDB myofibers from Thbs4-/- mice also showed greater dye uptake compared with WT myofibers after laser injury, while double Thbs4-/- Sgcd-/- myofibers displayed an even greater injury response than either the Thbs4 or Sgcd single deletion alone (Figure 4F). Taken together these results indicate that Thbs4 overexpression provides greater stability to the sarcolemma, while its loss renders the sarcolemma less stable. More importantly, since this assay uses isolated myofibers devoid of ECM attachments, it suggests that part of the Thbs4-dependent protection occurs from within the myofiber.

Figure 3 continued. Membranes of myofibers are shown in green. *p<0.05 vs WT; #p<0.05 vs. Sgcd-/- by one-way ANOVA with post hoc Tukey's test. n = 5 and 6 mice for Sgcd-/- and Sgcd-/- Thbs4-/-, respectively. Scale bar = 40 μm. (F,G) Time to fatigue in seconds with forced downhill treadmill running and quantitation of serum CK levels in WT and Thbs4-/- mice at the indicated ages; abbreviations: y = year. n = 6 mice per genotype per age for panel F. For panel G, n = 7 mice per genotype at six weeks of age; n = 5 WT and 6 Thbs4-/- mice at three months of age; n = 5 mice per genotype at six months of age; and n = 9 WT and 8 Thbs4-/- mice at one year of age. *p<0.05 vs WT at the same age by Student's t test. (H) H&E histological staining (upper) and transmission electron microscopy (lower) of tissue pathology in the Quad at one year of age in WT and Thbs4-/- mice. The H&E scale bar = 25 μm. The electron microscopy scale bar = 2 μm. The arrows show myofibers with central nucleation due to loss of the Thbs4 gene. Representative images of 6 mice per genotype for H&E staining and 2 mice per genotype for electron microscopy. (I) Percentage of myofibers with centrally located nuclei in Thbs4-/- compared to WT Quad at one year of age. *p<0.001 vs WT by Student's t test. n = 6 mice per genotype. (J) Masson's trichrome stained (upper) and EBD uptake (lower) histological images in WT and Thbs4-/- mice in the Diaph at one year of age. The EBD images show membranes in green, and fibers with EBD uptake produce red fluorescence. Representative images of 6 mice per genotype studied. Scale bars = 50 μm. All data are represented as mean ± SEM. DOI: 10.7554/eLife.17589.012 The following figure supplement is available for figure 3:
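The multi-genotype comparisons in these experiments (e.g. WT vs. Sgcd-/- vs. Thbs4-/- dye uptake) are analyzed by one-way ANOVA with post hoc Tukey's test. As an illustration of the first step only, here is a pure-Python sketch of the one-way ANOVA F-statistic on hypothetical dye-uptake values (not the authors' data; a real analysis would use a statistics package and include the post hoc test):

```python
def one_way_anova_f(*groups):
    """F-statistic for a one-way ANOVA across two or more groups of values."""
    k = len(groups)                            # number of groups
    n = sum(len(g) for g in groups)            # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (each group mean vs. the grand mean).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each value vs. its own group mean).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical normalized FM1-43 uptake values per genotype.
wt = [1.0, 1.2, 0.9, 1.1]
sgcd_null = [2.1, 2.4, 2.0, 2.3]
thbs4_null = [1.6, 1.8, 1.5, 1.7]
print(f"F = {one_way_anova_f(wt, sgcd_null, thbs4_null):.1f}")
```

A large F relative to the F-distribution with (k−1, n−k) degrees of freedom indicates that at least one group mean differs; the Tukey test then identifies which pairs differ.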
Thbs4 augments intracellular trafficking through ATF6α
To investigate a molecular mechanism whereby Thbs4 might directly regulate the stability of the sarcolemma in skeletal muscle we examined intracellular vesicular trafficking and membrane attachment complex formation. Two important clues that directed our investigation were the known augmentation in the adaptive ER stress response pathway and the dramatic increase in sub-sarcolemmal vesicles observed in skeletal muscle with Thbs4 overexpression. Hence, we instituted two in vitro assays to assess ER-to-Golgi and Golgi-to-sarcolemma vesicular trafficking in response to Thbs4 activity in primary neonatal rat ventricular myocytes (Figure 5; Figure 5-figure supplement 1A,B; technical issues prevented such studies in myotubes or myofibers). To assess ER-to-Golgi trafficking, a red fluorescent protein (RFP)-labeled Golgi resident enzyme, GalNacT2-RFP, was used with and without Thbs4 overexpression by fluorescence recovery after photobleaching (FRAP; Figure 5-figure supplement 1A). In parallel, ATF6α activity was also modulated, since Thbs4 is known to directly regulate this ER-stress transcription factor (Brody et al., 2016; Lynch et al., 2012). Here, Thbs4 overexpression significantly accelerated ER-to-Golgi vesicular trafficking, which was fully inhibited by co-overexpression of a dominant negative (dn) ATF6α construct (Figure 5A). Similarly, Golgi-to-sarcolemmal trafficking rates, measured with VSVG-enhanced green fluorescent protein (eGFP) after inverse (i) FRAP, were significantly accelerated upon Thbs4 overexpression, which was again inhibited with ATF6α-dn (Figure 5B; Figure 5-figure supplement 1B). Furthermore, accelerated trafficking with Thbs4 was mimicked by overexpression of a constitutively nuclear (cn) ATF6α construct (Figure 5C,D).
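FRAP recovery curves of this kind are commonly summarized by fitting a single-exponential model, F(t) = F∞·(1 − e^(−t/τ)), where faster trafficking gives a smaller time constant τ. A minimal pure-Python sketch that recovers τ by a log-linear least-squares fit through the origin (synthetic data; this is a generic illustration, not the authors' actual fitting pipeline):

```python
import math

def fit_frap_tau(times_s, fluorescence, f_inf):
    """Estimate tau from F(t) = f_inf * (1 - exp(-t/tau)) by least squares
    on the linearized form ln(1 - F/f_inf) = -t/tau (line through origin)."""
    xs, ys = [], []
    for t, f in zip(times_s, fluorescence):
        frac = 1.0 - f / f_inf
        if frac > 0:  # points at or above the plateau carry no information
            xs.append(t)
            ys.append(math.log(frac))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic post-bleach recovery with tau = 20 s and plateau f_inf = 1.0.
tau_true, f_inf = 20.0, 1.0
times = [5.0, 10.0, 20.0, 40.0, 60.0]
fluor = [f_inf * (1.0 - math.exp(-t / tau_true)) for t in times]
print(f"estimated tau = {fit_frap_tau(times, fluor, f_inf):.1f} s")  # ~20.0 s
```

With noisy real data, a nonlinear fit of the exponential directly (rather than the linearized form) is usually preferred, since the log transform inflates noise near the plateau.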
To examine this effect of Thbs4 in greater molecular detail we also employed 2 domain-specific Thbs4 constructs and a related oligomeric glycoprotein, Nell2 (Figure 5E-J) (Brody et al., 2016; Kuroda et al., 1999; Lynch et al., 2012). Like Thbs4, Nell2 contains an N-terminal laminin-G-like (LamG) domain and an epidermal growth factor (EGF)-like repeat domain but lacks the ATF6α-interacting Type III repeat (T3R) and TSP-C domains of Thbs4. Importantly, unlike full-length Thbs4, overexpression of Nell2 or the LamG domain of Thbs4 did not increase ER-to-Golgi or post-Golgi trafficking rates (Figure 5E,F,I,J). However, overexpression of just the T3R domain of Thbs4, which functions as the ATF6α-interacting region, was sufficient to accelerate ER-to-Golgi and post-Golgi trafficking (Figure 5G,H). Thbs4 overexpression in skeletal muscle also augmented the levels of trafficking regulatory proteins such as Sar1, Rab24, Rab6, Rab3 and Rab8, which control ER-to-Golgi and post-Golgi vesicular trafficking (Figure 5-figure supplement 1C) (Brandizzi and Barlowe, 2013; Stenmark, 2009). Collectively, these results indicate that Thbs4 accelerates intracellular vesicular trafficking in an ATF6α-dependent manner, thereby suggesting at least one molecular mechanism whereby Thbs4 might enhance membrane stability through greater fluxing of vesicles to the sarcolemma.
Thbs4 stabilizes sarcolemmal attachment complexes
To more definitively investigate whether the greater vesicular trafficking rates associated with Thbs4 activity might augment the residency of membrane attachment complex proteins, we generated membrane-specific protein preparations for Western blotting, as well as performed immunohistochemistry on skeletal muscle for direct visualization of the membranes in cross-section. As previously observed, loss of δ-sarcoglycan in skeletal muscle of Sgcd-/- mice resulted in the near complete absence of the other sarcoglycans at the membrane (as they all form a complex; Figure 6A) (Durbeej and Campbell, 2002). Interestingly, the Thbs4 Tg alone displayed increased sarcolemmal levels of β-dystroglycan, dystrophin and β1D-integrin (Figure 6A). More importantly, the Thbs4 Tg in the Sgcd-deficient background resulted in greater membrane localization of α-, β-, and γ-sarcoglycan and β-dystroglycan, as well as of dystrophin, utrophin and β1D-integrin (Figure 6A).

Figure 4. Thbs4 regulates skeletal muscle sarcolemma stability. (A) Schematic representing the first of 3 consecutive lengthening contraction-induced muscle injury cycles using an in situ tibialis anterior (TA) muscle preparation. Briefly, an isometric contraction was performed to determine baseline force generation, followed by 2 consecutive eccentric contractions and finally another isometric contraction (= injury cycle 1). The force deficit shown in panels B and C was calculated between the first and second isometric contractions, in between the two lengthening contractions, which was repeated 3 times.
More quantitative assessment of this effect by Western blotting of membrane-specific protein extracts showed that even the Thbs4 Tg alone gave increased membrane levels of utrophin, α- and β-dystroglycan, as well as β1D-, α7- and α5-integrins compared to WT controls (Figure 6B; Figure 6-figure supplement 1A,B; red boxes), which likely explains our earlier observations whereby Thbs4 Tg skeletal muscle was protected from lengthening-contraction injury, and from laser injury in isolated myofibers, versus WT. The Thbs4 Tg also augmented the membrane residency of these same proteins in the Sgcd-/- and mdx backgrounds, as well as increased membrane levels of α-sarcoglycan and β-sarcoglycan in skeletal muscle of Sgcd-/- mice (Figure 6B; Figure 6-figure supplement 1A,B, for replicate samples). Importantly, previous work has shown that overexpression of DGC components augments the assembly of the entire complex and is inherently protective against MD (Allikian et al., 2004; Grounds et al., 2005; Gumerson and Michele, 2011; Tinsley et al., 1998). Furthermore, we observed that expression of β1- and α7-integrin from total cytoplasmic protein extracts was not increased, suggesting that Thbs4 overexpression directly augmented membrane trafficking and localization of these critical attachment proteins to the surface (Figure 6B, lower panel). Finally, we also observed that loss of Thbs4 in skeletal muscle partially reduced the membrane residency of α- and β-dystroglycan and β1D- and α7-integrins (Figure 6C, burgundy boxes). Collectively, these observations solidify a mechanism whereby Thbs4 regulates membrane stability in skeletal muscle by augmenting the trafficking of membrane attachment protein complexes to the sarcolemma. As previously reported for other Thbs proteins (Adams and Lawler, 2011), we observed that Thbs4 could directly bind intracellular β1D-integrin, but not α7-integrin, in primary neonatal rat cardiomyocytes (Figure 6-figure supplement 1C).
In addition, both Thbs4 and α5-integrin localized to β1D-integrin-positive intracellular vesicles in WT, Thbs4 Tg and Sgcd-/- quadriceps muscle, placing Thbs4 within the same vesicles and intracellular compartment as the attachment proteins themselves (Figure 6-figure supplement 1D,E).
ATF6α drives vesicular expansion in skeletal muscle but does not protect against MD
Given the results presented to this point, we hypothesized that ATF6a induced ER and post-ER vesicular expansion was responsible for increased intracellular trafficking of membrane attachment protein complexes to the sarcolemma and hence, protection from MD. Indeed, ATF6a was previously shown to expand the ER and post-ER vesicular compartment when activated or overexpressed (Brody et al., 2016;Lynch et al., 2012). Thus, to directly test this hypothesis we generated skeletal (B,C) Reduction in isometric force generation as a percentage of baseline force after each lengthening contraction injury cycle in the indicated genotypes of mice shown in the legend. *p<0.05 vs WT; # p<0.05 vs WT and Thbs4-Tg; § P<0.05 vs Sgcd -/and Thbs4-Tg by one-way ANOVA with post hoc Tukey's test. n = 6 mice for WT, Thbs4 Tg and Sgcd -/-Thbs4 Tg and n = 10 mice for Sgcd -/for panel B. n = 6 mice for WT, n = 10 mice for Sgcd -/and n = 5 mice for Thbs4 -/for panel C. (D) Representative images before and after laser injury and influx of FM1-43 dye (green fluorescence) in FDB myofibers in the presence of 1.25 mM Ca 2+ isolated from indicated genotypes. The white tag in each image is the position for the laser injury. Scale bars = 10 mm. (E,F) Quantitative time course in seconds of FM1-43 fluorescent dye entry in FDB myofibers from the indicated genotypes of mice in the presence of 1.25 mM Ca 2+ . Laser injury occurred at 60 s. n = 6 fibers per animal from 3 animals per genotype for panels D-F; *p<0.05 vs WT; #p<0.05 vs Sgcd -/-; §p<0.05 vs Thbs4 -/by one-way ANOVA with post hoc Tukey's test. Data points for Sgcd -/in panel E and F were derived from a single set of experiments. All data are represented as mean ± SEM. DOI: 10.7554/eLife.17589.014 The following figure supplement is available for figure 4: Utro. Dystro. Dysfer.
Similar to Thbs4 Tg mice, ultrastructural analysis of skeletal muscle from ATF6α Tg mice showed a remarkable expansion of the ER and the sub-sarcolemmal vesicular compartment, although these ATF6α-dependent vesicles appeared less dense in comparison to those observed in Thbs4 Tg mice (Figure 7D versus Figure 1G). Next, ATF6α mice were crossed with both Sgcd-/- and mdx mice to directly examine the hypothesis that the adaptive ER stress response and sub-sarcolemmal expansion of vesicles induced by ATF6α was a protective mechanism underlying Thbs4 action. However, ATF6α overexpression in the Sgcd-/- or mdx dystrophic background provided no protection whatsoever. There was no reduction in histopathology, serum CK levels or membrane rupture as assessed with EBD, nor was treadmill running improved by the ATF6α Tg in either the Sgcd-/- or mdx backgrounds. More importantly, ATF6α overexpression did not increase the sarcolemmal localization of any of the membrane attachment proteins observed with Thbs4 overexpression in skeletal muscle (Figure 7K). Thus, while ATF6α overexpression activated an adaptive ER stress response in skeletal muscle with a dramatic induction of intracellular and sub-sarcolemmal vesicles, to an even higher level than observed in our Thbs4-overexpressing mice, it did not augment the membrane residency of membrane-stabilizing proteins in muscle. Hence, our data indicate that ATF6α is only one part of a more integrated mechanism whereby Thbs4 regulates membrane stability of skeletal muscle.
Taken together, our data so far indicate that increased levels of Thbs4 itself and its trafficking through the secretory pathway are essential to increase membrane attachment protein complexes at the sarcolemma. To test this hypothesis, we took advantage of a previously established adenoviral construct encoding a Thbs4 calcium-binding mutant that is retained in the ER (Ad-Thbs4-mCa2+) (Brody et al., 2016). Importantly, this mutant still induces an ATF6α-mediated ER-stress response, both in neonatal rat cardiomyocytes and when expressed in the gastrocnemius muscle of early postnatal rat pups (Brody et al., 2016). Utilizing an identical in vivo approach with adenoviral gene transfer into the gastrocnemius of early neonatal rat pups, we compared a βgal control with full-length Thbs4 versus the Thbs4-mCa2+ mutant for effects on β1-integrin membrane levels. The data showed that only the secretion-competent full-length Thbs4, but not the full-length ER-retained Thbs4 mutant, promoted greater β1-integrin membrane occupancy (Figure 7-figure supplement 3). Hence, Thbs4 must move through the secretory pathway to chaperone at least β1-integrin to the sarcolemma.
Conservation of the Thbs membrane stability mechanism in Drosophila
Our results in mice were reminiscent of data from Drosophila, which has a single Tsp gene that, when deficient, causes embryonic lethality due to ruptures in tendon/muscle attachments (Subramanian et al., 2007). Moreover, Tsp in Drosophila was also shown to interact with αPS2/βPS integrin (Chanana et al., 2007; Subramanian et al., 2007). Thus, we investigated a potential conservation of this membrane-stabilizing Thbs mechanism in Drosophila muscle.

Figure 6 continued. (…biological replicates). Abbreviations: Utro, utrophin; Dystro, dystrophin; Dysfer, dysferlin; α-DG, α-dystroglycan; β-DG, β-dystroglycan; δ-SCG, δ-sarcoglycan; α-SCG, α-sarcoglycan; β-SGC, β-sarcoglycan; β1D-, α7- and α5-integrin. The red boxes show increased protein levels. Also see Figure 6-figure supplement 1 for replicates. (C) Representative immunoblotting for structural components of the DGC- and integrin-associated protein complexes in sarcolemmal preparations from Thbs4-/- and WT quadriceps at four months of age (n = 4 biological replicates). The burgundy-boxed areas show reduced protein levels. Ponceau staining of a nonspecific band and dihydropyridine receptor α1 (Cav1.1) were used as loading controls for sarcolemmal protein extracts; Gapdh was used as loading control for total cell protein extracts. DOI: 10.7554/eLife.17589.018 The following figure supplement is available for figure 6:
Collectively, these results indicate that Thbs proteins underlie an ancient program for membrane stabilization through regulation of intracellular attachment protein complexes and their content at the surface membrane, although ER expansion through ATF6α appears to have evolved phylogenetically after Drosophila.
Discussion
The vast majority of the Thbs literature over the past 3 decades has invoked or interpreted data consistent with a primary extracellular function for these proteins, while only a handful of studies have shown a direct intracellular function (Adams and Lawler, 2011; Ambily et al., 2014; Baek et al., 2013; Brody et al., 2016; Christopherson et al., 2005; Duquette et al., 2014; Frolova et al., 2010, 2012, 2014; Hauser et al., 1995; Lynch et al., 2012; McKeown-Longo et al., 1984; Posey et al., 2014; Schellings et al., 2009; Södersten et al., 2006). Here, we identify a fundamental yet previously unrecognized intracellular role for Thbs4 in skeletal muscle, where it directly augments selective vesicular trafficking and chaperones DGC and integrin attachment complexes to the membrane, leading to greater stability and levels of select complexes at the sarcolemma, and thereby enhancing the mechanical stability of the myofiber (Figure 9). In fact, our findings identify Thbs4 as a crucial component in maintaining muscle fiber integrity, as loss of Thbs4 results in sarcolemmal weakness that causes spontaneous dystrophic changes with aging. Importantly, the Thbs protective effect holds true in both mouse and Drosophila skeletal muscle.
As secreted matricellular proteins, Thbs family members are first produced in the ER, where they are glycosylated, and then transit to the Golgi for additional modifications, after which they traverse the remainder of the secretory pathway (Adams and Lawler, 2011). Interestingly, we observed that skeletal muscle-specific overexpression of Thbs4 did not result in accumulation within the ECM, but predominantly produced an intracellular protein localization pattern within the ER and post-ER vesicular network. In fact, we have also since generated cardiac-specific Tg mice overexpressing Thbs1, 2, 3, 4 or 5 in the heart ([Lynch et al., 2012] and data not shown). In these hearts all five Thbs proteins reside mainly within the intracellular vesicular network and ER, with only limited detectable protein accumulation outside cardiomyocytes. This is in dramatic contrast to the overexpression of an array of other matricellular proteins, such as periostin, which saturates the ECM when overexpressed (Oka et al., 2007). In contrast, dystrophic skeletal muscle did reveal occasional Thbs4 protein accumulation in fibrotic regions around myofibers, confirming that this protein can reside for a period of time in the ECM. These dynamic localization differences at baseline versus during fibrotic disease could be attributed to Thbs recycling at the cell surface (Adams and Lawler, 2011; Wang et al., 2004). Indeed, we observed rapid uptake of recombinant Thbs4 when given exogenously to cultured C2C12 myoblasts or myotubes. It is possible that extracellular Thbs4 reuptake could occur through its designated receptor or through the integrin and DGC complexes. Indeed, other matricellular proteins such as SPARC were shown to be actively taken back up into myofibers through such a process of integrin-associated endocytosis, sorting and recycling (Chlenski et al., 2011; De Franceschi et al., 2015; Nakamura et al., 2014).
Our data identify ATF6a as a transcriptional regulator of secretory pathway activity in muscle, a function that was previously established for the inositol-requiring enzyme 1a / X-box binding protein (IRE1a/XBP-1) axis of the canonical ER stress response pathway during plasma cell differentiation (Shaffer et al., 2004). Importantly, although enhancement of the ATF6a-mediated adaptive ER stress pathway was sufficient to drive the dramatic expansion of the ER and post-ER vesicular content and increase vesicular trafficking to the membrane, it did not augment membrane residency of DGC and integrin attachment complexes at the sarcolemma, nor was it sufficient to protect against MD. Thus, ATF6a is only part of a more complex intracellular mechanism whereby Thbs4 regulates mechanical stability of the muscle fiber and its sarcolemma, in coordination with greater vesicle formation and trafficking (Figure 9).
Data from both Drosophila and zebrafish show that thrombospondin proteins localize outside of cells within the tendinous junctions (Chanana et al., 2007; Subramanian and Schilling, 2014; Subramanian et al., 2007). In addition, vertebrate Thbs1 and Thbs2 genes have evolved domains that are tailored to affecting processes outside the cell, such as altering transforming growth factor-b activity and the angiogenic response (Adams and Lawler, 2011; Bornstein, 2001; Carlson et al., 2008). Hence, the simplest interpretation of our data and that reported in the literature is that Thbs proteins are complex, multifactorial proteins that function both inside and outside the cell. However, our working hypothesis is that Thbs4 appears more tailored to intracellular functionality in cardiac and skeletal muscle (Figure 9), and this same paradigm appears to hold true for the other Thbs family members in other tissues. For example, Thbs1 was previously shown to localize to the intracellular side of membrane attachment sites by immunogold-electron microscopy in endothelial cells (Hiscott et al., 1997). Furthermore, Thbs proteins are known to strongly interact with many different integrin heterodimers, and singular proteomic analysis of the a5b1 integrin complex identified Thbs2 as a core element of its 'interactome' (Adams and Lawler, 2011; Bouvard et al., 2013; De Franceschi et al., 2015; Plow et al., 2000; Schiller et al., 2013). Our ultrastructural and biochemical analyses showed that the Thbs-integrin complex resides within the lumen of vesicles, although further in-depth analyses will be needed to determine whether Thbs4 co-regulates the signaling competence of this integrin complex and to define the exact preassembly stage influenced by Thbs4 on the way to the cell surface.
Finally, Thbs1 silencing or overexpression in human cancer cells was shown to decrease or enhance integrin protein levels, respectively, in the intracellular and plasma membrane compartment and thereby modulate cellular adhesion (Duquette et al., 2013;John et al., 2010).
Taken together, various lines of evidence indicate that Thbs proteins stabilize integrins and the DGC at membrane attachment complexes of the sarcolemma through an intracellular function. Importantly, this observation holds true in both mouse and Drosophila skeletal muscle, where integrin protein content at the cell membrane was increased with Thbs4 overexpression, yet very little Thbs protein was observed outside the cell in either species at baseline. While previously published data showed a large concentration of Drosophila Tsp protein at the myotendinous junction in stage 16 embryos, this is also the very same region where the integrins are highly concentrated in identical foci, and the data do not distinguish whether Tsp is inside or outside the cell (Subramanian et al., 2007). At earlier embryonic stages (stage 12-13), however, the Tsp protein is intracellular with a diffuse pattern similar to that of integrins that have yet to be deposited at the cell membrane (Subramanian et al., 2007). Moreover, in later stage Drosophila larvae, Tsp is no longer detected in the myotendinous junctions or in tendons. Rather, a network of Tsp-positive staining is observed within the muscle of larvae (unpublished observations, Talila Volk).

Figure 9. Model of how Thbs4 functions as an intracellular regulator of muscle cellular attachment and membrane stability. As a matricellular protein, thrombospondin-4 (Thbs4) pentamers are synthesized in the ER lumen and then transported to the Golgi, after which they traverse the secretory pathway to fuse with the plasma membrane for secretion. Thbs4 can then reside within the extracellular matrix (ECM) or be actively endocytosed and returned (legend continued below).
However, in mammals, Thbs4 can oligomerize with Thbs5 and be deposited in the tendon (Hauser et al., 1995; Södersten et al., 2006); Thbs4-/- mice showed altered ECM composition and weakened muscle tendons (Frolova et al., 2014) as well as altered inflammatory responses associated with arteriogenesis (Frolova et al., 2010), whereas Thbs1 and Thbs2 were shown to function from outside the cell in augmenting developmental synaptogenesis (Christopherson et al., 2005). Hence, Thbs proteins are clearly complex regulatory proteins with intra- and extracellular functions.
One aspect of the biology that was not conserved in Drosophila was the ability of Thbs or Tsp overexpression to expand the intracellular vesicular compartment in this lower organism, likely because Drosophila does not rely on an ATF6-like mechanism for the ER stress response, and the Thbs interacting domain within ATF6a is not contained in the Drosophila homologue of this gene (Mori, 2009). Thus, the ability of Thbs proteins to activate ATF6a to augment ER protein production and secretory pathway activity is a later evolutionary adaptation beyond just stabilizing membrane attachment complexes, which in higher organisms more effectively coordinates tissue remodeling and healing through the Thbs proteins.
Taken together, the unique aspects of muscle membrane biology allowed us to uncover a previously unknown and possibly dominant intracellular function of the Thbs proteins that is evolutionarily conserved (Figure 9). This new model for Thbs protein function has many disease ramifications, especially in skeletal muscle, a tissue that is highly sensitive to mutations in genes that cause weaknesses in cellular attachment and membrane stability. Indeed, there is a lack of therapeutic strategies to effectively treat MD, and our study suggests that this protein may provide a universal approach to strengthening the sarcolemma in skeletal muscle if employed in a gene therapy approach. Importantly, this protein is already present in muscle and would not be perceived as a neo-antigen upon viral-mediated overexpression; hence, it should be well tolerated and best used in patients whose MD is due to a loss of structural support of the sarcolemma (the majority of cases).
Mouse models
(Figure 9 legend, continued) ...to the intracellular compartment by a recycling receptor. In addition to its established extracellular functions, combined studies in the heart and skeletal muscle now reveal that, while in the ER, Thbs4 can compete with BiP (GRP78) for binding to the ER-resident transcription factor ATF6a, thereby facilitating ATF6a translocation to the Golgi for processing and subsequent shuttling to the nucleus, where it regulates expression of ER stress-responsive genes that are also part of the unfolded protein response (UPR). ATF6a induction in cardiac and skeletal muscle, or by overexpression in Tg mice (lower panel), causes a dramatic expansion of the ER and post-ER vesicles, as well as increased vesicular trafficking to the membrane. The ability of Thbs4 to induce ATF6a processing and nuclear trafficking also causes this same ER expansion and augmentation of intracellular vesicular trafficking to the membrane. However, Thbs4 uniquely regulates trafficking of selected integrins and dystrophin-associated glycoprotein complex (DGC) members to the sarcolemma, thereby enhancing the mechanical stability of the myofiber, as observed in Thbs4 Tg muscle (middle panel). DOI: 10.7554/eLife.17589.026

Skeletal muscle-specific transgenic mice for Thbs4 and ATF6a were generated using the modified human skeletal a-actin (Ska) promoter construct as previously described (Goonasekera et al., 2011; Lynch et al., 2012). Briefly, full-length mouse Thbs4 cDNA was obtained from Open Biosystems (Accession number: BC139414), amplified by PCR, and cloned into the BamHI and EcoRV sites of the Ska-promoter expression vector (forward: 5'-CGCGGATCCATGCCGGCCCCACGCGCG-3', and reverse: 5'-ATCTCAATTATCCAAGCGGTCAAAACTCTGGG-3'). Full-length mouse ATF6a cDNA was obtained from a previously generated pcDNA1-ATF6a plasmid (Lynch et al., 2012).
Mouse ATF6a cDNA was amplified by PCR and subsequently cloned into the KpnI and NotI sites of the Ska-promoter expression vector (forward: 5'-GGGGTACCATGGAGTCGCCTTTTAGTCC-3', and reverse: 5'-ATAAGAATGCGGCCGCCTACTGCAACGACTCAGGGAT-3'). All constructs were confirmed by DNA sequencing. To make Tg mice, the Ska-plasmid backbone was removed and the Ska-Thbs4 and Ska-ATF6a fragments were gel purified, followed by Elutip-D column purification (Schleicher and Schuell Bioscience; Dassel, Germany, Cat. 10462617), for newly fertilized oocyte injection at the Cincinnati Children's Hospital Transgenic Animal and Genome Editing Core Facility. All transgenic mice were produced in the FVB/N background. Mice deficient for Thbs4 (Thbs4-/-; Strain: B6.129P2-Thbs4tm1Dgen/J) and mdx mice (Strain: C57BL/10ScSn-Dmd mdx /J) were purchased from Jackson Laboratories (Bar Harbor, Maine). Sgcd-/- mice were previously described (Hack et al., 2000). Next, Ska-Thbs4-Tg, Ska-ATF6a-Tg and Thbs4-/- mice were backcrossed for at least 6 generations into the Sgcd-/- background to generate Sgcd-/- Thbs4-Tg mice, Sgcd-/- ATF6a-Tg mice and Sgcd-/- Thbs4-/- mice, as well as their littermate controls. An identical breeding strategy was used to generate mdx Thbs4-/- mice. In addition, males from each transgenic line were crossed to mdx heterozygous females to generate mdx-Tg and mdx non-Tg male littermates and their appropriate controls. All animal experiments were approved by the Institutional Animal Care and Use Committee of the Cincinnati Children's Hospital Medical Center (Protocol# IACUC2013-0013). No human subjects or human tissue was directly used in experiments in this study.
Thbs4 mRNA expression levels in various human muscle diseases
A search of the 'National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO)' database (Barrett and Edgar, 2006) revealed that Bakay and colleagues recently performed microarray experiments on human muscle biopsies of various muscle diseases (GEO accession GDS1956/204776, [Bakay et al., 2006]). Available data included 11 different muscle diseases, with a total of 121 human muscle biopsy specimens tested on Affymetrix U133A microarrays. Individual Thbs4 mRNA levels from samples with Becker Muscular Dystrophy (BMD, n = 5); Duchenne Muscular Dystrophy (DMD, n = 10); dystrophy due to calpain-3 mutations (LGMD2A, n = 10); and dystrophy due to a paucity of dysferlin (LGMD2B, n = 10) were averaged and compared to those from healthy muscle biopsies (n = 18) using an unpaired two-tailed t-test.
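The group comparison described above amounts to an unpaired two-tailed t-test on per-sample expression values. A minimal sketch follows; the intensity values are illustrative placeholders, not numbers from the GDS1956 dataset.

```python
# Hedged sketch: unpaired two-tailed t-test comparing per-sample Thbs4
# microarray intensities between a disease group and healthy controls.
# All numbers below are hypothetical, for illustration only.
from scipy import stats

healthy = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]   # hypothetical normalized intensities
dmd = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0]       # hypothetical DMD biopsy values

t_stat, p_value = stats.ttest_ind(dmd, healthy)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

The same call would be repeated per disease group (BMD, DMD, LGMD2A, LGMD2B) against the healthy samples.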
Adeno-associated virus (AAV) serotype-9 production and injection (gene-therapy)
Mouse Thbs4 or an eGFP cDNA was amplified by PCR and inserted into the BamHI and XhoI sites of the pAAV-MCS vector. AAV9-CMV-eGFP and AAV9-CMV-Thbs4 were produced using the triple transfection method in HEK293 cells as previously described and stored at −80°C until commencing the in vivo experiments (Gray et al., 2011; Zincarelli et al., 2008). Next, both left and right gastrocnemius muscles of three-day-old Sgcd-/- mice were injected with either AAV9-Thbs4 or AAV9-eGFP (both 1E10 viral particles in 30 μl isotonic saline; [Goonasekera et al., 2011]). Mice were sacrificed at six weeks of age. The left gastrocnemius of each mouse was fixed, processed, paraffin embedded, sectioned, and stained with H&E and Masson's trichrome, whereas the right muscles were snap-frozen in liquid nitrogen for storage at −80°C. A subset of muscles were embedded in Optimal Cutting Temperature compound (O.C.T., Tissue-Tek, Sakura Americas, Torrance, CA, Cat #4583), frozen, and 7 μm cryosections were generated to confirm eGFP expression by direct fluorescence (not shown).
Protein preparations and western blotting
Quadriceps, gastrocnemius, soleus, diaphragm and hearts were harvested and immediately frozen in liquid nitrogen for storage at −80°C. To evaluate ER stress and Thbs4 protein expression, muscles were homogenized (Fisher Scientific, Waltham, MA, TissueMiser) in ice-cold RIPA buffer containing Halt Protease Inhibitor cocktail (ThermoScientific, Waltham, MA, #78430). Next, samples were sonicated (SP Scientific, Warminster, PA, VirSonic 60, power setting 3, 3 × 10 s), lysates were cleared by centrifugation at 14,000 rpm for 14 min at 4°C, and stored at −80°C.
To evaluate glycosylation pattern, quadriceps protein extracts were treated with Endoglycosidase H (Endo H; New England Biolabs Inc., Ipswich, MA, P07P2), peptide N-glycosidase F (PNGase F, New England Biolabs Inc., P0704) or protein deglycosylation mix (New England Biolabs Inc., P6039) prior to SDS-PAGE, according to the manufacturer's instructions. Endo H cleaves high mannose residues at hybrid oligosaccharides present on proteins in the ER, whereas PNGase cleaves both these and more complex oligosaccharides that result from processing in the Golgi (Hewett et al., 2004). Control samples were treated the same way without addition of enzymes.
To evaluate DGC-associated proteins, fresh quadriceps muscle was harvested and crude sarcolemmal isolates were prepared as previously described (Kobayashi et al., 2008). Briefly, freshly harvested quadriceps was homogenized in 7.5x volumes of ice-cold lysis buffer (20 mM Na4P2O7, 20 mM NaH2PO4, 1 mM MgCl2, 0.303 M sucrose, 0.5 mM EDTA, pH 7.1, with 5 μg/ml aprotinin and leupeptin, 0.5 μg/ml pepstatin A, 0.23 mM PMSF, 0.64 mM benzamidine, and 2 μM calpain inhibitor I and calpeptin), then centrifuged at 14,000 g for 20 min at 4°C; the pellet was re-suspended and re-homogenized, and both supernatants were centrifuged at 30,000 g for 30 min at 4°C, after which the pellet was re-suspended in 100 μl lysis buffer and stored at −80°C until further use. Extracellular protein fractionation from quadriceps muscle was performed essentially as previously described (Tjondrokoesoemo et al., 2016). In summary, freshly harvested quadriceps muscle was minced, washed with PBS and subjected to a 1 hr washing step in 0.5 M NaCl, 10 mM Tris-HCl, 25 mM EDTA (pH 7.5). Next, samples were decellularized overnight in 0.1% SDS, 25 mM EDTA, followed by extracellular matrix extraction with 4 M guanidine hydrochloride, 50 mM C2H3NaO2, 25 mM EDTA (pH 5.8). Finally, proteins were precipitated overnight in 80% EtOH, air dried and treated with protein deglycosylation mix (New England Biolabs Inc., P6039).
b1D-integrin-positive intracellular vesicles were isolated from quadriceps using an endoplasmic reticulum isolation kit (Sigma Aldrich, ER0100), according to the manufacturer's instructions. Briefly, tissues were homogenized in isotonic extraction buffer using a 2 ml Dounce homogenizer. Homogenates were cleared by centrifugation at 12,000 g for 15 min at 4°C. Two mg of vesicles was incubated for 12 hr at 4°C with an antibody raised against the cytoplasmic domain of b1D-integrin (Millipore, MAB1900), immunoprecipitated using A/G magnetic beads (ThermoFisher Scientific, #88803) at 4°C for 1 hr, and subsequently subjected to SDS-PAGE.
Forced treadmill running, Evan's Blue Dye (EBD) uptake and serum CK levels
To assess the exercise capacity of mice and sarcolemmal stability, mice were subjected to forced treadmill running in the presence of EBD as previously described (Goonasekera et al., 2011). Briefly, adult mice were intraperitoneally injected with EBD (10 mg/ml; 0.1 ml per 10 g body weight) and 24 hr later subjected to forced downhill treadmill running to measure membrane rupture events. For the exercise protocol, exhaustion was defined as greater than 10 consecutive seconds on the shock grid without attempting to re-engage running on the treadmill. Mice were then sacrificed, and the quadriceps, gastrocnemius and diaphragm were embedded in O.C.T. and frozen in liquid nitrogen. Tissue sections were cut at a thickness of 7 μm, air-dried, washed in PBS and stained with wheat germ agglutinin conjugated to FITC (Sigma-Aldrich, green) for 1 hr at RT to visualize the membranes. Images were taken on a Nikon Eclipse Ti-S inverted microscope system equipped with NIS Elements Advanced Research (AR) microscope imaging software (Nikon Instruments Inc., Melville, NY) to determine the percentage of EBD-positive fibers. In addition, blood was taken from a separate cohort of un-exercised mice of each genotype to evaluate baseline serum CK levels, as previously described, at the clinical laboratory of Cincinnati Children's Hospital Medical Center by an observer blinded to the genotypes (Kobayashi et al., 2008).
Lengthening contraction based injury and isometric muscle force measurements
Mice were anesthetized with an intraperitoneal injection of pentobarbital and placed supine on the muscle testing apparatus (Aurora Scientific, Aurora, ON, Canada). A midline incision running from the ankle to the thigh was created, and the skin and fascia were gently removed, leaving the tibialis anterior (TA) muscle exposed. The leg was immobilized by securing it in a custom jig (Aurora Scientific) with thumbscrews at the distal femur. A 4-0 nylon suture was tied to the distal TA, securing it with a small plastic ring at the muscle-tendon junction. The distal tendon was transected and the TA elevated to remove its contact with the tibia, and the muscle was mounted to a servomotor (Aurora Scientific, 305C) using the plastic ring. Two intramuscular electrodes were placed on either side of the peroneal nerve, and stimulation voltages and optimal muscle length (L0) were determined and then adjusted to produce maximal isometric force (P0) at a stimulation frequency of 200 Hz. Five consecutive isometric contractions were averaged as a measure of the maximal specific tension. An additional lengthening contraction injury protocol was added in which an isometric contraction was performed to determine baseline force generation, followed by 2 consecutive 20% L0 lengthening contractions, and finally another isometric contraction performed at L0. The force deficit was calculated between the first isometric contraction and those following the lengthening injury cycles (see Figure 4A for a schematic representation). This contraction injury protocol was repeated 2 more times and the force deficit was calculated relative to the pre-injury isometric contraction. In a subset of experiments, passive tension was measured prior to the lengthening contraction protocol. Here, the TA was set to L0 and passively stretched to 5, 10, 15, and 20% of L0. Each stretch was held for 2 min before the length was returned to its starting position.
Maximal passive tension was recorded at the peak of the stretch. A 2-min rest period occurred between each contraction for all experiments and force values were normalized to the muscle's physiologic cross-sectional area. In some cases we observed tendon breaks during the lengthening contraction protocol. Percentage of tendon breaks, assessed by complete physical rupture of the tendon at the muscle aponeurosis, was recorded.
To assess internalization of rThbs4 by cultured C2C12 myoblasts and myotubes, established approaches were used as previously described (Chlenski et al., 2011; Nakamura et al., 2014). First, cells plated in Ibidi μ-slide 8-well dishes were treated with either 1 μg/ml Alexa-488-labeled rThbs4 or equal amounts of Alexa-488-labeled BSA control for the indicated periods. Next, cells were rinsed with sterile PBS and fixed with 4% paraformaldehyde for 10 min. After fixation, cells were washed three times with PBS, followed by blocking with 3% normal goat serum/PBS/0.1% Triton for 20 min at room temperature, and subsequently incubated with Alexa Fluor-568-labeled phalloidin (Life Sciences, Cat# A12380; 1:100 in blocking buffer) at room temperature to visualize the F-actin cytoskeleton, or incubated with anti-Rab7 (late endosomes; Cell Signaling, #9367; 1:100 in blocking buffer) overnight at 4°C, followed by an Alexa Fluor-568 (red) secondary antibody for 45 min (Invitrogen, 1:400 in blocking buffer). In both conditions, nuclei were counterstained with DAPI nuclear DNA stain (Invitrogen, 1:10,000), mounted in Ibidi mounting medium for fluorescence microscopy (Ibidi USA, Cat# 50001) and visualized using a Nikon A1 confocal laser microscope system (Nikon Instruments Inc., Melville, NY) as described above. In parallel, cells plated in 6-well plates were treated with 1 μg/ml biotin-labeled rThbs4. Next, cells were rinsed with PBS, lysates were prepared as described above, and intracellular biotin-labeled proteins were visualized and quantified by western blot analysis using Streptavidin DyLight 650 conjugate (ThermoFisher Scientific, Cat# 84547) at a 1:1000 dilution on the Odyssey CLx Imaging System (Li-COR Biosciences, Lincoln, NE). All experiments described above were performed in triplicate.
Muscle fiber isolation and laser induced membrane injury
Flexor digitorum brevis (FDB) muscle fibers were isolated from male age-matched mice of each genotype as previously described and plated onto 35 mm glass-bottomed MatTek dishes (MatTek Corp., Ashland, MA, P35G-0-10-C) in isotonic Tyrode buffer containing 1.25 mM Ca2+ (Cai et al., 2009). Membrane damage was induced in the presence of 2.5 μM FM1-43 dye (Molecular Probes, Eugene, OR) using a Nikon A1 confocal laser microscope through a Plan Apo 60x water immersion objective. To induce damage, a 5x5 pixel area of the sarcolemma on the surface of the muscle fiber was irradiated using a UV laser at full power (80 mW, 351/364) for 10 s at t = 60 s (Cai et al., 2009). Images were captured for 5 min after irradiation at 5-second intervals (Cai et al., 2009). For each image, fluorescence intensity in an area of about 200 μm2 directly adjacent to the injury site was measured using ImageJ software. To allow for statistical analysis across different experiments, data are presented as fluorescence intensity relative to the value before injury (ΔF/F0).
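The ΔF/F0 normalization above amounts to expressing each post-injury intensity relative to the mean pre-injury baseline. A minimal sketch with made-up intensity values (the frame count and numbers are illustrative, not measured data):

```python
# Hedged sketch of the ΔF/F0 normalization for an FM1-43 injury trace.
# Intensities are arbitrary units; the injury occurs after the first 3 frames.
import numpy as np

trace = np.array([100.0, 102.0, 98.0, 150.0, 210.0, 260.0])  # illustrative values
baseline = trace[:3].mean()                                   # F0: mean pre-injury signal

df_over_f0 = (trace - baseline) / baseline                    # ΔF/F0 per frame
print(df_over_f0.round(2))
```

In the real analysis the per-frame intensities come from the ImageJ measurements of the ~200 μm2 region adjacent to the injury site.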
In vivo adenoviral transduction and tissue processing
Experiments were performed as previously described (Brody et al., 2016). Briefly, purified AdThbs4-Flag, AdThbs4-mCa2+-Flag or Adbgal control was injected into the left and right gastrocnemius muscles of individual one-day-old Sprague Dawley rat pups (Envigo, Indianapolis, IN, USA), followed by an additional injection 48 hr later (10^8 viral particles per injection). Rat pups were sacrificed at eight days of age and muscles were embedded in O.C.T., frozen, and 10 μm cryosections were generated. To visualize adenoviral transduction (Flag) and the effects of our constructs on the membrane residency of b1 integrin, tissue sections were washed three times with PBS, followed by blocking with 3% normal goat serum/PBS/0.1% Triton for 30 min at room temperature, and then incubated with anti-Flag and anti-b1 integrin primary antibodies (Cell Signaling, #2368, 1:500 and EMD Millipore, MAB1997, 1:100, respectively, in blocking buffer) overnight at 4°C. Primary antibodies were detected by applying Alexa Fluor-488-conjugated goat-anti-rabbit and biotinylated anti-mouse (Vector Laboratories, M.O.M. kit, 1:500) followed by Alexa Fluor-568 streptavidin conjugate (Invitrogen, 1:200) for 45 min at RT. Sections were mounted in Vectashield Hard Set (Vector Laboratories, H-1400) to prevent photobleaching and imaged as described above. Sarcolemmal b1 integrin was quantified as the percentage of red area per unit surface of adenoviral-transduced (Flag-positive) myofibers using ImageJ software.
Live-cell quantitative imaging and photobleaching to evaluate ER-to-Golgi (FRAP) and Golgi-to-membrane vesicular trafficking (iFRAP) was performed using a Nikon A1 confocal laser microscope system equipped with a Plan Apo 40x oil immersion objective (NA = 1.0), an INU-TIZ-F1 stage-top incubator (Tokai Hit Co., Ltd, Shizuoka-ken, Japan) and NIS Elements AR microscope imaging software (Nikon Instruments Inc.), as previously described with modifications (Hirschberg et al., 1998; Patterson et al., 2008; Zaal et al., 1999). For ER-to-Golgi protein trafficking experiments, 24 hr after adenoviral infection, Ibidi dishes were infected with CellLight Golgi-RFP BacMam 2.0 (ThermoFisher Scientific, c10593), a baculovirus containing a fusion construct of the human Golgi-resident enzyme N-acetylgalactosaminyltransferase and TagRFP (GalNacT2-RFP), according to the manufacturer's instructions, and incubated overnight. The next day, 100 μg/ml cycloheximide (Sigma-Aldrich, C4859) was added to the NRVMs 30 min prior to imaging to block new protein synthesis. After acquiring a few baseline images, fluorescence from the Golgi pool of RFP was bleached by irradiating a region of interest (ROI) encompassing the juxtanuclear Golgi region with a high-intensity laser at 561 nm (100% laser power; Figure 5-figure supplement 1A). Next, recovery of GalNacT2-RFP was monitored by time-lapse imaging (5% laser power) at 5-min intervals for 2 hr as a measure of ER-to-Golgi protein trafficking.
For Golgi-to-membrane protein trafficking experiments, 24 hr after initial adenoviral infection, NRVMs in MatTek dishes were infected with adenovirus harboring the temperature-sensitive VSVG-eGFP and incubated at 40°C for 24 hr to retain the VSVG-eGFP in the ER (Hirschberg et al., 1998; Patterson et al., 2008). Approximately 60 min prior to imaging, 100 μg/ml cycloheximide was added to the cells. Thirty minutes prior to imaging, MatTek dishes were shifted to 32°C, allowing the VSVG-eGFP to traffic to the Golgi. After acquiring a few baseline images, the cargo pool in the Golgi was selectively highlighted by photobleaching VSVG-eGFP from the entire cell excluding the perinuclear Golgi network using a high-intensity laser at 488 nm (100% laser power; iFRAP; Figure 5-figure supplement 1B). Then, time-lapse imaging (5% laser power) at 1-minute intervals for 2 hr was performed to monitor export of VSVG-eGFP molecules from the Golgi, using a second area encompassing the Golgi network, as a measurement of Golgi-to-membrane protein trafficking.
The combination of low energy, high attenuation, and the less-concentrated excitation laser beam caused by the low-NA objective resulted in negligible photobleaching during repetitive imaging in all experiments. As such, control experiments performing either time-lapse imaging for 2 hr or the above-described FRAP experiment in Golgi-RFP-expressing cells in the presence of cycloheximide and brefeldin A (Sigma-Aldrich, B5936; 5 μg/ml), and the iFRAP experiment in VSVG-eGFP-expressing cells in the presence of cycloheximide and AlF (AlCl3, 60 μM and NaF, 20 mM; 30 min after the shift to 32°C), showed no difference in Golgi fluorescence intensity (data not shown) (Hirschberg et al., 1998; Patterson et al., 2008). Analysis of FRAP and loss of fluorescence after inverse FRAP (iFRAP) experiments was performed as previously described (Patterson et al., 2008; Zaal et al., 1999). Golgi fluorescence values were normalized to the average baseline Golgi fluorescence prior to FRAP, or to the first data point after iFRAP.
Drosophila husbandry and life-span assay
Male Drosophila of each genotype were collected at one day post-eclosion. Throughout the study, Drosophila were aged at 25°C with a maximum of 12 flies per 25x95 mm polystyrene vial (Fisher Scientific, AS515) and transferred to new vials containing fresh food every three to four days without the use of anesthesia. Survival was recorded every day until 40 days of age. Kaplan-Meier statistical analysis was performed and significance determined by log-rank (Mantel-Cox) tests.
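With every death day observed and no censoring (as in a fixed 40-day assay), the Kaplan-Meier estimate reduces to a running product over death days. A minimal sketch, assuming death counts are tallied per day (the counts below are illustrative):

```python
# Hedged sketch of a Kaplan-Meier survival estimate for fully observed
# event times (no censoring). Death days below are hypothetical.
import numpy as np

def kaplan_meier(death_days):
    """Return (unique death days, survival fraction after each day)."""
    days = np.sort(np.unique(death_days))
    n_at_risk = len(death_days)
    surv, s = [], 1.0
    for d in days:
        deaths = int(np.sum(death_days == d))
        s *= 1.0 - deaths / n_at_risk   # multiply by fraction surviving this day
        n_at_risk -= deaths
        surv.append(s)
    return days, np.array(surv)

days, surv = kaplan_meier(np.array([10, 10, 20, 30, 30, 40]))
print(days, surv)   # survival drops at each recorded death day
```

In practice a statistics package (e.g. a log-rank test implementation) would be used to compare genotypes, as the authors did with the Mantel-Cox test.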
Drosophila negative geotaxis assay
The negative geotaxis assay was performed as previously described (Allikian et al., 2007). For each genotype tested, male Drosophila were collected and kept at no more than 12 flies per vial. At 2, 5, 10, 15 and 20 days of age, they were immobilized using CO2, and groups of 12 flies of the various genotypes were transferred into empty polystyrene vials (Fisher Scientific, AS515) with a line drawn 80 mm from the base of the vial. Drosophila were allowed to recover for 1 hr before testing. Each vial was assayed by gently tapping the flies down to the bottom of the vial, thereby engaging their negative geotactic response. The number of Drosophila able to climb across the 80-mm line against gravity in 10 s was recorded. Four separate trials were performed with a 1-minute resting period in between. The percentage of Drosophila crossing the line was averaged across trials and expressed as 'physical performance'. Genotypes were assayed simultaneously to eliminate variability attributed to RT and room humidity.
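The 'physical performance' readout reduces to the mean fraction of flies crossing the line over the four trials. A sketch with hypothetical counts:

```python
# Hedged sketch of the negative geotaxis 'physical performance' metric:
# fraction of flies crossing the 80 mm line within 10 s, averaged over
# the four trials for one vial. Counts are illustrative.
flies_per_vial = 12
crossed_per_trial = [10, 9, 11, 10]   # flies across the line in each of 4 trials

performance = sum(n / flies_per_vial for n in crossed_per_trial) / len(crossed_per_trial)
print(f"physical performance: {performance * 100:.1f}%")
```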
Drosophila histological analysis and immunohistochemistry
To obtain longitudinal sections of the dorsal median indirect flight muscles, 30-day-old Drosophila of each genotype were positioned in mounting collars (Genesee Scientific, San Diego, CA, 48-100) and subsequently fixed overnight at 4°C in Carnoy's solution (6:3:1 ethanol:chloroform:acetic acid), dehydrated, and infiltrated with paraffin. Longitudinal sections were cut at 7 μm thickness and stained with H&E to evaluate dystrophic pathology in the dorsal median indirect flight muscle. To obtain cryosections of the dorsal median indirect flight muscles, the same mounting collars were positioned in O.C.T. freezing medium and placed into liquid nitrogen-cooled isopentane. For Thbs4, Tsp and calreticulin immunohistochemistry, paraffin sections were cleared of paraffin in xylene (2 times for 4 min), rehydrated in ethanol (100% EtOH 2 × 4 min, 95% EtOH 1 × 3 min, 70% EtOH 1 × 2 min, H2O 1 min), incubated for 15 min in PBT (PBS/0.2% Triton X-100) and blocked in PBTB (PBT, 2% BSA) for one hour at RT. The sections were incubated with primary antibody overnight at 4°C (Thbs4: Santa Cruz, Sc-7657-R, 1:50 dilution in PBTB; Tsp: 1:50 [Subramanian et al., 2007]; calreticulin: Abcam, ab2907, 1:100 dilution in PBTB), rinsed with PBT and incubated with the appropriate secondary antibody (goat-anti-rabbit Alexa Fluor-488, Invitrogen; 1:400 in PBTB) for 2 hr at RT along with counterstains. Counterstains included the membrane marker WGA conjugated to Alexa Fluor-647 (Invitrogen, W32466; 5 μg/ml, Far Red) and/or the nuclear marker DAPI (Invitrogen, D3571; 1 μg/ml in H2O). For Drosophila integrin subunit bPS immunostaining, cryosections were post-fixed in ice-cold methanol for 10 min, washed in PBT, blocked for one hour at room temperature with PBTB and incubated with primary antibody overnight at 4°C (DSHB: CF.6G11; 1:50 in PBTB). The tissue was then washed in PBS and subsequently incubated with rabbit-anti-mouse Alexa Fluor-488 (1:300 in PBTB; Invitrogen) for 2 hr.
For each immunostain, consecutive sections were incubated with secondary antibodies alone as a control for specificity (not shown). Images were obtained using a Nikon A1 confocal laser microscope system equipped with a 40x water immersion objective (NA = 1.1) and NIS Elements Advanced Research (AR) microscope imaging software (Nikon Instruments Inc.).
Transmission electron microscopy
Fresh quadriceps was harvested and immediately immersed in relaxing buffer (0.15% sucrose, 5% dextrose, 100 mM KCl in PBS), subsequently fixed overnight (3.5% glutaraldehyde, 0.15% sucrose in 0.1 M sodium cacodylate, pH 7.4) and post-fixed in 1% OsO4 (in water) for 2 hr at RT. Samples were then washed, dehydrated and embedded in Epoxy resin. Tissue processing for electron microscopy of the dorsal median inferior flight muscle was performed as previously described (Allikian et al., 2007). Briefly, 25-day-old flies were positioned dorsal side up on a spot of O.C.T. freezing medium, dipped in liquid nitrogen, bisected sagittally using a pre-cooled razor and then fixed overnight in 2.5% glutaraldehyde in 0.1 M NaH2PO4, pH 7.4, at 4°C. Next, they were washed and post-fixed in 2% osmic acid in phosphate buffer for 2 hr at room temperature, dehydrated and embedded using Epon resin. Ultrathin sections of all tissues were counterstained with 1.5% uranyl acetate in 70% ethanol and lead nitrate/Na citrate. Images were obtained using a Hitachi 7600 transmission electron microscope connected to an AMT digital camera.
Sub-sarcolemmal vesicular expansion in myofibers of mouse quadriceps relative to the length of the sarcolemma was determined from 8 to 10 randomly collected, non-overlapping longitudinal images per myofiber at 2000X. In each image, the extent of the vesicular content perpendicular to the sarcolemma (between the sarcolemma and the first sarcomere) and the length of the sarcolemma were determined using ImageJ software. The number of sarcomeric tears per myofiber in Drosophila dorsal median indirect flight muscle was determined by imaging complete myofibers at 1500X from longitudinal sections. Tears were scored as clear disruptions within the sarcomeric structure, as indicated in Figure 8G and the associated Figure 8-figure supplement 1G.
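The per-myofiber normalization described above (vesicular expansion divided by sarcolemma length, averaged over the 8-10 images of each fiber) can be sketched as follows. This is an illustrative outline only; the actual measurements were made in ImageJ, and the function name and numbers below are invented for the example.

```python
# Hypothetical sketch of the per-myofiber quantification described above.
# Each image contributes a (vesicular expansion, sarcolemma length) pair,
# and the fiber's score is the mean expansion-to-length ratio.

def expansion_per_sarcolemma(measurements):
    """measurements: list of (expansion_um, sarcolemma_length_um) per image.
    Returns the mean normalized vesicular expansion for one myofiber."""
    ratios = [exp / length for exp, length in measurements if length > 0]
    return sum(ratios) / len(ratios)

# Example: three illustrative images from one myofiber
fiber = [(1.2, 40.0), (0.8, 35.0), (1.5, 50.0)]
print(round(expansion_per_sarcolemma(fiber), 4))
```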
For immunogold labeling of Thbs4, mouse quadriceps and Drosophila dorsal median indirect flight muscle were fixed with 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) overnight. After 2 buffer washes, samples were post-fixed with 0.1% OsO4 in the same buffer for 30 min, dehydrated and embedded in hydrophilic acrylic resin following the manufacturer's instructions (L.R. White, #14380; Electron Microscopy Sciences, Hatfield, PA). Ultrathin sections were cut using a Leica Ultra-Cut S or UC6rt ultramicrotome at a thickness of 90 nm and placed on Formvar- and carbon-coated 200-mesh nickel grids for immunogold labeling of Thbs4. Briefly, ultrathin sections on grids were first treated with 1% sodium metaperiodate for 60 s to quench OsO4. After several washes with distilled water, grids were placed on drops of PBS containing 5% BSA and 0.1% cold-water fish gelatin to block nonspecific binding. Sections were then incubated overnight at 4˚C with a goat anti-thrombospondin-4 polyclonal primary antibody (R&D Systems, AF2390) at a final concentration of 5 μg/ml. Following several washes, sections were incubated with a rabbit anti-goat antibody conjugated to 6 nm colloidal gold particles (Electron Microscopy Sciences; #25223) at a concentration of 10-20 μg/ml for 2 hr. After additional washes, all ultrathin sections were stained with 5% uranyl acetate for 2 min and 2% lead citrate for 15 min. For each experiment, both non-transgenic and transgenic sections were labeled, and consecutive sections were incubated with secondary antibody alone as a control for specificity (not shown). Immunogold labeling was imaged on a JEOL JEM-1400 transmission electron microscope (JEOL Ltd, Japan) equipped with a Gatan US1000 CCD camera (Gatan, Pleasanton, CA).
Statistics
All results are presented as mean ± SEM. All data were normally distributed. Statistical analysis was performed with an unpaired two-tailed Student's t test for two independent groups or one-way ANOVA with post hoc Tukey's test for multiple comparisons of three or more independent groups, as indicated in the individual figure legends. For survival analysis, Kaplan-Meier statistical analysis was performed and significance was determined by log-rank (Mantel-Cox) tests. All statistics were performed using GraphPad Prism 5.0 for Mac OS X, and values were considered statistically significant when p<0.05. No statistical analysis was used to predetermine sample size. The experiments were not randomized and no animals were excluded from analysis. The investigators were not blinded to allocation during experiments and outcome assessment, except for the data displayed in Figure 2B. Sample sizes for the mouse and Drosophila experiments were estimated based on previous experiments with similar procedures and on past power calculations for appropriate group sizes; based on this, all data reported here were based on adequate sampling. No outlier data were excluded.
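The choice of tests described above (t test for two groups, one-way ANOVA with Tukey's post hoc test for three or more) can be sketched in SciPy on synthetic data. This is a minimal illustration, not the authors' pipeline (they used GraphPad Prism); it assumes SciPy >= 1.8 for `tukey_hsd`, and all group values are made up.

```python
# Minimal sketch of the statistical test choices described above,
# applied to synthetic data. Requires SciPy >= 1.8 for tukey_hsd.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt = rng.normal(10.0, 1.0, size=8)   # illustrative "wild-type" group
mut = rng.normal(13.0, 1.0, size=8)  # illustrative "mutant" group
tg = rng.normal(10.5, 1.0, size=8)   # illustrative third group

# Two independent groups: unpaired two-tailed Student's t test
t, p = stats.ttest_ind(wt, mut)

# Three or more groups: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(wt, mut, tg)
tukey = stats.tukey_hsd(wt, mut, tg)

print(p < 0.05, p_anova < 0.05)
```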
|
2018-04-03T02:31:33.096Z
|
2016-09-26T00:00:00.000
|
{
"year": 2016,
"sha1": "98e0477b8cc9a9ea388daf8df9bec9599409a2b8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.17589",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98e0477b8cc9a9ea388daf8df9bec9599409a2b8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
55570312
|
pes2o/s2orc
|
v3-fos-license
|
An Influence of Concentration of Polyvinylpyrrolidone on the Morphology of Silver Metal Formed from AgNO3 Aqueous Solution
Metal silver rods having a partly regular direction on the substrate are synthesized from fine copper particles on an acrylic plastic plate immersed in 50 μM-PVP and 0.1 M-AgNO3 aqueous solution. An increase of the PVP concentration in the AgNO3 aqueous solution inhibits the growth of string-shaped and dendrite-shaped silver, as in the polyol method. The absorbance of the plasmon peak around 410 nm after immersion in 0.1 M-AgNO3 aqueous solution at 25 °C for 24 hours increased with an increase of the PVP concentration.
INTRODUCTION
In the past years, research on the particle diameter and form of silver nanoparticles has been active in application fields such as catalysts, optical materials, and nanodevice materials. While silver nanowires and nanorods play an important role as interconnects in nanoscale electronic devices, nanostructured silver has been synthesized by the polyol method [1][2][3][4][5][6][7][8], electron beam irradiation under high vacuum [9], and template methods [10][11][12][13][14]. When silver nanorods are synthesized by the polyol method, polyvinylpyrrolidone (PVP) is added as a protecting agent to the silver nitrate ethylene glycol solution [1-3, 5, 8]. However, when silver is used for catalysts and antibacterial materials, it is important to fix the nanosilver on the substrate from the viewpoint of environmental protection.
This study concerns the formation of nanostructured silver from AgNO3 aqueous solution with PVP and a substitution technique to convert fine copper particles on a substrate into silver, based on displacement plating and the reduction effect of PVP [15].
Plastic plates were ultrasonically cleaned in distilled and deionized water for 5 minutes and dried with an air spray to serve as substrates. Copper particles were applied by pressing them between the plastic plates to form a support, and weakly attached particles were then blown off with a gas spray. The plastic plate was suspended by nylon yarn with the copper-particle side facing into the solution, parallel to the bottom of the beaker. The silver was then grown at 25 °C on this plate in a 0.1 M-AgNO3 aqueous solution with 0 to 100 μM-PVP for 24 hours. The electron transfer reaction from copper metal particles to silver metal is a simple reduction-oxidation reaction, Cu + 2Ag+ → Cu2+ + 2Ag, where the reaction takes place at the surface of the copper metal and proceeds from the difference in the redox potential between Cu/Cu2+ and Ag/Ag+.
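The redox-potential difference that drives this displacement reaction can be checked with a back-of-the-envelope calculation using textbook standard reduction potentials (the potential values below are standard literature numbers, not taken from this paper):

```python
# Thermodynamic check of the displacement reaction Cu + 2Ag+ -> Cu2+ + 2Ag
# using textbook standard reduction potentials.
F = 96485.0          # Faraday constant, C/mol
E_Ag = 0.80          # Ag+/Ag standard reduction potential, V
E_Cu = 0.34          # Cu2+/Cu standard reduction potential, V
n = 2                # electrons transferred per Cu atom oxidized

E_cell = E_Ag - E_Cu            # cathode minus anode, V
dG = -n * F * E_cell            # Gibbs free energy change, J/mol

print(f"E_cell = {E_cell:.2f} V, dG = {dG/1000:.1f} kJ/mol")
```

A positive cell potential (0.46 V) and a negative Gibbs free energy confirm that silver deposition on copper is spontaneous, consistent with the growth observed in the experiment.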
Evaluation methods
The morphology of the silver metal obtained from each sample was observed by scanning electron microscopy (SEM, JEOL-5310LVB). The absorbance of the solution containing silver nanoparticles was measured with a Shimadzu UV-3150 spectrometer at room temperature in the wavelength range from 200 to 800 nm.
RESULTS AND DISCUSSION
SEM images of the silver metal obtained by ionic exchange from fine copper particles on an acrylic plastic plate in various concentrations of PVP in 0.1 M-AgNO3 aqueous solution are shown in Figure 1. When the silver was grown in solutions with PVP concentrations from 1.0 μM to 10 μM, silver composed of needle-shaped, string-shaped, and board-shaped structures grew on the acrylic substrate. The twisted, string-shaped silver tends to decrease in comparison with the case of no PVP addition, shown in Figure 2. In this case, the growth direction was unified because PVP molecules coordinate around the silver [16].
Particle-shaped silver was observed in the AgNO3 aqueous solution with a PVP concentration of 100 μM. Even when the silver particles aggregated, dendrite-shaped silver was not formed. We consider that the silver grows uniformly because the reducing PVP molecules, which have a reduction effect [15], are fully coordinated around the silver particles in the high-PVP-concentration region. Figure 3 shows the SEM image of a silver rod, focused on the substrate, obtained from the fine copper particles on an acrylic plastic plate in 50 μM-PVP and 0.1 M-AgNO3 aqueous solution at 25 °C for 24 hours. The average diameter of the silver rods was 119 ± 25 nm. As the angle of branching between branch and trunk is 60 degrees, we consider that the silver crystal growth direction is the (211) direction.
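The geometric part of this argument can be verified directly: in a cubic lattice, the angle between two crystallographically equivalent ⟨211⟩-type directions can indeed be 60 degrees. The short stdlib-only sketch below computes the interdirection angle from the dot product; the specific pair of variants chosen is for illustration.

```python
# Angle between two lattice directions in a cubic crystal,
# cos(theta) = (u . v) / (|u| |v|), checking that two <211>-type
# variants can meet at 60 degrees as observed for branch and trunk.
import math

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Two crystallographically equivalent <211> variants
print(round(angle_deg((2, 1, 1), (1, 2, -1)), 1))
```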
To see the influence of the substrate on the grown particles, copper particles supported on polystyrene and polyethylene were used as substrates with the same combination of 50 μM-PVP and 0.1 M-AgNO3. SEM images of the silver synthesized on the polystyrene and polyethylene substrates by immersion in the aqueous solution at 25 °C for 24 hours are shown in Figure 4.
Regularly arranged needle-shaped silver was not formed on the polystyrene and polyethylene substrates. In the present work, we assume that the strong electronegativity of the CN groups of the acrylic substrate significantly affects the growth of nanosized silver; that is, it contributes to attracting the PVP molecules.
The UV-visible absorbance of the samples with various concentrations of PVP in 0.1 M-AgNO3 aqueous solution after 24 hours is shown in Figure 5. The absorbance of the plasmon peak around 410 nm increased with increasing PVP concentration in the 0.1 M-AgNO3 aqueous solution. The peak position indicates that silver nanoparticles of 20-30 nm in size are formed [1]. The position of these plasmon peaks shifted to shorter wavelength with an increase of the PVP concentration. This blue shift suggests that the size of the silver particles decreased, owing to the protective effect, with increasing PVP concentration.
Figure 6 shows the SEM images of the silver obtained by ionic exchange from a copper TEM sample grid in various concentrations of PVP in 0.1 M-AgNO3 aqueous solution. Dendrite-shaped silver was observed at low PVP concentrations in the 0.1 M-AgNO3 aqueous solution, but not at high PVP concentrations. This suggests that the protective effect of PVP increases with increasing PVP concentration, just as in the case of the fine copper particles as the reducer.
CONCLUSIONS
Silver rods having a partly regular direction on the substrate were obtained from fine copper particles on an acrylic plastic plate immersed in 50 μM-PVP and 0.1 M-AgNO3 aqueous solution at 25 °C for 24 hours.
An increase of the PVP concentration in the AgNO3 aqueous solution inhibits the growth of string-shaped and dendrite-shaped silver, as in the polyol method.
Figure 1 :
Figure 1: SEM images of the silver metal obtained by ionic exchange from fine copper particles on acrylic plastic plate in the various concentrations of PVP and 0.1 M-AgNO3 aqueous solutions at 25 °C for 24 hours.
Figure 2 :
Figure 2: SEM image of the silver metal obtained by ionic exchange from fine copper particles on acrylic plastic plate in 0.1 M-AgNO3 aqueous solution without PVP at 25 °C for 24 hours.
Figure 3 :Figure 4 :
Figure 3: SEM image of the silver rod, focused on the substrate, obtained from the fine copper particles on acrylic plastic plate in 50 μM-PVP and 0.1 M-AgNO3 aqueous solution at 25 °C for 24 hours.
Figure 5 :
Figure 5: Absorbance spectra of various concentrations of PVP in 0.1 M-AgNO3 aqueous solutions after 24 hours at 25 °C.
Figure 6 :
Figure 6: SEM images of the silver obtained by ionic exchange from a copper plate for TEM sample grid to silver metal in the various concentrations of PVP and 0.1 M-AgNO3 aqueous solution at 25 °C for 5 minutes.
|
2018-12-08T21:58:41.486Z
|
2008-01-01T00:00:00.000
|
{
"year": 2008,
"sha1": "93141058e64a88f98aa16b338c1329874e521488",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jnm/2008/592838.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "93141058e64a88f98aa16b338c1329874e521488",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
261886612
|
pes2o/s2orc
|
v3-fos-license
|
TurboID-EV: Proteomic Mapping of Recipient Cellular Proteins Proximal to Small Extracellular Vesicles
Extracellular vesicles (EVs), including exosomes, have been recognized as key mediators of intercellular communications through donor EV and recipient cell interaction. Until now, most studies have focused on the development of analytical tools to separate EVs and their applications for the molecular profiling of EV cargo. However, we lack a complete picture of the mechanism of EV uptake by the recipient cells. Here, we developed the TurboID-EV system with the engineered biotin ligase TurboID, tethered to the EV membrane, which allowed us to track the footprints of EVs during and after EV uptake by the proximity-dependent biotinylation of recipient cellular proteins. To analyze biotinylated recipient proteins from low amounts of input cells (corresponding to ∼10 μg of proteins), we developed an integrated proteomic workflow that combined stable isotope labeling with amino acids in cultured cells (SILAC), fluorescence-activated cell sorting, spintip-based streptavidin affinity purification, and mass spectrometry. Using this method, we successfully identified 456 biotinylated recipient proteins, including not only well-known proteins involved in endocytosis and macropinocytosis but also other membrane-associated proteins such as desmoplakin and junction plakoglobin. The TurboID-EV system should be readily applicable to various EV subtypes and recipient cell types, providing a promising tool to dissect the specificity of EV uptake mechanisms on a proteome-wide scale.
■ INTRODUCTION
Extracellular vesicles (EVs) are lipid bilayer vesicles that contain various biomolecules such as proteins, DNA, and RNA. 1 EVs are released from all cells and exhibit heterogeneity in size (40 to 1000 nm diameter) and in the cargo composition of biomolecules. In particular, small EVs (SEVs) of 50−200 nm diameter, including exosomes derived from late endosomal multivesicular bodies, have attracted a lot of attention due to their unique properties enabling intercellular communication. 2 Although molecular profiles of SEV cargo RNAs, proteins, and lipids have been well-documented recently, we largely lack mechanistic insight into how SEVs are taken up by recipient cells and how the fate of SEVs is regulated within recipient cells after uptake.
Until now, SEV uptake has been proposed to be mediated by receptor-mediated endocytosis, lipid rafts, phagocytosis, caveolae, macropinocytosis, and direct fusion at the plasma membrane to deliver the contents of SEVs into the cytoplasm of recipient cells. 3,4 However, the molecular mechanisms of SEV uptake differ between donor and recipient cell types. Such SEV- and cell-type specificity underlying SEV uptake appears to be regulated by the affinity between donor SEV proteins and recipient cellular proteins. 2 For example, distinct exosomal integrin expression patterns are responsible for organ-specific uptake of exosomes through integrin-recipient cell surface protein interactions. 5,6 Thus, a robust and versatile method to provide a proteomic landscape of EV-recipient protein interactions is required to dissect the diversity of SEV uptake mechanisms. Although fluorescent-protein (e.g., GFP-CD63) tagged SEVs 7,8 or high-speed atomic force microscopy 9 have been used to monitor SEV uptake, content delivery, and SEV-protein interactions within recipient cells, these methods do not provide a global view of the proteins responsible for SEV uptake.
Recently, proximity-dependent biotinylation techniques have evolved in the cell biology and biochemistry fields. 10 The ability to biotinylate proximal proteins (~10 nm apart) in combination with mass spectrometry (MS)-based proteomics is an attractive approach for profiling proteins in proximity to SEVs. Indeed, proximity-labeling proteomics has enabled the identification of EV surface and internal proteins using APEX2 11 or wheat germ agglutinin conjugated to HRP, 12 as well as of the endosomal protein composition using a BioID system. 13 However, the application of proximity labeling to elucidate SEV uptake mechanisms has been limited.
In this study, we developed an MS-based proteomic approach in which TurboID is fused to the SEV membrane, thereby facilitating the biotinylation of TurboID-EV proteins themselves and of recipient cell proteins proximal to TurboID-EVs. By combining TurboID-EV biotinylation with stable isotope labeling with amino acids in cultured cells (SILAC), 14 fluorescence-activated cell sorting (FACS), and spintip-based affinity purification of biotinylated proteins, we successfully identified proteins in close proximity to SEVs that are potentially involved in their uptake and cellular interactions, including proteins related to clathrin-mediated endocytosis and macropinocytosis as well as many other novel proteins.
■ METHODS
The experimental procedures are described in detail in the Supporting Information.
■ RESULTS AND DISCUSSION
Overview of the TurboID-EV System. Among currently available biotin ligases and peroxidases, we chose TurboID because it has a higher biotinylation activity and higher temporal resolution (approximately >10 min) than other biotin ligases and, in contrast to peroxidases, does not require H2O2 treatment for biotinylation. 15 We chose HEK293T as a model system because it exhibits high transfectability, allowing transient/stable expression of a gene of interest, and is widely used as a donor cell for SEVs due to its high SEV yield. 16 To identify EV-interacting proteins through proximity-dependent biotinylation, we sought to design TurboID-EV, in which TurboID is tethered to the SEV membrane. To this end, we focused on phosphatidylserine (PS), one of the major constituent lipids of SEV membranes. 17 The C1C2 domain of some lipid-related enzymes, including MFG-E8 (lactadherin), binds to PS 18,19 and has been exploited as an SEV membrane anchor. 20 Inspired by EV surface display technology using the C1C2 domain, 21 we generated a plasmid that expresses a fusion protein of TurboID with MFG-E8, mCherry, and epitope tags (Figure 1A). The arginine-glycine-aspartic acid (RGD) motif of MFG-E8 is known to promote the phagocytosis of apoptotic cells by binding to integrins expressed on macrophage cell membranes.
22 To avoid this effect, we generated the TurboID fusion protein with a mutated version, MFG-E8 (D49E). The mCherry and epitope tags were introduced to visualize EVs in live cells and to detect the TurboID protein by immunoblotting, respectively. In summary, we developed an expression vector containing MFG-E8 D49E-mCherry-TurboID, whose product is designed to be tethered to SEV lipid membranes, thereby enabling us to biotinylate proteins proximal to the TurboID-EVs. TurboID Fusion Protein Binds to Small EVs. To confirm the expression and biotinylation activity of the MFG-E8-mCherry-TurboID fusion protein, we first transfected a plasmid encoding the fusion protein into HEK293T cells. We observed mCherry-derived fluorescent dots in the cytoplasm (Figure S1A), similar to the localization pattern of MFG-E8 in HEK293T cells. 23 We then tested and confirmed the biotinylation activity of the fusion proteins within cells by immunoblotting (Figure S1B) and by proteomic analysis based on nano-liquid chromatography/tandem mass spectrometry (LC/MS/MS) (Figure S1C and Table S1). We found that the TurboID fusion proteins biotinylated a broad spectrum of proteins in the cytosol, cell membrane, exosome, and endoplasmic reticulum (ER)-Golgi (Figure S1D). This result indicates that we observed multiple biotinylation events during the secretion of the TurboID proteins into the extracellular space through the ER-Golgi apparatus as well as during the uptake of EVs containing the TurboID proteins into recipient cells.
We next assessed whether the TurboID fusion proteins secreted into the extracellular space bind to SEVs. We collected crude SEVs from cell culture media using ultracentrifugation (see Methods). Of note, no significant differences were observed in the properties of EVs, including their concentration, size, and zeta potential, between normal cells and TurboID-expressing cells (Figure 1B,C). This suggests that the expression of the TurboID proteins did not have a substantial effect on the overall characteristics of the EVs obtained in this study. To further distinguish SEVs from other particles, crude SEVs were separated by iodixanol-based density gradient ultracentrifugation, and 6 fractions from the top layer were sampled for quantitative proteomic analysis. We found that the TurboID fusion protein was coenriched in the third fraction with SEV marker proteins such as Alix (PDCD6IP), CD81, and CD9 (Figures 1D and S1E and Table S2), suggesting that the TurboID proteins were indeed associated with SEVs (hereafter referred to as "TurboID-EVs").
We estimated the relative abundance of the 293 SEV proteins quantified in the third fraction using intensity-based absolute quantification (iBAQ 24 ). iBAQ provides a rough estimate of relative protein abundance based on the sum of all of the peptide intensities divided by the number of theoretically observable peptides. The abundance of the TurboID fusion proteins was as high as that of the SEV markers (Figure 1E and Table S2), indicating that a reasonable number of the TurboID fusion proteins were successfully tethered to the SEVs.
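The iBAQ estimate described above (summed peptide intensity divided by the number of theoretically observable peptides) can be sketched in a few lines. The protein names, intensities, and theoretical peptide counts below are invented for illustration only.

```python
# Minimal sketch of the iBAQ abundance estimate described above:
# summed peptide intensity divided by the number of theoretically
# observable (e.g. fully tryptic, length-filtered) peptides.
import math

def ibaq(peptide_intensities, n_theoretical_peptides):
    return sum(peptide_intensities) / n_theoretical_peptides

# Hypothetical proteins: (observed peptide intensities, theoretical peptides)
proteins = {
    "CD9":     ([4.0e8, 1.5e8, 5.0e7], 12),
    "TurboID": ([6.0e8, 2.0e8], 10),
}
for name, (intensities, n) in proteins.items():
    print(name, round(math.log2(ibaq(intensities, n)), 2))
```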
TurboID-EVs Can Be Taken Up by Recipient Cells. We next examined whether TurboID-EVs can be taken up by recipient cells. A previous study performed a quantitative analysis of temporal EV uptake using HEK293T cells as a recipient model. 16 Based on their findings, we selected 4 h as the coincubation time with EVs, as EV dots started to become observable at this time point and exhibited the highest number of EV dots within HEK293T cells. 16 TurboID-EVs prepared using ultracentrifugation (Methods) were incubated for 4 h with normal HEK293T cells that did not express the TurboID proteins. Consistently, our results showed that the 4 h time point exhibited the highest number of EV dots (Figure S2A,B), confirming the uptake of TurboID-EVs. TurboID-EVs can thus serve as a valuable tool for monitoring EV uptake and fate within recipient cells.
TurboID-EVs Retain Enzymatic Activity in Vitro and Can Biotinylate EV Proteins. To confirm that TurboID-EVs remain active after collection by ultracentrifugation, we performed in vitro biotinylation of EV proteins by incubating TurboID-EVs with exogenously added ATP and biotin (see Methods). As a control, an experiment without biotin addition was also performed. Protein amounts for EV pellets after ultracentrifugation were approximately 10 μg, a much lower input for streptavidin-based affinity purification than in a typical proximity labeling experiment, where mg amounts of protein are used. 10 To efficiently enrich biotinylated proteins from a low-input sample, we adapted and modified the fully integrated spintip-based affinity purification-MS technology (FISAP) method, 25 which uses streptavidin sepharose in a StageTip 26 for capturing and digesting biotinylated proteins (see Methods). Using FISAP and LC/MS/MS, we quantified 2,327 biotinylated proteins, of which 918 proteins were enriched in the biotin (+) samples (log2 fold-change > 1, p < 0.01) (Figure 2B and Table S3). Notably, gene ontology (GO) enrichment analysis using the Database for Annotation, Visualization and Integrated Discovery (DAVID) 27 revealed that exosome-related proteins, including CD9 and CD63, were highly enriched (Figure 2C). These results indicate that TurboID-EVs retained their activity and could biotinylate EV proteins, providing information about the protein composition of TurboID-EVs.
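The enrichment criterion used above (log2 fold-change > 1 between biotin (+) and biotin (−) samples, p < 0.01) can be sketched as a simple filter. All protein names, intensities, and p-values below are illustrative placeholders, not values from the study.

```python
# Sketch of the volcano-plot enrichment filter described above:
# keep proteins with log2 fold-change > 1 (biotin+ vs biotin-) and p < 0.01.
import math

# (protein, mean intensity with biotin, mean intensity without, p-value)
rows = [
    ("CD9",  8.0e7, 1.0e7, 0.001),
    ("CD63", 5.0e7, 2.0e7, 0.005),
    ("ACTB", 3.0e7, 2.5e7, 0.200),
]

enriched = [
    name for name, plus, minus, p in rows
    if math.log2(plus / minus) > 1.0 and p < 0.01
]
print(enriched)
```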
TurboID-EVs Can Biotinylate Recipient Cellular Proteins. Having confirmed the properties of TurboID-EVs, we finally applied the method to identify recipient cellular proteins proximal to TurboID-EVs during and after uptake. The TurboID fusion proteins should biotinylate both TurboID-EV proteins (donor; see Figure 2) and recipient cellular proteins. To distinguish whether biotinylated proteins are derived from donor EVs or recipient cells, we labeled recipient cells with heavy amino acids (Arg10, Lys8) based on SILAC, 14 while donor TurboID-EVs were collected from unlabeled (Arg0, Lys0) cells (Figure 3A). We then treated and incubated the heavy-labeled recipient cells with TurboID-EVs and biotin for 4 h (see Methods). We observed that TurboID-EVs were taken up by recipient cells (Figures S2B and 3A), but only a few percent of the total cell population exhibited mCherry-positive signals (Figure 3A), consistent with a recent report showing that EV uptake is a process with low yields. 8 This indicates that a highly sensitive method is required to analyze the low amount of biotinylated proteins from a few mCherry-positive cells. Indeed, we could not even identify the TurboID fusion protein in recipient cells with a conventional streptavidin affinity purification-MS workflow. We therefore sought to enrich low-abundance biotinylated proteins by combining FACS sorting of mCherry-positive cells with spintip-based enrichment of low-abundance biotinylated proteins (Figure 3A). Approximately 7.8 × 10^4 mCherry-positive cells (~10 μg protein) were sorted, and then, after spintip-based enrichment and digestion of biotinylated proteins, LC/MS/MS was used to analyze the resulting peptides. In parallel, we analyzed mCherry-negative cells (7.8 × 10^4 cells) from the same cell population as the control (Figures 3A and S2C).
This approach led to the quantification of 613 heavy-labeled proteins (i.e., recipient cellular proteins), of which 456 proteins (74%) exhibited at least 10-fold enrichment in the mCherry-positive cells relative to the negative cells (Figure 3B and Table S4), since only mCherry-positive cells are expected to be biotinylated by TurboID-EVs. The extracted ion chromatograms of the selected proteins also ensured reliable quantification (Figure 3B). Exemplary mass spectra are shown for the TurboID protein and clathrin heavy chain 1 (CLTC) (Figure 3C); a peptide (LIIGDKEIFGISR) derived from TurboID was exclusively observed in its light form, validating that TurboID-EV proteins from donor cells were unlabeled and not cross-labeled with heavy amino acids. In contrast, both light and heavy forms of CLTC peptides (e.g., AVNYFSK) were observed, consistent with the observations that CLTC is an SEV protein 28 and is also involved in EV uptake. 29 Endocytosis is one of the well-known mechanisms for the uptake of fine particles, including viruses, EVs, and nanoparticles.
29,30 Indeed, our approach identified proteins involved in clathrin-dependent endocytosis (e.g., CLTC, AP-2 complex subunit beta (AP2B1), AP-1 complex subunit gamma-1 (AP1G1), AP-3 complex subunit beta-1 (AP3B1), and ras-related proteins RAB5 and RAB7) and macropinocytosis-related proteins (e.g., filamin-A (FLNA), coronin-1C (CORO1C), actin-related protein 2/3 complex subunit 2 (ARPC2), alpha-actinins (ACTN1/3/4), and fascin (FSCN1)). In addition to these, proteins involved in intracellular transport during and after EV uptake were identified and formed a highly connected protein interaction network based on the STRING database 31 (Figure 3D). These results suggest that the TurboID-EV system could capture proteins potentially involved in a series of EV uptake processes. Importantly, we identified plasma membrane-associated proteins such as junction plakoglobin (JUP) and desmoplakin (DSP), which are related to cell-cell junctions. These proteins might be newly identified key factors underlying SEV uptake, and further experiments, such as investigating the effect of loss of gene function with an SEV uptake assay, will be needed to validate their role in SEV uptake. Collectively, we demonstrated that the TurboID-EV system enabled us to identify recipient cellular proteins proximal to SEVs, which provides an opportunity for tracking the footprints of EVs during and after their uptake.
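The SILAC scheme described above distinguishes donor from recipient peptides by a fixed mass shift per labeled residue. A minimal sketch, using the standard mass increments for Lys8 (+8.0142 Da) and Arg10 (+10.0083 Da), computes the expected heavy-minus-light difference for the peptides discussed in the text:

```python
# Expected SILAC heavy-light mass shift for a tryptic peptide,
# using the standard increments for Lys8 (+8.0142 Da) and Arg10 (+10.0083 Da).
DELTA = {"K": 8.0142, "R": 10.0083}

def silac_shift(peptide):
    """Heavy-minus-light mass difference (Da) for a peptide sequence."""
    return sum(DELTA.get(aa, 0.0) for aa in peptide)

for pep in ("AVNYFSK", "LIIGDKEIFGISR"):
    print(pep, round(silac_shift(pep), 4))
```

AVNYFSK carries one Lys (+8.0142 Da), while LIIGDKEIFGISR, which has a missed-cleavage Lys plus the C-terminal Arg, would shift by +18.0225 Da when fully heavy-labeled.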
■ CONCLUSIONS
Here, we established the TurboID-EV system, which enables proximity-dependent biotinylation of proteins neighboring TurboID-EVs in vitro and in cell culture. While we demonstrated the utility of TurboID-EVs using HEK293T cells as a model, the method should be readily applicable to different subtypes of TurboID-EVs (e.g., CD63-positive EVs or EVs selected by surface glycan structures 32 ) as donor EVs and to various cell types as recipient cells, which should provide insights into the selectivity of EV uptake mechanisms. Despite the unique features of the TurboID-EV system, one limitation of the method is that donor cells need to express the exogenous TurboID fusion protein. This may limit the usability of the method to specific cell types and alter the physiological properties of EVs. It is also important to keep in mind that proximity-dependent biotinylation labels proteins irrespective of whether they directly interact with the target proteins or are indirectly associated with them. Therefore, the proteins identified in this study likely include "bystander" proteins that are associated with EV-interacting proteins. Nevertheless, we demonstrated that the TurboID-EV system coupled with MS-based proteomics is a promising tool to reveal EV uptake mechanisms on a proteome-wide scale, beyond the limited scope of methods relying solely on fluorescent tagging or microscopy techniques.
■ ASSOCIATED CONTENT Data Availability Statement
The proteomics data have been deposited to the ProteomeXchange Consortium via the jPOST 33 partner repository (https://jpostdb.org) with the data set identifier PXD040569.
Figure 1 .
Figure 1.Evaluation of TurboID-EV production.(A) An overview of the TurboID fusion protein expression vector whose protein product is designed to be tethered to the SEV membrane.(B) Nanoparticle tracking analysis of SEVs collected from HEK293T cells expressing the TurboID proteins (top) and wild-type cells (bottom).The concentration and average diameter of SEVs from the TurboID-expressing cells were 1.7 × 10 9 EVs/mL and 150 nm, respectively.For the wild-type cells, the concentration and average diameter of SEVs from the TurboID-expressing cells were 1.7 × 10 9 EVs/mL and 133 nm, respectively.The inset shows transmission electron microscopy images of SEVs.(D) Relative protein abundance profiles of the TurboID fusion protein, selected SEV markers (CD9, CD81, and Alix), and a non-SEV protein (pyruvate kinase PKM) across iodixanol density gradient fractions.PKM is shown as an example that was not associated with SEVs based on its abundance profile.The "relative intensity" represents the signal intensity of each fraction divided by the highest intensity observed for a given protein.(E) Log 2 iBAQ intensities of proteins quantified in the third fraction of the density gradient ultracentrifugation.
Figure 2 .
Figure 2. In vitro biotinylation of TurboID-EVs.(A) Experimental design for in vitro biotinylation of TurboID-EVs collected with ultracentrifugation.Biotinylated proteins were enriched with StageTips containing C18 disk and streptavidin sepharose.(B) A volcano plot showing differential biotinylation levels of proteins with or without biotin addition in TurboID-EV fractions.Three independent experiments were performed.(C) GO enrichment analysis of the significantly enriched proteins in the biotin (+) experiments (log 2 fold-change > 1, p < 0.01).Only the top 3 GO cellular component terms are shown.
Figure 3 .
Figure 3. Identification of proteins in recipient cells proximal to TurboID-EVs during and after uptake.(A) Experimental design for identification of proteins in recipient HEK293T cells in close proximity to TurboID-EVs.The recipient cells were SILAC heavy-labeled.Only heavy-labeled proteins were monitored to identify and distinguish recipient proteins and compared between the mCherry-positive and -negative cells.(B) Log 2 fold-change of the heavy-labeled recipient proteins between the mCherry-positive and -negative cells.Representative extracted ion chromatograms (XICs) of monoisotopic peaks of the heavy peptides of the selected proteins are shown for the mCherry-positive (green) and -negative (orange) cells.AP2B1: LASQANIAQVLAELK; DSP: VQYDLQK; JUP: LAEPSQLLK; CLTC: TSIDAYDNFDNISLAQR; RPL7: EVPAVPETLK (C) Exemplary MS spectra for TurboID (LIIGDKEIFGISR) (top) and CLTC (AVNYFSK) (bottom).(D) A STRING protein interaction network of the selected proteins involved in intracellular protein transport (based on the high-confidence network, >0.7).
|
2023-09-16T06:17:23.354Z
|
2023-09-14T00:00:00.000
|
{
"year": 2023,
"sha1": "6a273d24b59bd7281106a4d065ad3e3550f3552c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acs.analchem.3c01015",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f79992ed19a52f8a10c4ff4acb0129abdaebda53",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14216629
|
pes2o/s2orc
|
v3-fos-license
|
M Theory: Uncertainty and Unification
I review our current understanding of the Worldformula, M theory, focusing on themes from the work of Heisenberg.
Introduction
We are in the middle of a series of important centennials: Wolfgang Pauli in 2000, Enrico Fermi and Werner Heisenberg in 2001, and Paul Dirac in 2002. This has presented an excellent opportunity to go back and review the scientific achievements of these men. Of course, the work that they did in the 20's, in their twenties, was their most important. But what I found more interesting was the work that they did afterwards. After they discovered quantum mechanics and established the basic framework of physics, they went on to try to understand the nuclear interaction, and quantum field theory, and the spectrum of particles and how all these things fit together. Some of these problems have since been solved, and it is interesting to compare their efforts with what we now know. Some of these problems we still struggle with, and it is even more interesting to compare the things that they tried with what we are trying today.
In many cases their point of view was surprisingly modern. Many of them tried to find a unified theory. Pauli, for one, was very attracted by Kaluza-Klein theory, the unification of gravity and electromagnetism in higher dimensions. Einstein, who was older of course, is well-known for his attempts at a unified theory, and Heisenberg is remembered for his attempts at a Worldformula.
Today many of us believe that there is a Worldformula. That is, there is a physical-mathematical structure that incorporates quantum mechanics, special relativity, general relativity, and the particles and their interactions, and which is beautiful and unique. We do not know the final form of this theory; we are like the quantum mechanicians in the early twenties, discovering the theory a piece at a time. In this talk I would like to present our current understanding of the Worldformula, M theory, and to structure the talk around some of the themes that were important in Heisenberg's work: a fundamental length, uncertainty, nonlinearity, and observables.
A Fundamental Length
Before getting to M theory, I want to say a few words about quantum field theory, one of the more-or-less solved problems. After quantum mechanics the next step was to incorporate special relativity. In principle this is straightforward and leads to quantum field theory. The problem was that the result had divergences, infinities. These arise because in quantum field theory the number of observables is infinite: for example, the values of the electric and magnetic fields at every point. The discoverers of quantum mechanics thought very hard about this problem and tried many solutions. There are two broad classes of solution. One is that quantum field theory breaks down at some fundamental distance that I will call $l_0$. The other is the idea of renormalization, that the infinities do not appear in observables, they cancel and leave finite results. According to most textbooks, the second idea won out, that it is through renormalization that quantum field theory makes sense. Many of the pioneers of quantum mechanics found this unattractive, and so it is worth emphasizing that our modern point of view is really a combination of these two approaches, and is actually closer to the first [1]. That is, the quantum field theories that we deal with are not valid down to arbitrarily short distance. Mathematically some of them (the asymptotically free ones) might make sense to arbitrarily short distance, but as a point of physics we don't expect them to be valid this far. At successively shorter scales one expects to encounter new quantum field theories, and ultimately no quantum field theory at all. The technical content of renormalization theory remains, but it has a new and much more physical interpretation: that the physics we see at long distances is largely independent of what is happening at very short distances, so we can calculate without knowing everything.
I assume that many of the pioneers of renormalization thought in these terms, but this did not make it into the textbooks, which for decades presented renormalization alone as our fundamental understanding of quantum field theory. So if there is a fundamental length scale, what is it? Heisenberg's idea was based on the weak interaction. The Fermi coupling $G_F$, setting $\hbar = c = 1$, has units of length squared. This length is about $10^{-15}$ cm, and Heisenberg identified this with the fundamental length. The reason is that at shorter distances $l$ the effective dimensionless coupling $l_0^2/l^2$ becomes large, and in Heisenberg's words physics becomes 'turbulent.' Of course we now know that at the weak length scale, before turbulence can set in we just run into a new quantum field theory, Yang-Mills theory. But there is another constant of nature with units of length squared, Newton's constant $G_N$, for which $l_0$ would be the Planck length, about $10^{-33}$ cm. Here we really do believe that there is a fundamental and final length scale, because when gravity becomes strong it is spacetime itself that becomes turbulent, and the notion of distance ceases to make sense. Whereas the weak interaction describes particles in a fixed spacetime, gravity describes spacetime itself, and so it is here that Heisenberg's turbulence argument implies a fundamental length. As far as I know, Heisenberg never thought directly about quantum gravity, because he was focused on the microscopic world, but we have learned that in order to make progress we have to think about everything.
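As a quick sanity check on these numbers, the Planck length follows directly from the measured constants. This snippet and its constant values are my own illustration, not part of the original talk:

```python
import math

# CODATA values, SI units (rounded)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Planck length l_P = sqrt(hbar * G / c^3): the scale at which
# gravity becomes strong and spacetime itself turns 'turbulent'
l_planck_cm = math.sqrt(hbar * G / c**3) * 100.0  # metres -> cm
print(f"Planck length ~ {l_planck_cm:.1e} cm")    # ~1.6e-33 cm
```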
It is interesting to note that in the recent idea of large extra dimensions, the Fermi constant really does set the fundamental length scale. Things are more complicated because there is another length in the problem, the size $R$ of the extra dimensions. One then has, schematically, $l_P^2 \sim l_0^{n+2}/R^n$, where $n$ is the number of large dimensions and $l_P$ is the four-dimensional Planck length. I won't expand on this further, but it is curious that Heisenberg may have had the right length scale after all [2].

What happens at the fundamental scale $l_0$? In quantum field theory the interactions take place at spacetime points. When there is a fundamental length scale then the interactions must be spread out in some way, and this is not easy to do. It is not easy because there is a symmetry between space and time, special relativity, so if there is a spreading in space there is a spreading in time as well. Then there is the danger of losing causality and unitarity, so that physics does not make sense. In fact, in the case of quantum gravity, of everything that has been tried only one idea has worked, which is to replace the points with tiny loops, strings. And, strange as this is, string theory turns out to incorporate, and extend, many of the other unifying principles that have been tried and seem promising: supersymmetry, grand unification, and Kaluza-Klein theory.
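One can invert the schematic large-extra-dimensions relation $l_P^2 \sim l_0^{n+2}/R^n$ to see what radius $R$ the extra dimensions would need. Both the form of the relation and the numbers below are my own hedged illustration of the standard scenario, not quoted from the talk:

```python
# Hypothetical inputs: l_0 near the weak (TeV) scale, l_P the Planck length
l_0 = 2.0e-17   # cm, roughly (1 TeV)^-1 in natural units
l_P = 1.6e-33   # cm

def extra_dim_radius(n: int) -> float:
    """Radius R of n equal extra dimensions from l_P^2 ~ l_0^(n+2) / R^n."""
    return (l_0 ** (n + 2) / l_P ** 2) ** (1.0 / n)

for n in (1, 2, 3):
    print(f"n = {n}: R ~ {extra_dim_radius(n):.1e} cm")
```

With these inputs, n = 1 gives an astronomically large (long-excluded) radius, while n = 2 lands in the millimeter range, which is why sub-millimeter tests of gravity constrain this scenario.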
I should mention that we often call the theory that we are working on string theory, because it has largely grown out of string theory, but it has now grown into a larger structure. Thus we often call it M theory, a deliberately mysterious name for a theory whose final form we do not know.
Uncertainty
There are actually several ways to introduce noncommuting coordinates. An obvious thing is to put some constant matrix on the right-hand side of the coordinate commutator, $[x^\mu, x^\nu] = i\theta^{\mu\nu}$, so that spacetime becomes like a quantum mechanical phase space. The obvious problem is that the right-hand side is an antisymmetric tensor, so this cannot be Lorentz invariant; this is undoubtedly the main obstacle that inhibited the exploration of this direction. Nevertheless it is an interesting idea, which can be incorporated into quantum field theory and modifies the short-distance structure in puzzling ways (though it does not remove the short distance divergences) [3]. This kind of noncommutativity does appear in string theory, where the matrix $\theta^{\mu\nu}$ is the value of some spacetime field, but it only applies to the coordinates of open, not closed, strings. It is not clear what the role of this noncommutativity is, or how fundamental it is, since you can turn it off by setting the tensor field to zero. I should note though that Witten's open string field theory is in a sense an enlargement of this idea.

I would like to talk about another way to introduce noncommutativity of coordinates. Consider a nonrelativistic system of $N$ particles. Its configuration space is defined by the $N$ sets of coordinates $x^i_a$, where $i$ labels the coordinate axes and $a$ labels the different particles. Now let us make a different guess as to how to make these noncommutative. Let us double the lower, particle, index to make these into matrices, $x^i_a \to x^i_{ab}$. This is a bit strange, but it is not so far from the spirit of how Heisenberg guessed at matrix mechanics, so let us try it and see where it leads. Now in general the different spatial coordinates do not commute, $[x^i, x^j] \neq 0$, so this is another way to introduce noncommutativity into the coordinates. In a sense it makes the particle identities uncertain as well, because we can now change the basis for the matrices.
We want physics at low energy to have its familiar form, while the new noncommutativity becomes important at high energy. We can arrange this by adding a certain potential energy term to the Hamiltonian. Here is the Hamiltonian, written in matrix notation:
$$H = \mathrm{Tr}\Big(\tfrac{1}{2}\sum_i p^i p^i - \tfrac{1}{4}\sum_{i,j} [x^i, x^j]^2\Big).$$
The first term is an ordinary kinetic term for every component of every matrix. The second term is the sum of the squares of every commutator (for Hermitian matrices the trace of $-[x^i, x^j]^2$ is nonnegative), so that at low energy these commutators must vanish and we recover the ordinary commuting positions; at high energy the noncommutativity appears. Then this simple Hamiltonian has the desired property. Actually, this doesn't quite work yet, because the quantum corrections spoil the structure, in that they produce a nonzero energy even when the coordinates commute. But we know of a general way in physics to cancel quantum corrections, and that is to introduce supersymmetry. One introduces in addition to the real number coordinates $x^i_{ab}$ some fermionic coordinates $\psi^i_{ab}$; essentially this means that the particles can have various spins. Adding an appropriate coupling of the real and fermionic coordinates to the Hamiltonian makes the theory supersymmetric and cancels the unwanted quantum corrections. The theory then behaves as desired, commutative at low energy and noncommutative at high energy.
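The key property of the commutator potential, that it is nonnegative and vanishes exactly on commuting configurations, is easy to verify numerically. The following sketch (my own illustration, with toy matrix sizes) represents the coordinates as Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 3  # N x N matrices, d spatial directions (toy sizes)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# Noncommuting "coordinates": one Hermitian matrix per spatial direction
X = [random_hermitian(N) for _ in range(d)]

def commutator_potential(mats):
    """V = -(1/4) * sum_{i,j} Tr([x_i, x_j]^2); nonnegative for Hermitian x_i."""
    V = 0.0
    for i in range(len(mats)):
        for j in range(len(mats)):
            C = mats[i] @ mats[j] - mats[j] @ mats[i]
            V += -0.25 * np.trace(C @ C).real
    return V

print(commutator_potential(X) > 0)   # True: generic matrices do not commute
D = [np.diag(rng.normal(size=N)) for _ in range(d)]
print(commutator_potential(D) == 0)  # True: diagonal (commuting) coordinates
```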
Once we have added supersymmetry it is natural to consider the largest possible supersymmetry algebra. It happens that the largest possible algebra has 16 supersymmetry charges. But once we take this step, we begin to encounter a nice convergence of ideas: the funny commutator potential term, which we added in order to get back ordinary physics at low energies, is in fact the unique potential allowed when there are 16 supersymmetries! This is a sign that we are on the right track: we are getting more out than we put in. In fact, things are even better. If we look now at the low energy physics of the commuting coordinates, the noncommuting parts of the coordinate matrices give virtual effects. One can calculate this, and one finds that the net effect of the virtual degrees of freedom is precisely to give a gravitational interaction (or supergravitational, to be precise) between the particles. Gravity is not put in from the start, it is a derived effect of the noncommutativity! Could it be that this matrix Hamiltonian, and not string theory, is the Worldformula? Yes, and no. It very likely is the Worldformula, but it is not an alternative to string theory, it is string theory. To be precise, this is the Banks-Fischler-Shenker-Susskind matrix theory, describing M theory in eleven asymptotically flat dimensions with one of the null directions periodic [4]. So this is a formula for a world, but not for our world. It is a complete description of one sector of the Hilbert space of M theory, but one that still has a lot of physics: gravitons, black holes, strings, and branes are all described by this simple matrix Hamiltonian. We live in a much less symmetric state, where seven of the dimensions are curved and compact, and on top of this the geometry of our spacetime is changing in time. We do not yet know the correct form of matrix theory or M theory in our much less symmetric state; it is undoubtedly much more complicated.
Nonlinearity
So how do we see the strings and branes in the matrix Hamiltonian? Essentially, the particles can link up, due to their noncommutative nature, into loops and higher-dimensional structures. It is essential here that the Hamiltonian is nonlinear. This was an important part of Heisenberg's thinking also, that we could start from a simple Hamiltonian and build up complicated physics via nonlinearities. QED is a nice textbook example of a weakly coupled field theory, where the nonlinearities can be treated perturbatively, but the most interesting phenomena in physics, like quark confinement, dynamical symmetry breaking, and black holes, arise due to strong nonlinearities.
One of the important things that we have learned in the past few years is that nonlinear theories do not have to be ugly and chaotic. For the particular Hamiltonians that arise in string theory and M theory, it happens in many cases that just when the nonlinear effects become very large, and you would expect that the physics becomes very 'turbulent,' there is a new set of variables in terms of which the physics becomes approximately linear. This is called a 'duality,' and it is a remarkable phenomenon that has enabled us to make great progress in understanding string/M theory. For example, the matrix Hamiltonian can be recast in terms of string variables, and the theory takes the familiar form of a sum over string world-histories; this string description becomes weakly coupled (linear) when exactly one of the coordinates $x^i$ is made periodic.
One way to summarize our understanding of string theory is through a sort of phase diagram, shown in Fig. 1 [5]. In various limits, which are the corners of the diagram, the physics linearizes. Five of these points correspond to one or the other of the string theories, and the sixth is the eleven-dimensional theory that I have been discussing. Up until a few years ago, all we understood was the five stringy points and their neighborhoods, but now we are able to map out the whole diagram. What we used to think of as different theories are just different phases in a single theory.
If this were the phase diagram of water, say, then the parameters would be the pressure and temperature. Here, the parameters are the shapes and sizes of the compact dimensions. M theory has gravity, so spacetime is dynamical. We are most interested in spacetimes like ours, with four large spacetime dimensions and the rest small and compact. Even if we cannot see those compact dimensions directly, the important principle is that the physics that we do see depends on their geometry and topology. So it is this geometry that is varying as we move around the diagram, and there are certain limits of the geometry in which the physics becomes linear in some set of variables. By the way, this diagram is greatly oversimplified, in that there are many parameters and many more pieces of the diagram which join each other across phase transitions. When a four-dimensional physicist sees a phase transition, a qualitative change in the physics, what is usually happening from the higher-dimensional point of view is a change of the topology of space.
In the middle of the diagram, away from the linear limits, we do not know how to calculate, but what is worse is that we do not know even in principle what the theory is.
Observables
For all that we understand about string/M theory, we still do not know its central defining principle, the analog of the uncertainty principle in quantum mechanics and the equivalence principle in general relativity. What we need is for one of the young people in the audience to do what Heisenberg did: go off to Heligoland for a few weeks and figure it out. Before you go, I would like to try to play the role of Bohr, and give you a few things to think about. First, the key step may be to identify what are the physical observables, and what cannot be observed. For example, the equivalence principle tells us that we cannot measure absolute velocity or absolute acceleration. The uncertainty principle tells us that we cannot measure position and velocity to arbitrary accuracy.
In string/M theory, the issue of observables has been around for a while. The obvious observable in string theory has always been the S matrix, the amplitude to go from some configuration of strings (or strings and branes) in the infinite past to some other configuration in the infinite future. This correctly incorporates the principle that we can only make measurements with physical objects. For example, we cannot talk about some local operator at a point without a prescription for measuring it in a scattering experiment. On the other hand, the S matrix does not correspond to our experience of time in an ongoing way. It is even more a problem in cosmology, where the universe may not have an infinite past and future.
It is worth noting at this point that Heisenberg is in a rather direct sense the great-grandfather of string theory: The strong interaction was a difficult problem for a very long time, and one of the ways that Heisenberg tried to approach it, in the 40's, was via the same route by which he understood quantum mechanics: identifying the physical observables. So he invented the S matrix for just this purpose, and he further proposed that it would be determined entirely by physical consistency, unitarity and analyticity. Heisenberg dropped this idea a few years later, in favor of a more dynamical approach. But the strong interaction remained unsolved twenty years later, and so Chew and others returned to the idea that we should consider only the S matrix and its consistency conditions. For the strong interaction this was not correct, it is a local field theory, but it led Veneziano to make an inspired guess and write down a simple solution to the consistency conditions. His model was interpreted a few years later as describing a theory of strings, and that led in turn to strings as a theory of gravity and everything else. So the issue of observables has been central to the history of string theory, and it is probably also a key to its future.
On to Heligoland
We do have an idea of what the central principle is, and we call it the holographic principle. We do not have a precise formulation of this, but the rough statement is that if we have a system in some region, the states of the system can be characterized by degrees of freedom living on the surface of that region [6]. This is completely contrary to our experience and to quantum field theory, where the degrees of freedom would live at points in the interior of the region. But there are strong arguments that this must be true in a theory of quantum gravity, and it is much less local than one would have with just a minimum length. It means that the thing that we must give up in our next revolution is the underlying locality of physics. This principle is suggested by black hole quantum mechanics, where the entropy is proportional to the surface area. It has a precise realization in recent dualities in string theory, the AdS/CFT duality and generalizations, where the states of string theory in the bulk of the anti-de Sitter spacetime are isomorphic to the states of gauge fields on the boundary. However, anti-de Sitter spacetime is very special, and the realization of the holographic principle in more general settings is not known.
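The area scaling of black hole entropy can be made quantitative with the Bekenstein-Hawking formula $S/k_B = A/(4 l_P^2)$. As a numerical illustration of my own (not from the talk), a solar-mass black hole carries an enormous entropy:

```python
import math

# SI constants (CODATA, rounded) and the solar mass
hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
M_sun = 1.989e30  # kg

r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, m
A = 4 * math.pi * r_s**2            # horizon area, m^2
l_P2 = hbar * G / c**3              # Planck length squared, m^2

# Bekenstein-Hawking: entropy proportional to the horizon area, not the volume
S_over_kB = A / (4 * l_P2)
print(f"S/k_B ~ {S_over_kB:.1e}")   # ~1e77, dimensionless
```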
Many of the open puzzles in string theory seem to center on cosmology:
• Why is the cosmological constant so small, and why then is it not exactly zero?
• What are the observables in a cosmological situation, and how does one formulate the holographic principle, especially if the spatial geometry is closed?
• How are cosmological singularities resolved? This is a problem that has been solved in string theory for many static singularities.
• How do we find a unified theory of the dynamical laws and the initial conditions?
I have presented this as a purely theoretical discussion; unfortunately experiment still gives little guidance as to what lies beyond the Standard Model, and what is the theory of quantum gravity. Notice, however, that the apparent observation of a positive cosmological constant has very strongly affected the thinking of string theorists. In particular, it very much complicates the formulation of the holographic principle. So even a small amount of data can have a large impact. Let me therefore echo Michael Peskin's message about the importance of building TESLA.
Finally, let me wish the young people in the audience: have a good trip to Heligoland, and call when you get back!
|
2014-10-01T00:00:00.000Z
|
2002-09-12T00:00:00.000
|
{
"year": 2002,
"sha1": "9950818f19e5aaf70d7a707769d5ddf2cdc07dce",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0209105v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "54fc085e95c5dc60ab02a48b42fc010abcd9b35a",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
}
|
247447622
|
pes2o/s2orc
|
v3-fos-license
|
$L^2$-Gradient Flows of Spectral Functionals
We study the $L^2$-gradient flow of functionals $\mathcal F$ depending on the eigenvalues of Schr\"odinger potentials $V$ for a wide class of differential operators associated to closed, symmetric, and coercive bilinear forms, including the case of all the Dirichlet forms (as for second order elliptic operators in Euclidean domains or Riemannian manifolds). We suppose that $\mathcal F$ arises as the sum of a $-\theta$-convex functional $\mathcal K$ with proper domain $\mathbb{K}\subset L^2$ forcing the admissible potentials to stay above a constant $V_{\rm min}$ and a term $\mathcal H(V)=\varphi(\lambda_1(V),\cdots,\lambda_J(V))$ which depends on the first $J$ eigenvalues associated to $V$ through a $C^1$ function $\varphi$. Even if $\mathcal H$ is not a smooth perturbation of a convex functional (and it is in fact concave in simple important cases as the sum of the first $J$ eigenvalues) and we do not assume any compactness of the sublevels of $\mathcal K$, we prove the convergence of the Minimizing Movement method to a solution $V\in H^1(0,T;L^2)$ of the differential inclusion $V'(t)\in -\partial_L^-\mathcal F(V(t))$, which under suitable compatibility conditions on $\varphi$ can be written as \[ V'(t)+\sum_{i=1}^J\partial_i\varphi(\lambda_1(V(t)),\dots, \lambda_J(V(t)))u_i^2(t)\in -\partial_F^-\mathcal K(V(t)) \] where $(u_1(t),\dots, u_J(t))$ is an orthonormal system of eigenfunctions associated to the eigenvalues $(\lambda_1(V(t)),\dots,\lambda_J(V(t)))$.
Introduction
Optimization problems for eigenvalues of elliptic operators have been a subject of great interest in the last few years, due both to the possible applications and to the challenging mathematical questions arising from these topics.
In particular, shape optimization problems for the eigenvalues of the Dirichlet Laplacian have been deeply investigated and many results concerning existence of optimal shapes in suitable admissible classes of domains, together with regularity results, have been proved, see [18,19] for an overview.
A point of view that has not completely been understood yet for this class of problems is an evolutionary approach through a gradient flow of shapes associated with a functional depending on the eigenvalues. One of the main issues is the choice of the natural metric driving the evolution and of a corresponding topology well adapted to shape optimization problems. In the case of stationary variational problems, the best approach in order to prove existence results (see [7]) is to relax the problem in the class of capacitary measures, i.e. Borel measures that vanish on sets of zero capacity, where the γ-convergence provides a compact topology sufficiently strong to guarantee the continuity of the eigenvalues of the Dirichlet Laplacian. In the framework of capacitary measures, a first gradient flow evolution for this problem was proposed by Bucur, Buttazzo and Stefanelli in [6]. They prove existence of (generalized) Minimizing Movements for a large class of functionals, but they do not characterize explicitly the gradient flow equation. A very interesting observation from their work is that, even in cases in which the evolution starts from a "nice" shape, then the relaxation in the capacitary measures can actually happen. We also quote the approach of [15] in shape optimization problems. Eigenvalue problems associated with Schrödinger potentials. In the present paper we propose a different approach, and we focus on the evolution, driven by the L 2 -metric, of a special class of capacitary measures, that is, those absolutely continuous with respect to a given reference measure (such as the Lebesgue measure of R d ). Even though in the strong L 2 -framework the driving functionals are not smooth nor convex and their sublevels are not compact, we are still able to prove that the Minimizing Movements solve a natural differential inclusion. 
Our approach is sufficiently strong to deal with eigenvalues of a wide class of operators, not only those of the Dirichlet Laplacian, avoiding the relaxation phenomenon.
In fact, we will address the problem in the general setting of a (weakly) coercive, symmetric bilinear form $E : V \times V \to \mathbb R$ on a Hilbert space $V$ densely and compactly embedded in $H = L^2(D, m)$ for a finite measure space $(D, m)$. Since $E$ is a nonnegative quadratic form, we have (1.1). We consider a convex subset $K$ of $L^2(D, m)$ whose elements $V$ satisfy the uniform lower bound $V(x) \ge V_{\min}$ for a fixed constant $V_{\min}$. Given a (Schrödinger) potential $V \in K$ (which we can identify with the capacitary measure $\mu = V m$, absolutely continuous with respect to the reference measure $m$), we can introduce the symmetric bilinear form $E_V(u, v) = E(u, v) + \int_D u\, v\, V \,\mathrm dm$; we denote by $L$ the linear selfadjoint operator associated with $E$.
Our main structural assumption on $V, E, D, m$ is that for every choice of positive constants $C, \bar\lambda \in \mathbb R_+$ the set of eigenfunctions satisfying (1.3) for $\lambda \le \bar\lambda$ and $V \in K$ with $\|V\|_{L^2(D,m)} \le C$ is relatively compact in $L^4(D, m)$. This property is always satisfied if, e.g., $V$ is compactly embedded in $L^4(D, m)$ or $E$ is a Dirichlet form.
Apart from the (finite-dimensional, but still interesting) case when $D$ is a finite set, simple relevant examples covered by our setting are provided by a bounded Lipschitz open set $D$ of $\mathbb R^d$ with the usual Lebesgue measure $m = \mathcal L^d$ (or a compact smooth Riemannian manifold endowed with the Riemannian volume measure) and
1. the Dirichlet (resp. Neumann) Laplacian $Lu = -\Delta u$ (the Laplace-Beltrami operator in the Riemannian case), corresponding to $V = H^1_0(D)$ (resp. $V = H^1(D)$) and $E(u, v) = \int_D \nabla u \cdot \nabla v \,\mathrm dx$;
2. the elliptic operator associated with the Dirichlet form $E(u, v) = \int_D A(x)\nabla u \cdot \nabla v \,\mathrm dx$ in $H^1_0(D)$ or $H^1(D)$, where $A$ satisfies the usual uniform ellipticity condition $\alpha|\xi|^2 \le A(x)\xi \cdot \xi \le \alpha^{-1}|\xi|^2$ for some $\alpha > 0$ and every $x \in D$, $\xi \in \mathbb R^d$;
3. the fractional Laplacian, for $s \in (0, 1)$, with the associated singular-integral bilinear form, where the integral should be read in the principal value sense, and $V = H^s(D)$ or $V = H^s_0(D)$;
4. the Dirichlet (resp. Neumann) Bilaplacian, corresponding to $V = H^2_0(D)$ (or $H^2(D)$) and $E(u, v) = \int_D \nabla^2 u : \nabla^2 v \,\mathrm dm$, in dimension $d \le 8$;
5. the Dirichlet form induced by a nondegenerate Gaussian measure $m$ in a separable Hilbert space $D$, see e.g. [12, Chap. 10].

$L^2$-gradient flows. The aim of this paper is to study the $L^2$-gradient flow of potentials driven by the limiting subdifferential (known also as the Mordukhovich subdifferential [21, 24, 25]) of a functional $F : L^2(D, m) \to \mathbb R \cup \{+\infty\}$ arising as the sum of two competing terms $K$ and $H$:
(1) a convex (or a quadratic perturbation of a convex) confining term $K$ (typically nonsmooth, such as the indicator function of a closed convex set of $L^2(D, m)$), with $K(V) = +\infty$ iff $V \notin \mathbb K$, which in particular forces the potential $V$ to stay above the constant $V_{\min}$. $K$ will keep track of the class of admissible potentials (see, e.g., formula (3.4) below).
(2) a term $H(V) = \varphi(\lambda_1(V), \dots, \lambda_J(V))$ which depends on the first $J$ eigenvalues associated with $V$ through a function $\varphi \in C^1(\Lambda_J)$, where $\Lambda_J$ is the subset of $[\lambda_{\min}, +\infty)^J$ spanned by all the ordered vectors made of $J$ real numbers. At least when all the first $J + 1$ eigenvalues are distinct, the gradient flow equation (1.6) reads as
$V'(t) + \sum_{i=1}^J \partial_i\varphi(\lambda_1(V(t)), \dots, \lambda_J(V(t)))\, u_i^2(t) \in -\partial_F^- K(V(t))$, (1.8)
where $(u_1(t), u_2(t), \dots, u_J(t))$ is an orthonormal system of eigenfunctions associated with the potential $V(t)$ and with the eigenvalues $\lambda_i(V(t))$. When some of the eigenvalues are multiple, the function $H$ loses its differentiability properties; however, we will still be able to recover (1.8) at least when $\varphi$ satisfies a suitable compatibility condition at the boundary of $\Lambda_J$. Among the interesting examples that are covered we mention the monotonically increasing compositions of symmetric functions of the first $k$ eigenvalues, $k \le J$ (here $\lambda_{\min} > 0$). The special case $\varphi(\lambda_1, \dots, \lambda_J) = \lambda_1 + \dots + \lambda_J$ (also with $J = 1$) is a quite interesting example of a concave functional, leading to the differential inclusion $V'(t) + \sum_{i=1}^J u_i^2(t) \in -\partial_F^- K(V(t))$. From the viewpoint of gradient flows, the main difficulty and challenging feature arising from increasing functions of eigenvalues is that even the simplest map $V \mapsto \lambda_j(V)$ is not even a smooth perturbation of a convex function with respect to the potential $V$ (when $j = 1$ it is in fact a concave function, which is not differentiable when $\lambda_1$ is a multiple eigenvalue). Therefore many standard results of gradient flow theory do not apply. We are thus led to follow and adapt results for gradient flows of highly non-convex functionals proposed in [28]. We also have to circumvent a second important difficulty, related to the lack of compactness of the sublevels of $F$.
By analyzing the structure of the limiting subdifferential of (suitable regularizations of) $H$ and employing a sort of compensated-compactness argument, we are eventually able to prove the strong convergence of the Minimizing Movement scheme for $F$ and to show that all the limits satisfy (1.6) (and (1.8) for compatible $\varphi$).
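A crude one-dimensional caricature of this flow can be set up numerically. The sketch below is my own illustration (an explicit projected-descent step, not the paper's implicit Minimizing Movement scheme): it takes $\varphi(\lambda_1) = \lambda_1$, so the flow direction is $-u_1^2$, and enforces $V \ge V_{\min}$ by pointwise truncation; by the monotonicity of eigenvalues in $V$, $\lambda_1$ decreases along the iterates.

```python
import numpy as np

# Toy operator: L = -d^2/dx^2 on (0, 1) with Dirichlet boundary conditions,
# discretized by finite differences on n interior points.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def first_eigenpair(V):
    """Smallest eigenvalue and L^2(0,1)-normalized eigenfunction of L + diag(V)."""
    w, U = np.linalg.eigh(L + np.diag(V))
    u = U[:, 0] / np.sqrt(h)  # eigh returns unit l^2 norm; rescale to unit L^2 norm
    return w[0], u

V_min, tau = 0.0, 10.0
V = 100.0 * np.exp(-((x - 0.5) / 0.1) ** 2)  # initial potential: a bump

lam, u = first_eigenpair(V)
lam0 = lam
for _ in range(20):
    # Descent step for phi = lambda_1: V' = -u_1^2, then project onto {V >= V_min}
    V = np.maximum(V_min, V - tau * u**2)
    lam_new, u = first_eigenpair(V)
    assert lam_new <= lam + 1e-9  # lambda_1 never increases (V decreases pointwise)
    lam = lam_new

print(lam < lam0)  # True: the first eigenvalue has strictly decreased
```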
Plan of the paper. Section 2 is devoted to clarifying the structural assumptions we will refer to throughout the paper. The discussion of the main examples and of some applications covered by the theory is carried out in Section 3. Section 4 contains the precise statement of our main results. The crucial tools concerning the regularity and the differentiability properties of the functionals $H$ and $F$ are developed in Sections 5 and 6 respectively.
The last Section 7 collects the main estimates concerning the Minimizing Movement scheme and is then devoted to the proof of its strong convergence.
The Appendix contains some basic material concerning convergence of eigenvalues and eigenfunctions and a useful result of convex analysis.
Notation and assumptions
We briefly collect here the abstract setting in which we work for the whole paper and a few preliminary results. Let $(D, m)$ be a finite measure space with $H := L^2(D, m)$ separable. We will denote by $|\cdot|$ and $\langle\cdot,\cdot\rangle$ the norm and the scalar product of $H$. For the sake of simplicity, in the following we will assume that $\dim(H) = +\infty$; it will be easy to adapt the various statements to the case when $H$ has finite dimension (e.g. when $D$ is a finite set and we can identify $L^2(D, m)$ with some $\mathbb R^d$).
2.A Closed, symmetric, and coercive bilinear forms. We will consider a Hilbert space V satisfying V ↪ H, densely and compactly imbedded in H, and a continuous and symmetric bilinear form for constants α ≥ 0 and M > 0. The bilinear form Ẽ(u, v) := E(u, v) + ⟨u, v⟩ is a scalar product for V inducing an equivalent norm.
2.B Admissible Schrödinger potentials. We will deal with a lower-semicontinuous (−θ)-convex functional K : H → (−∞, +∞] where θ ≥ 0, V min ∈ R are suitable constants that we will keep fixed throughout the paper. Notice that the domain K of the functional K characterizes the set of admissible potentials. Recall that K is (−θ)-convex if the function K θ : V → K (V ) + θ 2 |V | 2 is convex; for later use, we will set , c ≥ 0, is an increasing family of closed and bounded convex subsets of H whose union is K. Since K is not empty, it contains at least one element V o , so that setting is not empty for every c ≥ c o .
2.C Eigenvalues. For every V ∈ K we consider the symmetric bilinear form Denoting by H V the closure of D(E V ) in H, it is clear that E V is a closed and symmetric bilinear form, whose domain D(E V ) is compactly imbedded in H V . It is also clear that We say that λ ∈ R is an eigenvalue for (the linear operator induced by) Any nontrivial solution u to (2.8) is called a (V, λ)-eigenfunction (u is also normalized if |u| = 1). The standard spectral theory applied to the bilinear form E V allows us to prove that there exists a sequence λ(V ) = (λ 1 (V ), · · · , λ k (V ), · · · ) of eigenvalues satisfying Notice that λ min is defined in terms of α and V min and it will remain fixed throughout the paper. Moreover, the sequence of the eigenvalues can be characterized by means of a min-max principle, where the minimum is taken over the subspaces E j ⊂ D(E V ) of dimension j. For a given J ∈ N, λ J = λ J (V ) ∈ R J will denote the vector of the first J eigenvalues; we will denote by U J (V ) the collection of all the orthonormal systems of eigenfunctions associated with λ J (V ), namely (2.11) We note that the eigenvalues satisfy the following monotonicity property with respect to the potential: (2.12)

2.D L 4 -summability of eigenfunctions. We will assume that every eigenfunction u solving (2.8) for some V ∈ K belongs to L 4 (D, m), and that for every constant c ≥ c o and λ̄ > λ min the set U[c, λ̄] := u is a normalized (V, λ)-eigenfunction with V ∈ K[c], λ ≤ λ̄ is relatively compact in L 4 (D, m). (2.13) is weakly compact in H. Further, we will see that from every sequence u n of (V n , λ n )-eigenfunctions, n ∈ N, with λ n → λ and V n ⇀ V in H it is possible to extract a subsequence k → u n(k) strongly converging to a (V, λ)-eigenfunction u in V.
Then, (2.13) is in fact equivalent to the compactness of U[c, λ̄] in L 4 (D, m) and can also be formulated as a continuity property: for every sequence (u n ) n∈N of normalized (V n , λ n )-eigenfunctions: (2.14) Assumptions (2.13) and (2.7) guarantee that every (V, λ)-eigenfunction u belongs to V 4 ⊂ D(E W ) for every W ∈ K and In fact, if V ∈ K and (u k ) k∈N is an orthonormal system of the eigenfunctions of E V , the space Moreover, Assumption (2.13) yields in particular that for every c ≥ c o , λ̄ > λ min there exists a constant C such that u ∈ U[c, λ̄] ⇒ u L 4 (D,m) ≤ C.
then Assumption (2.13) is satisfied, since U[c, λ̄] is clearly bounded in V (thus relatively compact in L 2 (D, m)) and bounded in L p (D, m) for p > 4, and therefore relatively compact in L 4 (D, m).
Examples and applications
Let us briefly show a few examples where the assumptions of Sections 2.A-2.D apply, considering in particular the cases (a)-(d) of 2.D.
3.1. The finite dimensional case. Let D be a finite set which we can identify with the set of indices {1, · · · , d}, so that H = L 2 (D, m) can be identified with R d for some d ≥ 1. In this case the bilinear form E can be identified with a d × d matrix L = (a ij ), symmetric and nonnegative definite, so that We can take as K any convex and lower semicontinuous function in R d whose proper domain is a closed convex set K ⊂ [v min , +∞) d , for some constant v min ∈ R. For every V ∈ K we have the symmetric bilinear form Since we are in a finite dimensional setting, Assumption (2.13) is trivially satisfied.
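To make the finite-dimensional setting concrete, here is a minimal numerical sketch (all data are hypothetical illustrations, not taken from the paper): a 2 × 2 nonnegative definite matrix L plays the role of the bilinear form E, the eigenvalues λ_k(V) are those of L + diag(V), and we check the monotonicity property (2.12): if V ≤ W componentwise, then λ_k(V) ≤ λ_k(W) for every k.

```python
import math

def eigs_2x2(a, b, c):
    # Ordered eigenvalues of the symmetric matrix [[a, c], [c, b]].
    mean = 0.5 * (a + b)
    rad = math.hypot(0.5 * (a - b), c)
    return mean - rad, mean + rad

def lam(L, V):
    # Eigenvalues lambda_1 <= lambda_2 of L + diag(V), where the symmetric
    # 2x2 matrix L is encoded as the triple (a11, a22, a12).
    a, b, c = L
    return eigs_2x2(a + V[0], b + V[1], c)

L = (2.0, 1.0, 0.5)            # nonnegative definite: [[2, 0.5], [0.5, 1]]
V, W = (0.3, 0.7), (0.4, 1.0)  # V <= W componentwise
lV, lW = lam(L, V), lam(L, W)
# Monotonicity (2.12): enlarging the potential raises every eigenvalue.
assert lV[0] <= lW[0] and lV[1] <= lW[1]
```

The monotonicity follows from the min-max principle, since E_W(u) ≥ E_V(u) on each trial subspace when W ≥ V.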
3.2. The case of a Dirichlet form. Let us now consider the case when E is a Dirichlet form, thus satisfying the Markov condition (see [17]) is also dense in V and thus in H, we deduce that D(E V ) is dense in H. If β := 1 + (λ min ) − , the quadratic form E V + β| · | 2 is a Dirichlet form associated with a selfadjoint operator, denoted by L V + β, whose inverse R β V := (L V + β) −1 : H → H is a sub-Markovian compact selfadjoint operator (and in particular a contraction) in H; it is well known that u is a (V, λ)-eigenfunction if and only if u is a (λ + β) −1 -eigenfunction of the operator R β V (see also Appendix A). The restriction of R β V to L p (D, m) is also a contraction. By [14, Theorems 1.6.1-2-3] the spectrum and the eigenfunctions of R β V are independent of p ∈ [2, +∞) and in particular all the eigenfunctions associated with V belong to L p (D, m) for every p ∈ [2, +∞).
Let now u n be a sequence of normalized (V n , λ n )-eigenfunctions with λ n ≤ λ̄ and V n ∈ K[c]. Up to extracting a suitable subsequence, it is not restrictive to assume that By Lemma A.3 in the Appendix, R β Vn converges uniformly to R β V in L(H): this implies that u is a normalized (V, λ)-eigenfunction. We want to show that u n − u L 4 → 0; we fix p > 4 and we show that the L p -norm of u n is bounded. We argue by contradiction, assuming that u n L p → +∞ along a (not relabeled) subsequence.
Since R V and R Vn are contractions in L q (D, m) for any q > p > 4, by Riesz-Thorin interpolation we also have that R β V − R β Vn L(L p ) → 0 as n → +∞. Setting ũ n := u n −1 L p u n , by [10, Thm. 7.4, p. 690] we find a (V, λ)-eigenfunction v n such that ũ n − v n L p → 0. We deduce that v n L p is bounded; since v n belongs to a finite dimensional space, it admits a subsequence v n(k) strongly convergent to some limit v in L p (D, m). Therefore ũ n → v strongly in L p (D, m) with v L p = 1; however ũ n → 0 in L 2 (D, m), a contradiction.
3.3. The case when V ⊂ L 4 (D, m) or the resolvent has a regularizing effect. This case follows immediately from the equivalent characterization (2.14) of (2.13). Notice that Example 4 in the Introduction corresponds to this situation, thanks to the Sobolev imbedding of H 2 (D) in L 4 (D) when the dimension d ≤ 8. The last case (d) considered in Section 2.D can be easily discussed by observing that if V ∈ L ∞ (D, m) then a normalized (V, λ)-eigenfunction u satisfies the equation We conclude this section by briefly discussing two possible applications of gradient flows of spectral functionals.
For V ∈ K, the classical reaction-diffusion model in a heterogeneous environment proposed by Fisher and Kolmogorov can be generalized as: where u(x, t) represents the population density at time t and position x, and (−V (x)) is the intrinsic growth rate of the species at the spatial point x. The condition proved in [3] for the survival of the species for large times (as t → ∞) is that the first eigenvalue λ 1 (V ) of the associated linearized problem (we stress that here we have the opposite sign in front of the potential, with respect to [3]), which is defined as should be negative, so it is natural to try to minimize it under the constraint V ∈ K. This problem has been widely studied (see for example [9,22] and the references therein): it is known that an optimal potential V * is of bang-bang type, i.e.
On the other hand, there are still many open problems concerning the shape of the partition D ± . The L 2 -gradient flow of the functional can provide some useful new insights.
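The survival criterion λ₁(V) < 0 can be illustrated with a tiny discrete surrogate (a two-site graph Laplacian in place of the diffusion operator; the bang-bang potentials below are hypothetical choices): the linearized dynamics u' = −(L + diag(V)) u grows in norm exactly when the first eigenvalue is negative.

```python
import math

def eigs_2x2(a, b, c):
    # Ordered eigenvalues of the symmetric matrix [[a, c], [c, b]].
    mean = 0.5 * (a + b)
    rad = math.hypot(0.5 * (a - b), c)
    return mean - rad, mean + rad

# Two-site surrogate: graph Laplacian L = [[1, -1], [-1, 1]] encoded as a triple.
L = (1.0, 1.0, -1.0)

def lam1(V):
    # First eigenvalue of L + diag(V).
    return eigs_2x2(L[0] + V[0], L[1] + V[1], L[2])[0]

V_good = (-1.0, 1.0)   # one favourable patch: lambda_1 < 0 (survival)
V_bad  = ( 1.0, 1.0)   # hostile everywhere: lambda_1 > 0 (extinction)
assert lam1(V_good) < 0 < lam1(V_bad)

def evolve(V, u, dt=1e-3, steps=5000):
    # Explicit Euler for the linearized dynamics u' = -(L + diag(V)) u.
    a, b, c = L[0] + V[0], L[1] + V[1], L[2]
    for _ in range(steps):
        u = (u[0] - dt * (a * u[0] + c * u[1]),
             u[1] - dt * (c * u[0] + b * u[1]))
    return math.hypot(*u)

u0 = (1.0, 1.0)
assert evolve(V_good, u0) > math.hypot(*u0)   # mass grows: survival
assert evolve(V_bad,  u0) < math.hypot(*u0)   # mass decays: extinction
```

Minimizing λ₁ over the admissible potentials therefore favours survival, which is the motivation for the gradient flow of the functional mentioned above.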
3.5. Optimization of eigenvalues of potentials. In the paper [8] some optimization problems for eigenvalues of potentials in the case of the Dirichlet Laplacian, i.e. when D ⊂ R d is an open and bounded set, V = H 1 0 (D) and E(u, v) = D ∇u · ∇v dx, were considered. The authors studied the minimization problem for all ϕ : R J → R regular and increasing in each variable, with the class of admissible potentials K defined as follows: where Ψ : [0, +∞] → [0, +∞] denotes a strictly decreasing convex function and c ∈ R is such that lim It is clear that K is convex and closed in L 2 (D, m). Some remarks about the choice of the class of potentials are in order. First of all, we note that examples of functions satisfying the hypotheses above are Ψ(s) = s −β or Ψ(s) = e −βs , for some β > 0. It is immediate to check that K is not empty and that 0 ∉ K, so that the trivial potential V = 0 is not allowed. By choosing K as the indicator function of K (which is clearly convex and lower semicontinuous) and H (V ) = ϕ(λ 1 (V ), . . . , λ J (V )), we provide a gradient flow evolution for the minimization problems studied in [8, Section 4]. We note that in our L 2 setting, the existence of minimizers for the problem follows easily since the functional is weakly lower semicontinuous in L 2 (D, m).
When Ψ(s) = e −βs the interest in problem (3.3) also lies in the fact that it can be used as an approximation of a shape optimization problem (see [8, Example 5.8]).
Main results
In order to make precise the notion of gradient flows we are going to study, let us first recall the main definitions of subdifferentials which are involved. We refer to [27,Chap. 8] for more details.
equivalently, by using the viscosity characterization [4, Remark 1.4], there exist ̺ > 0 and a function ω : H → [0, +∞) of class C 1 , convex, and satisfying ω(0) = 0. ξ belongs to the limiting subdifferential (also known as the Mordukhovich subdifferential [21,24,25]); (4.2) corresponds to the definition of the proximal subdifferential. Notice that here we adopted a definition of limiting subdifferential which is stronger than the one considered in [28] (and denoted by ∂ − ℓ G ), since in (4.3) we require the convergence of the functionals G (v n ) → G (v) instead of their boundedness. This choice is justified by the better regularity properties of the functionals which we are considering, but in the case of F the two definitions lead to the same object.
It is well known that when G is (−η)-convex and lower semicontinuous, then ∂ − F G and ∂ − L G coincide [11] and can also be characterized by In particular, when η = 0 and G is convex we recover the usual subdifferential of convex analysis, which we will simply denote by ∂ − G . For a given time interval [0, T ], T > 0, the gradient flow of a convex functional then reads as the solution v : and for every given initial datum. In our case, the interesting functionals F are typically neither convex nor (−η)-convex for any choice of η > 0, since even simple examples such as H (V ) = J j=1 λ j (V ) are nonsmooth concave functionals. In this case the graph of the proximal (and also of the Fréchet) subdifferential is not closed and it is then natural to study the corresponding equation of (4.5) in terms of the limiting subdifferential (see e.g. the discussion in [28]). A further difficulty arises from the fact that we did not assume any compactness on the sublevels of F .
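The nonsmoothness of λ₁ at a multiple eigenvalue is visible already in dimension two. For the hypothetical diagonal family V(t) = (t, −t) (with L = 0), λ₁(V(t)) = −|t| is concave with a kink exactly at t = 0, where the eigenvalue is double; the one-sided slopes −1 and +1 bound the superdifferential. A minimal sketch:

```python
import math

def eigs_2x2(a, b, c):
    # Ordered eigenvalues of the symmetric matrix [[a, c], [c, b]].
    mean = 0.5 * (a + b)
    rad = math.hypot(0.5 * (a - b), c)
    return mean - rad, mean + rad

def lam1(t):
    # First eigenvalue of diag(t, -t): equals -|t|, with a double
    # eigenvalue (and hence a kink) at t = 0.
    return eigs_2x2(t, -t, 0.0)[0]

h = 1e-6
right = (lam1(h) - lam1(0.0)) / h     # one-sided slope -1
left = (lam1(0.0) - lam1(-h)) / h     # one-sided slope +1
assert abs(right + 1.0) < 1e-9 and abs(left - 1.0) < 1e-9

# Midpoint concavity: lam1 is the minimum of the two linear maps
# t -> t and t -> -t, hence concave but nonsmooth where they cross.
assert lam1(0.0) >= 0.5 * (lam1(-1.0) + lam1(1.0))
```

This is why the proximal and Fréchet subdifferentials are inadequate here and the limiting subdifferential is used instead.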
In order to circumvent these difficulties, we adopt the variational approach of the Minimizing Movement method [28,1], trying to obtain the gradient flow as a limit of a discrete approximation.
We introduce a uniform partition of the interval [0, T ]: corresponding to a step size τ > 0, the perturbed functionals and we consider the discrete solutions {V n τ } n∈N in H of the variational iterative scheme starting from a given initial datum V 0 ∈ D(F ): We will show (see Lemma 6.1) that a discrete solution always exists for every initial datum V 0 ∈ D(F ). We then denote by V̄ τ the piecewise constant interpolant and by V τ the piecewise linear interpolant of the discrete values {V n τ }: if there exists a decreasing vanishing sequence of step sizes (τ (k)) k∈N , τ (k) ↓ 0 as k → ∞, and a corresponding sequence of discrete solutions V̄ τ (k) such that We denote by GMM(Φ, V 0 , T ) the collection of all the (strong) Generalized Minimizing Movements for Φ starting from V 0 in the interval [0, T ].
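For intuition, the iterative scheme can be simulated in the two-dimensional surrogate setting by brute-force minimization over a grid (the matrix L, the box constraint playing the role of K, and the choice φ = λ₁ are all hypothetical illustrations): each step minimizes W ↦ F(W) + |W − Vⁿ⁻¹|²/(2τ), and by construction the energy F(Vⁿ) is nonincreasing along the discrete solution.

```python
import math

def eigs_2x2(a, b, c):
    # Ordered eigenvalues of the symmetric matrix [[a, c], [c, b]].
    mean = 0.5 * (a + b)
    rad = math.hypot(0.5 * (a - b), c)
    return mean - rad, mean + rad

L = (2.0, 1.0, 0.5)                   # hypothetical nonnegative definite form

def F(V):
    # F(V) = lambda_1(L + diag(V)); the box [0, 2]^2 below plays the role
    # of the admissible set K (indicator-function constraint).
    return eigs_2x2(L[0] + V[0], L[1] + V[1], L[2])[0]

def mm_step(V_prev, tau, grid):
    # One step of the scheme: minimize W -> F(W) + |W - V_prev|^2 / (2 tau)
    # over the discretized box (brute force, for illustration only).
    best, best_val = V_prev, float("inf")
    for w0 in grid:
        for w1 in grid:
            pen = ((w0 - V_prev[0]) ** 2 + (w1 - V_prev[1]) ** 2) / (2 * tau)
            val = F((w0, w1)) + pen
            if val < best_val:
                best, best_val = (w0, w1), val
    return best

grid = [0.02 * i for i in range(101)]  # the box [0, 2] discretized
V, tau = (2.0, 2.0), 0.1
energies = [F(V)]
for _ in range(10):
    V = mm_step(V, tau, grid)
    energies.append(F(V))
# The energy F(V^n) is nonincreasing along the discrete solution.
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
```

Since F is concave here, the iterates are driven toward a vertex of the box, consistent with the bang-bang behaviour of optimal potentials mentioned above; the step length is controlled by τ.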
Our first result reads as follows.
it solves the Cauchy problem, and satisfies the Energy-Dissipation Identity. Finally, if k → τ (k) is a vanishing sequence as in (4.9) we also have Remark 4.5 (Affine projection and minimal selection). We recall that the affine hull of a set A ⊂ H is defined as Notice that we have retrieved the minimal selection principle (4.11) (even in the stronger formulation (4.10)) in this non-convex case: Under the sole C 1 assumption on ϕ of Section 2.E, the precise characterization of ∂ • L F (V (t)) is not immediate. A first piece of information is provided by the following proposition. (4.14) For every t ∈ O the set U J (V (t)) satisfies the minimality property and for every u(t) ∈ U J (V (t)) we have The refined structural condition (2.25) of Section 2.F guarantees that the decomposition (4.16) holds a.e. in (0, T ).
Theorem 4.7. Under the same assumptions of Theorem 4.4, let us also assume that (2.25) of Section 2.F holds, let V be a solution of (4.11), and let D be the set (of full Lebesgue measure) where V is differentiable and the inclusion (4.11) holds. Then for every t ∈ D there exists u(t) ∈ U J (V (t)) satisfying (4.16).
The proof of our main results will follow from the analysis carried out in the next Sections. First of all, in Section 5, we study the regularity and the differentiability properties of the functional H . In Section 6 we use these results in order to prove the existence of discrete solutions to the Minimizing Movement scheme and to obtain crucial structural properties of the limiting subdifferential of F . A crucial step will also be the Chain Rule formula in Proposition 6.4. Section 7 contains the basic estimates on the Minimizing Movement solutions. It is not difficult to show that a weak Generalized Minimizing Movement exists (i.e. a curve V which is the pointwise weak limit of a subsequence V τ (k) ). The main improvement is to show that such a curve is also a strong Generalized Minimizing Movement according to (4.9). This fact is not obvious, since we did not assume that F has compact sublevels: it will be obtained by using the compactness properties of the subdifferential of H and a compensated compactness argument, see Proposition 7.3.
At that point we will have all the ingredients to apply the results of [28]: the final discussion will be carried out in Section 7.3.
Regularity and differentiability properties of eigenvalues and eigenfunctions
In this Section we will always keep the structural assumptions 2.A-2.E of Section 2; we will explicitly mention the more refined property (2.25) of Section 2.F, whenever it is involved.
We will study the regularity properties of H with respect to V . We will still denote by It is not difficult to check that u → E V (u) + (λ min ) − |u| 2 is a convex and lower semicontinuous functional on H.
Weak continuity and Lipschitzianity.
Lemma 5.1 (Weak continuity of eigenvalues and eigenfunctions). Let V n ∈ K, n ∈ N, be a sequence weakly converging in H to V ∈ K as n → ∞. For all k, J ∈ N we have and every sequence u n ∈ U J (V n ) admits a subsequence m → u n(m) and a limit u ∈ U J (V ) such that u n(m),k → u k strongly in V ∩ L 4 (D, m) for every k ∈ {1, · · · , J}. Before stating the next corollary, we recall that U J (V ) denotes the collection of all the orthonormal systems of eigenfunctions associated with λ J (V ), see (2.11).
see (2.13). If u n ∈ U J (V n ), n ∈ N, is a sequence with V n ∈ K[c], we can find an increasing subsequence k → n(k) and a limit V ∈ K[c] such that V n(k) ⇀ V weakly in H; up to extracting a further (not relabeled) subsequence, Lemma 5.1 shows that We now introduce the family of functions σ k : which will play a crucial role in the following, since they have a nice representation formula, which involves orthonormal sets of cardinality k. We refer to [20,26] for a more refined investigation in finite dimension. If E ⊂ H is a subspace of H, we denote by Ort k (E) the subset of orthonormal frames of E k and we have satisfies σ k,c ≥ σ k on K and coincides with σ k on K[c].
Proof. The weak continuity is a consequence of Lemma 5.1; the regularity of λ k clearly follows from the analogous property of σ k since λ k = σ k − σ k−1 . We can thus focus on the case of σ k . The fact that σ k is concave clearly follows from (5.8), which represents σ k as a minimum of a family of bounded linear functionals on H. It is also clear that σ k ≤ σ k,c .
In order to prove that the local representation given by (5.9) coincides with σ k if V ∈ K[c], it is sufficient to notice that the choice w ∈ U k (V ) is admissible in the minimization (5.9) of σ k,c (V ) (by the very definition (5.3)) so that (5.10) We now observe that for every w ∈ U k [c] the norm of the linear functionals is uniformly bounded in L 2 (D, m) by the constant A 2 (c) given by (5.4), since and it is finite everywhere. Moreover, σ k,c is the infimum of a family of kA 2 (c)-Lipschitz functions on H so it is kA 2 (c)-Lipschitz as well. Thanks to (5.10) we deduce that σ k is kA 2 (c)-Lipschitz in K[c].
5.2. Compactness properties of the limiting subdifferential of H . Let us now compute the superdifferential of the concave functions σ k,c defined by (5.9); we recall that the Fréchet superdifferential ∂ + F G of a function G : u is a minimizer of (5.9) , Finally, ∂ + σ k,c takes compact values and it is upper semicontinuous w.r.t. the weak topology. σ k,c is also Fréchet differentiable at every V ∈ K[c] such that λ k (V ) < λ k+1 (V ).
Proof. We want to apply Lemma C.1 in the appendix and we observe that the functions σ k,c can be represented as in ( We thus obtain all the properties stated for σ k,c ; notice that (5.15) just follows by (5.14) and (5.13). It is also worth noticing that if V ∈ K[c] and λ k (V ) < λ k+1 (V ) then Σ k,c (V ) = Σ k (V ) is a singleton (5.18) thanks to Corollary B.1. This implies that σ k,c is Fréchet differentiable at V by Lemma C.1. Eventually, (5.16) follows from (5.15) by choosing c sufficiently large so that V ∈ K[c] and using the fact that σ k (V ) = σ k,c (V ), σ k (W ) ≤ σ k,c (W ).
We now want to study the structure of the subdifferential of H . We fix a constant c ≥ c o and we denote by ϕ c : R J → R a C 1 and Lipschitz function whose restriction to Λ J ∩ [λ min , 1 + ℓ J (c)] J coincides with ϕ (recall (5.5)). We introduce the function ψ c ∈ C 1 (R J ) ψ c (s 1 , s 2 , · · · , s J ) := ϕ c (s 1 , s 2 − s 1 , s 3 − s 2 , · · · , s J − s J−1 ) (5.19) which clearly satisfies we define ψ as the restriction of ψ c to Λ J ∩ [λ min , 1 + ℓ J (c)] J . In particular, γ j ξ j : γ j = ∂ j ψ(σ(V )), ξ j ∈ co Σ j (V ) . and the graph of S c is weakly closed in H × H: Claim (1) is an easy consequence of Lemma 5.4, the representation (5.17) in terms of Lemma C.1, and the fact that ψ c is of class C 1 .
Claim (2): in particular, there exists a positive function ω : H → R as in (4.1) and ̺ > 0 such that The differentiability of ψ c and the fact that W → σ i,c (W ) is Lipschitz entail that Let us consider the set H of indices {j : 1 ≤ j < J, λ j < λ j+1 } and observe that γ j ≥ 0 if j ∈ H thanks to (2.25). By Lemma 5.4, σ i,c is Fréchet differentiable at V for every i ∈ H and it is Fréchet superdifferentiable for every i. It follows that, setting ξ i := i k=1 u 2 k , γ i ξ i belongs to the Fréchet superdifferential of W → γ i σ i,c (W ) at V . (5.28) Using (5.27) we find a positive function ω : H → R as in (4.1) and ̺ > 0 such that On the other hand
5.4. The case when H is concave. The results of the previous section can be further refined when ϕ satisfies the stronger condition, which is related to Schur-concavity [23]. Even though the superdifferentiability result, Theorem 5.6, covers a more general setting, let us briefly recap this different approach. We consider here the situation when ϕ is the restriction to Λ J of a C 1 symmetric function φ : [λ min , +∞) J → R (recall (2.27)). We consider the functions S k : where for every µ = (µ 1 , · · · , µ J ) ∈ R J we will denote by µ ↑ = (µ (1) , · · · , µ (J) ) ∈ Λ J the vector obtained by the increasing rearrangement of the components of µ.
If E ⊂ H is a subspace of dimension d ≥ J and V ∈ H, we can consider the vector λ J (V, E) = (λ 1 (V, E), · · · , λ J (V, E)) of the eigenvalues of the restriction of E V to E. The variational characterization easily shows that . By a theorem of Schur [23, Chap. 9, B1], if w = (w 1 , · · · , w J ) ∈ Ort J (E) and µ = (µ 1 , · · · , µ J ) with µ k = E V (w k ), we also have By selecting E = Span(w) we conclude that If φ ∈ C 1 ([λ min , +∞) J ) is symmetric then it satisfies the monotonicity condition if and only if φ is increasing and Schur-concave [23, Chap. 3, A8], i.e. In particular S J satisfies (5.37). However, the class of symmetric Schur-concave functions is much wider and stable w.r.t. various kinds of operations, see [23]. In particular all the elementary symmetric polynomials are Schur-concave and increasing if λ min ≥ 0. We deduce the following result.
Proposition 5.7 (Concavity of H ). Let k ∈ N, V ∈ K, φ ∈ C 1 ([λ min , +∞) k ) be a symmetric, increasing and Schur-concave function. Then If moreover φ is concave, then the function H is concave as well.
Proof. (5.36) and (5.37) yield On the other hand, the equality is attained by selecting w ∈ U k (V ). When φ is concave, the maps V → φ(E V (w)) are concave, since they are the composition of a concave function with a map which is affine w.r.t. V . It follows that V → φ(λ k (V )) is concave as well, since it is the minimum of a family of concave functions.
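A quick numerical sanity check of this concavity in the two-dimensional surrogate (the matrix L and the sampling box are hypothetical choices): V ↦ λ₁(V) is concave, being a minimum of affine functions of V, while V ↦ λ₁(V) + λ₂(V) equals tr L + V₁ + V₂ and is therefore affine, hence concave as well.

```python
import math
import random

def eigs_2x2(a, b, c):
    # Ordered eigenvalues of the symmetric matrix [[a, c], [c, b]].
    mean = 0.5 * (a + b)
    rad = math.hypot(0.5 * (a - b), c)
    return mean - rad, mean + rad

L = (2.0, 1.0, 0.5)   # hypothetical nonnegative definite form

def lam(V):
    # Eigenvalues of L + diag(V).
    return eigs_2x2(L[0] + V[0], L[1] + V[1], L[2])

def s(V):
    # sigma_2(V) = lambda_1 + lambda_2 = trace(L + diag(V)): affine in V.
    return sum(lam(V))

random.seed(0)
for _ in range(1000):
    V = (random.uniform(0, 3), random.uniform(0, 3))
    W = (random.uniform(0, 3), random.uniform(0, 3))
    M = (0.5 * (V[0] + W[0]), 0.5 * (V[1] + W[1]))
    # Midpoint concavity of lambda_1 and (exact) affinity of the sum.
    assert lam(M)[0] >= 0.5 * (lam(V)[0] + lam(W)[0]) - 1e-12
    assert abs(s(M) - 0.5 * (s(V) + s(W))) < 1e-9
```

By contrast, higher individual eigenvalues such as λ₂ alone need not be concave, which is why the result is stated for the partial sums and their Schur-concave compositions.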
Regularity and subdifferentiability properties of F
In this section we will collect the main properties of the functional F from (2.24), according to the setting presented in Sections 2.A-2.E. We will eventually discuss a further important consequence of (2.25) from Section 2.F. In particular, for every τ > 0 such that τ θ < 1 and every V ∈ H the functional Φ(τ, V ; ·) has a minimizer. Proof. By (2.22) and the fact that Since the function V → −A 1 σ J,co (V ) is convex, finite, and continuous in K thanks to the representation (5.9), it is bounded from below by an affine function, so that there exists a constant A 2 > 0 such that Setting δ := (η − θ)/6 and A 3 : showing that every sublevel of F η is contained in a suitable sublevel of K θ . Since K θ is convex, we have for some A 4 ≥ 0 so that (6.4) yields for A 5 := A 3 + A 4 and A 6 : showing (6.1). In particular if F (V ) ≤ a and |V | ≤ a then (6.4) shows that V ∈ K[c] whenever c ≥ a + 1 2 η 2 a + A 3 . Since the restriction of H to K[c] is weakly continuous and K η is convex and weakly lower semicontinuous as well, we conclude that F η is also weakly lower semicontinuous. Since if τ −1 > θ we immediately get that Φ(τ, V ; ·) has a minimizer.
Let us now study the properties of the limiting subdifferential of F . We will also consider a weaker notion of ℓ-subdifferential: we say that ξ belongs to see also Remark 4.2.
Lemma 6.2 (Decomposition of the limiting subdifferential of F -I). For every V ∈ K we have , and c 1 > c then there exist ξ H ∈ ∂ − L H c 1 (V ) and ξ K ∈ ∂ − F K (V ) such that ξ = ξ H + ξ K . In particular there exist ξ j ∈ co Σ j (V ) such that ξ = J j=1 γ j ξ j + ξ K , γ j = ∂ j ψ c 1 (σ(V )). (6.8) Proof. We set a := c 1 − c and we first consider the case when ξ ∈ ∂ − F F (V ) is an element of the Fréchet subdifferential of F .
If u ∈ U J (V ) we have Σ k (V ) = {ξ k } where ξ k = k j=1 u 2 j for every k ∈ {1, · · · , J}. Using (6.8), we can then argue as in (5.30) to obtain As a further step, we will prove that the limiting subdifferential of F contains sufficient information to get the following chain rule property (cf. condition (chain 2 ) of [28, Thm. 3]). Proposition 6.4 (Chain rule). Let V ∈ H 1 (0, T ; H), ξ ∈ L 2 (0, T ; H) such that ξ(t) ∈ ∂ − L F (V (t)) for a.e. t ∈ (0, T ) and F • V is bounded. Then the map F • V is absolutely continuous in [0, T ] and d dt F (V (t)) = ξ(t), V ′ (t) a.e. in (0, T ). (6.15) Proof. Since F • V is bounded and V is bounded as well in H, since V ∈ H 1 (0, T ; H), by Lemma Since H c is a Lipschitz function, the composition t → H c • V (t) is absolutely continuous. Moreover by Lemma 6.2 we can decompose ξ(t) as for a.e. t ∈ (0, T ). (6.16) Since H c is Lipschitz, ξ H is uniformly bounded and therefore the minimal selection t → ∂ • F K (V (t)) is a function in L 2 (0, T ; H). Since K is the difference of a convex function and a quadratic one, we conclude that t → K (V (t)) is absolutely continuous as well.
We can then find a Borel set D ⊂ (0, T ) of full Lebesgue measure such that the functions V, H c • V, K • V, σ j,c • V are differentiable at every t ∈ D, j = 1, · · · , J, and there exist ξ j (t) ∈ ∂ + σ j,c (V (t)) and ξ K (t) ∈ ∂ − F K (V (t)) such that thanks to (6.8). Since σ j,c are concave and K is (−θ)-convex, we have Since ψ c is of class C 1 we clearly have Combining (6.19), (6.18) and (6.17) we get (6.15).
We conclude this section by showing a more refined decomposition of ∂ − L F in the case ϕ satisfies also the structural condition (2.25) of Section 2.F. Lemma 6.5 (Decomposition of the subdifferential of F -II). Let us suppose that all the assumptions of Section 2 are satisfied, including (2.25).
Proof. Let us first consider Claim (1). By Definition 4.1 we know that there exist ̺ > 0 and a positive function ω F : H → R as in (4.1) such that We can now apply (5.26) and find ̺ 1 ∈ (0, ̺) and a positive function ω H : H → R as in (4.1) such that so that choosing δ < ̺ 1 sufficiently small we obtain and |W − V | < δ. (6.24) This implies that (6.23) holds for every W ∈ K ∩ B(V, δ) and therefore ξ − ξ H ∈ ∂ − F K (V ). Claim (2) readily follows: by the definition of limiting subdifferential we can find a sequence V n ∈ K strongly convergent to V and ξ n ∈ ∂ − F F (V n ) weakly convergent to ξ with F (V n ) → F (V ) as n → ∞. We can then select arbitrary u n ∈ U J (V n ) setting ξ n H := J i=1 ∂ i ϕ(λ J (V n ))(u n i ) 2 and ξ n K := ξ n − ξ n H ∈ ∂ − F K (V n ). Up to extracting a subsequence, we may assume that u n → u ∈ U J (V ) strongly in H J and λ J (V n ) → λ J (V ) in Λ J , so that ξ n H → ξ H := J i=1 ∂ i ϕ(λ J (V ))u 2 i strongly in H thanks to the regularity of ϕ. Correspondingly we have ξ n
Convergence of the Minimizing Movement scheme and proof of the main results
We now refer to the construction we introduced in Section 4 (see in particular (4.6), (4.7), (4.8) and Definition 4.3) and we briefly recap the main general properties and estimates from the abstract theory of Minimizing Movements, following [28, Section 4]. As usual, we operate in the setting of Section 2, 2.A-2.E.

7.1. Existence, stability estimates and weak convergence of Generalized Minimizing Movements. We start by proving the existence of Generalized Minimizing Movements in our setting.
Claim (1): the existence of discrete solutions to the Minimizing Movement scheme follows directly from Lemma 6.1. Notice that in our case we did not assume that the sublevels of F (V ) + 1 2τ * |V | 2 are strongly compact as in [28, Lemma 1.2]; however Lemma 6.1 guarantees the weak lower semicontinuity of F and the weak compactness of the sublevels of F (V ) + 1 2τ * |V | 2 . (7.1) is then a simple application of the definition of Fréchet subdifferential (see e.g. [28, (4.29)]). In fact the minimality of V n τ in (4.7) yields (7.6) and (7.1) then follows since for every 1 ≤ n ≤ N (τ ) Claim (2) is a direct application of [28, Prop. 4.6].
Lemma 7.2 (Weak convergence of the Minimizing Movement scheme). Under the same assumptions of Lemma 7.1 from every vanishing sequence k → τ (k) ↓ 0 it is possible to extract a further subsequence (not relabeled) and to find a limit function Proof. First of all we note that V k ,V k satisfy the differential inclusion as in (7.1), the apriori estimates of Lemma 7.1, and the weak convergences (7.9) and (7.10). By Lemma 6.2 we can decompose −V ′ k (t) as the sum of two piecewise constant terms ), (7.12) where S(V k (t)) was defined in (5.22) and (5.23). For the sake of clarity, we now divide the proof in several steps.
Step 1: compactness of A k . Thanks to Proposition 5.5, the image of A k is contained in a compact set C ⊂ H independent of k. For later use, we will introduce By [29, Theorem 3.25], we deduce that C 0,t is a family of compact sets in H, which by definition are also convex, contain the origin, and satisfy C 0,t ⊂ C 0,T for every t ∈ [0, T ], since C 0,t = t T C 0,T and C 0,T is a convex set containing 0.
As a consequence, we first introduce the perturbation C k : (7.14) By definition of C k we have and a similar calculation holds for B: The lower semicontinuity of the norm with respect to the weak convergence yields lim sup Hence (7.13) will follow if we prove the convergence We use a compensated-compactness argument and we introduce the integral function Since for every k ∈ N the sequence A ′ k (t) = e −2θt A k (t) takes values in the compact subset C ⊂ H and thus is uniformly bounded, we deduce that A k is uniformly Lipschitz equicontinuous.
It is also easy to show that A k (t) ∈ C 0,T for every k ∈ N and every t ∈ [0, T ], since by Jensen's inequality All in all, by the Ascoli-Arzelà Theorem, we deduce that (A k ) k is relatively compact in C 0 ([0, T ]; H), and therefore A k → A uniformly and in L 2 (0, T ; H) as k → +∞, (7.18) where A(t) := t 0 e −2θs A(s) ds. An integration by parts then gives with a similar identity involving A, V , and A. Then we combine (7.18), (7.9) and (7.10) to infer we can then pass to the limit in (7.19) and we get (7.17).
Step 3: for a.e. t ∈ (0, T ) we have B(t) ∈ ∂ − K θ (V (t)). Introducing the integral functional (7.20) in the Hilbert space H := L 2 ((0, T ), µ θ , H) associated with the Borel measure µ θ := e −2θt L 1 in (0, T ), since V ∈ D( K θ ) and B ∈ H, we can equivalently prove that for all W ∈ D( Then it is sufficient to use Step 2, the weak lower semicontinuity of K θ in H (since it is strongly lower semicontinuous and (−θ)-convex), the weak convergence of B k , and the strong convergence which means B(t) ∈ ∂ − K θ (V (t)) for a.e. t.
Step 4: V k → V uniformly in C 0 ([0, T ]; H). By the equicontinuity estimate and the weak convergence (7.10), it is sufficient to prove that for all S ∈ (0, T ] lim sup k→+∞ |V k (S)| 2 = |V (S)| 2 . (7.22) Using the identities (7.15) and (7.16) written in the interval [0, S] and taking (7.17) into account, (7.22) is equivalent to Since by Step 3 we have B(t) ∈ ∂ − K θ (V (t)) a.e., the monotonicity property of the subdifferential of a convex function yields and therefore lim inf where we used the weak convergence of B k to B and of V k to V . Proof of Theorem 4.4. The fact that GMM(Φ, V 0 , T ) is not empty just follows from Proposition 7.3. We can then apply [28, Theorem 3], which shows that every element V ∈ GMM(Φ, V 0 , T ) satisfies (4.10), (4.11), (4.12) and (4.13) if F satisfies the Chain Rule property we proved in Proposition 6.4. In fact the compactness assumption in [28, Theorem 3] was just needed to guarantee the existence of an element in GMM(Φ, V 0 , T ), but the proof of the characterization of the limiting subdifferential equation is independent of such an assumption.
Proof of Proposition 4.6. It is sufficient to combine Theorem 4.4 with Corollary 6.3.
Proof of Theorem 4.7. It follows from Theorem 4.4 and (6.21) of Lemma 6.5.
Appendix A. Convergence of eigenvalues and eigenfunctions for Schrödinger potentials
In order to study the behaviour of the eigenvalues of E V with respect to V we will use Mosco convergence in H. Recall that a sequence of functionals Φ n : H → R ∪ {+∞} converges in the sense of Mosco to a limit functional Φ : H → R ∪ {+∞} if the following two conditions hold: (M1) for every sequence (w n ) n∈N ⊂ H weakly converging to w ∈ H we have lim inf n→∞ Φ n (w n ) ≥ Φ(w); (M2) for every w ∈ H there exists a sequence (w n ) n∈N strongly converging to w such that lim n→∞ Φ n (w n ) = Φ(w). Mosco convergence is equivalent to Γ-convergence with respect to the weak and the strong topology of H, see [13, Chapters 12, 13]. Under equi-coercivity (guaranteed in our case by the compactness of the imbedding of V in H), weak and strong Γ-convergence are equivalent and are also related to the uniform convergence of the resolvents.
We split the proof of Lemma 5.1 in two parts: first we prove that the weak convergence of potentials implies the Mosco convergence of the associated functionals, and then show that the Mosco convergence implies the convergence of eigenvalues and eigenfunctions.
Lemma A.1. Let V n ∈ K, n ∈ N, be a sequence weakly converging in H to V ∈ K as n → +∞.
Then the corresponding sequence of quadratic forms E Vn converges in the sense of Mosco to E V .
Proof. We start from condition (M1) and consider a sequence w n weakly converging to w in H such that E Vn (w n ) ≤ E for all sufficiently large n and some finite constant E. In particular E(w n ) is uniformly bounded from above, so that w n converges strongly to w in H and On the other hand, for every k > 0, w n ∧ k converges strongly to w ∧ k in L 4 (D, m) so that Since k > 0 is arbitrary we conclude that Combining (A.1) and (A.2) we obtain E V (w) ≤ E as well. Concerning (M2), we first show that for every w ∈ D(E V ) there exists a sequence (w k ) k∈N in D(E V ) ∩ L 4 (D, m) converging strongly to w in H such that E V (w k ) → E V (w) as k → +∞. It is sufficient to consider an orthonormal basis of eigenfunctions (u h ) h∈N for E V and set On the other hand, for every k ∈ N we have so that a standard diagonal argument yields (M2).
Now we provide the proof of some well-known facts concerning the Mosco convergence and the convergence of eigenvalues.
Definition A.2. For all β > λ min and V ∈ K, the resolvent operator R β V : H → H maps every f ∈ H into the unique solution u of the problem We list here some properties of the resolvent operator: • The operator R β V is continuous. • The operator R β V is compact, thanks to the compact embedding of V into H. • The operator R β V is self-adjoint. • The operator R β V is positive. As a consequence, the spectrum of R β V is real, positive and discrete, and it consists of eigenvalues ordered as 0 ≤ · · · ≤ Λ k (β, V ) ≤ · · · ≤ Λ 1 (β, V ) = R β V L(H) , which are related to the sequence λ k (V ) by the formula Λ k (β, V ) = (β + λ k (V )) −1 . (A.5) The next fundamental lemma relates Mosco convergence to the (uniform) norm convergence of the resolvent operators.
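The relation between the eigenvalues of the operator and those of its resolvent can be checked numerically. The following finite-difference sketch is illustrative only: it assumes the resolvent solves βu + Au = f for A = −d²/dx² + V with Dirichlet conditions on (0, 1), and that (A.5) reads Λ_k(β, V) = (β + λ_k(V))⁻¹; the grid size, sample potential, and value of β are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Finite-difference discretization of A = -d^2/dx^2 + V(x) on (0, 1)
# with Dirichlet boundary conditions (illustrative setup).
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
V = 10.0 * np.sin(np.pi * x) ** 2           # a sample potential V >= 0

# Tridiagonal stiffness matrix for -u'' plus the diagonal potential term.
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2 + np.diag(V)

beta = 1.0                                   # any beta making A + beta*I positive
lam = np.linalg.eigvalsh(A)                  # lambda_k(V), ascending order
R = np.linalg.inv(A + beta * np.eye(n))      # resolvent (A + beta*I)^{-1}
Lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # Lambda_k(beta, V), descending order

# Check Lambda_k(beta, V) = 1 / (beta + lambda_k(V)) for every k.
assert np.allclose(Lam, 1.0 / (beta + lam))
print("largest resolvent eigenvalue:", Lam[0])
```

The descending ordering of `Lam` matches the ordering 0 ≤ ··· ≤ Λ_k ≤ ··· ≤ Λ_1 in the text, with Λ_1 equal to the operator norm of the resolvent.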
Lemma A.3. Let V n , V ∈ K and assume that V n ⇀ V in H. Then for every β > λ min the associated resolvent operators converge, namely Proof. We fix β > λ min . From the definition of operator convergence, for t ≥ 0 fixed, we have for a suitable sequence (f n ) ⊂ H with |f n | ≤ 1, which we can assume to be weakly converging in H to some f ∈ H. We can then split The last term is vanishing, as the resolvent operator is continuous and R β V (f n ) V is uniformly bounded.
The second term is also infinitesimal thanks to [13, Theorem 13.12]. Concerning the first term, since R β Vn (f n − f ) is uniformly bounded in V, it is sufficient to prove its weak convergence to 0 in H. For every g ∈ H we have R β Vn (f n − f ), g = f n − f, R β Vn g → 0 since R β Vn g → R β V g strongly in H and f n − f ⇀ 0. In conclusion, w = u = R β V (f ) and, by the compact embedding of V into H, we conclude that R β Vn (f n ) → R β V (f ) strongly in H; the convergence holds for the whole sequence, since the limit is independent of the chosen subsequence.
Eventually, thanks to the classical theory of linear operators, the norm convergence of the operators implies the convergence of the spectrum, see for example [16, Lemma XI.9.5]. Passing to the limit in the equation R β Vn u n = Λ n u n , where u n is a normalized sequence of eigenfunctions associated with a converging sequence of eigenvalues Λ n , and using the uniform boundedness of R β Vn and the compactness of the embedding of V in H, we can also prove the convergence (possibly up to subsequences) of the eigenfunctions: this concludes the proof of Lemma 5.1.
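Lemma 5.1 can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's setting: a one-dimensional Dirichlet discretization with oscillating potentials V_n(x) = sin(nπx), which converge weakly to 0 in L²(0, 1) (the constraint class K is ignored here). Faster oscillation should bring the low eigenvalues closer to those of the limit operator −d²/dx².

```python
import numpy as np

def dirichlet_eigs(Vfun, npts=800, k=3):
    """Smallest k Dirichlet eigenvalues of -u'' + V u on (0, 1), ascending."""
    h = 1.0 / (npts + 1)
    x = np.linspace(h, 1.0 - h, npts)
    A = (np.diag(np.full(npts, 2.0))
         - np.diag(np.ones(npts - 1), 1)
         - np.diag(np.ones(npts - 1), -1)) / h**2 + np.diag(Vfun(x))
    return np.linalg.eigvalsh(A)[:k]

# Eigenvalues of the weak limit V = 0 (approximately (k*pi)^2).
lam_limit = dirichlet_eigs(lambda x: np.zeros_like(x))

# Distance of the perturbed spectra from the limit spectrum.
gap_slow = np.abs(dirichlet_eigs(lambda x: np.sin(4 * np.pi * x)) - lam_limit).max()
gap_fast = np.abs(dirichlet_eigs(lambda x: np.sin(64 * np.pi * x)) - lam_limit).max()

# Faster oscillation (a potential "closer" to its weak limit) gives a
# spectrum closer to the limit spectrum.
assert gap_fast < gap_slow
print(gap_slow, gap_fast)
```

Because both spectra are computed with the same discretization, the discretization error cancels in the differences, so the decay of `gap_fast` reflects the spectral convergence itself.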
(B.2)
It is clear that if w ∈ Ort k (E) then also Qw ∈ Ort k (E) since Let now b be a symmetric bilinear form on E and let Q ∈ O(k) be an orthogonal matrix. For every w ∈ Ort k (E) with w ′ = Qw we have In particular, if E has finite dimension dim(E) = d the quantity

Corollary B.1. If E ⊂ H is a finite dimensional space with dim(E) = d and w ′ , w ′′ ∈ Ort(E) then |w ′ | 2 (x) = |w ′′ | 2 (x) for m-a.e. x ∈ D.
Proof. Since w ′ , w ′′ are orthonormal bases of E there exists an orthogonal matrix Q ∈ O(d) such that w ′′ = Qw ′ . It is then sufficient to apply (B.3).
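Corollary B.1 admits a direct finite-dimensional check; in the sketch below H = R^N stands in for the function space (functions sampled at N points), and all dimensions and random choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# E is a d-dimensional subspace of R^N.  Two orthonormal bases of E are
# related by an orthogonal matrix; the pointwise density sum_i |w_i(x)|^2
# must not depend on which basis is chosen.
N, d = 30, 4
W1, _ = np.linalg.qr(rng.normal(size=(N, d)))   # first orthonormal basis (columns)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # random orthogonal matrix
W2 = W1 @ Q                                     # second orthonormal basis of E

density1 = (W1 ** 2).sum(axis=1)                # sum_i |w'_i(x)|^2 at each point x
density2 = (W2 ** 2).sum(axis=1)                # sum_i |w''_i(x)|^2
assert np.allclose(density1, density2)
print("pointwise density is independent of the orthonormal basis")
```

The identity holds because W2 W2ᵀ = W1 Q Qᵀ W1ᵀ = W1 W1ᵀ is the orthogonal projection onto E, whose diagonal is exactly the pointwise density.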
Appendix C. Basic facts concerning non-smooth differential calculus Let C be a compact metrizable topological space, let f : C → H be a continuous map with image R := f (C), and let g : C → R be a lower semicontinuous map. We denote by K (R) the space of compact subsets of R. We set F (v) := min u∈C v, f (u) + g(u) for every v ∈ H, (C.1) and we denote by M (v) the set of u ∈ C where the minimum in (C.1) is attained.
Lemma C.1. F is a Lipschitz concave function whose superdifferential is given by in particular, for every ξ ∈ ∂ + F (v) there exists a Borel probability measure µ ∈ P(C) such that supp(µ) ⊂ M (v) and ξ = ∫ C f (u) dµ(u). (C.3) The map ∂ + F : H → K (R) is weakly-strongly upper semicontinuous and satisfies v n ⇀ v, ξ n ∈ ∂ + F (v n ) ⇒ (ξ n ) n∈N is strongly relatively compact in H, and every limit point ξ of (ξ n ) n∈N belongs to ∂ + F (v). (C.4)
F is Fréchet differentiable at v 0 if and only if f (M (v 0 )) is a singleton.
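Before the proof, a discrete sanity check of the superdifferential property (here C is a finite set; f, g, and all sizes are illustrative assumptions, not part of the lemma): every f(u*) with u* a minimizer in (C.1) satisfies the supergradient inequality F(w) ≤ F(v) + ⟨f(u*), w − v⟩.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete sketch of (C.1): C = {0, ..., m-1}, f maps C into R^d (standing
# in for H), g : C -> R, and F(v) = min_{u in C} <v, f(u)> + g(u).
m, d = 50, 5
f = rng.normal(size=(m, d))
g = rng.normal(size=m)

def F(v):
    return float(np.min(f @ v + g))

def argmin_u(v):
    return int(np.argmin(f @ v + g))

# F is a minimum of affine functions of v, hence concave, and f(u*) for any
# minimizer u* is a supergradient: F(w) <= F(v) + <f(u*), w - v> for all w.
v = rng.normal(size=d)
xi = f[argmin_u(v)]
for _ in range(100):
    w = rng.normal(size=d)
    assert F(w) <= F(v) + xi @ (w - v) + 1e-9
print("supergradient inequality verified at 100 random points")
```

The inequality is exact: F(w) ≤ ⟨w, f(u*)⟩ + g(u*) = F(v) + ⟨f(u*), w − v⟩, since u* attains the minimum at v.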
The representation (C.3) is an immediate consequence of the continuity of f and the Krein-Milman Theorem.
Suppose now that v n ⇀ v in H and let ξ n ∈ ∂ + F (v n ); we can find a Borel probability measure µ n on C such that supp(µ n ) ⊂ M (v n ) and ξ n = ∫ C f (u) dµ n (u).
Since C is compact and metrizable, we can find a subsequence k → n(k) and a limit measure µ such that µ n(k) → µ weakly in P(C). For every point u of the support of µ there exists a sequence of points u n ∈ supp(µ n ) ⊂ M (v n ) converging to u; passing to the limit in the family of inequalities v n , f (u n ) + g(u n ) ≤ v n , f (w) + g(w) for every w ∈ C, we get v, f (u) + g(u) ≤ v, f (w) + g(w) for every w ∈ C, so that u ∈ M (v). It follows that, setting ξ := ∫ C f (u) dµ(u), we then conclude that ξ n(k) → ξ strongly in H as k → ∞.
Concerning the Fréchet differential of F , it is obvious that if F is differentiable at v 0 then ∂ + F (v 0 ) reduces to a singleton. To prove the converse property, let ξ 0 be the unique element of f (M (v 0 )): we have just to show that ξ 0 ∈ ∂ − F (v 0 ). By (C.4), for every ε > 0 we can find δ > 0 such that f (M (w)) ⊂ B(ξ 0 , ε) for every w ∈ B(v 0 , δ). For every w ∈ B(v 0 , δ) and ξ ∈ f (M (w)) we thus have
Chronic Cavitary Pulmonary Histoplasmosis in an Immunocompetent Patient
Histoplasma capsulatum is a fungal organism that causes systemic histoplasmosis. It is commonly asymptomatic in healthy immunocompetent individuals. The clinical symptoms of chronic cavitary histoplasmosis are typically seen in the immunodeficient population, particularly in smokers with pre-existing structural lung disease. We report a case of chronic cavitary histoplasmosis in an immunocompetent patient from an endemic area without pre-existing structural lung pathology. She presented complaining of right hypochondrial pain and had no history of respiratory symptoms nor history suggestive of immunosuppression, tuberculosis, or recent travel. CT scan revealed a cavitary lung lesion and a hilar mediastinal mass. Biopsies obtained by bronchoscopy revealed signs of necrosis, granulomas, and the presence of fungal organisms consistent with histoplasmosis. Histoplasma antibodies by complement fixation for yeast antibodies test were positive establishing the diagnosis of chronic cavitary pulmonary histoplasmosis (CCPH). She was then started on itraconazole with good tolerance. On follow-up three months later, a chest CT done along with measurement of inflammatory markers and liver enzymes demonstrated complete clinical recovery. This case emphasizes the importance of expanding our current understanding of the clinical presentation and manifestations of histoplasmosis beyond the conventional assumption that severe disease only affects immunocompromised individuals.
Introduction
Histoplasma capsulatum, a dimorphic fungus, is the causative agent of histoplasmosis [1]. This fungus is most commonly found in Central and North America, especially in the Mississippi and Ohio River valleys [1]. However, it can be found in various other regions around the world [2]. The burden of the disease depends mainly on the amount of microconidia inhaled and whether the host is immunocompetent or immunocompromised. The severity of symptoms is considerably variable; most healthy, immunocompetent individuals will develop mild respiratory symptoms or even remain asymptomatic. The invasive and chronic forms of histoplasmosis usually occur in immunodeficient patients, and the complications of these forms can be severe and sometimes even fatal [3]. Patients with underlying pulmonary diseases are at increased risk for developing chronic histoplasmosis, which is usually associated with lung cavitations [4]. These cavities are likely to enlarge and involve many areas within the lungs. Affected individuals present with a wide range of respiratory symptoms, including productive cough, shortness of breath, hemoptysis, chest pain, fevers, and weight loss. An uncommon yet unfortunate complication of histoplasmosis infection is fibrosing mediastinitis, also known as sclerosing mediastinitis, which is characterized by an extensive fibrotic reaction in the mediastinum [5].
We present a case of chronic cavitary histoplasmosis in an immunocompetent nonsmoker patient in the absence of underlying structural lung disease, who presented with unilateral hypochondriac abdominal pain.
Case Presentation
A female patient in her thirties presented to the emergency department (ED) complaining of right hypochondrial pain, for which abdominal and chest CT scans were done and revealed an incidental finding of a cavitary lesion in the lower lobe of the right lung. The patient denied any history of respiratory symptoms or constitutional symptoms (weight loss, loss of appetite, night sweats, fever, or itching). She did not have any history suggestive of aspiration, immunosuppression, tuberculosis, or other atypical infections, nor any symptoms suggesting rheumatological, connective tissue, or autoimmune disease.
The patient also had no dyspnea, cough, expectoration, fever, chills, hemoptysis, or any history of recurrent chest infections. Past medical history was significant for childhood asthma, a mild remote history of coronavirus disease 2019 (COVID-19) infection only involving the upper airways, as well as an episode of viral sinusitis, migraine, hyperlipidemia, obesity, and depression. Past surgical history was non-revealing. Social history revealed that the patient was not a smoker, had no pets, didn't drink alcohol or use illicit drugs, and had no recent travel history. Occupational and environmental history was unrevealing. The patient had no family history of any connective tissue disease, malignancy, or other pulmonary diseases. The physical examination was otherwise non-contributory. Vital signs in the ED were within normal limits.
Upon admission, a CT scan of the chest was ordered and revealed a right hilar mass along with a right lower lobe abscess, indicating an indolent infection (Figure 1A, 1B). As part of further evaluation, a transthoracic echocardiogram (TTE) was negative for vegetations or valvular lesions, ruling out infective endocarditis. In addition, the patient underwent bronchoscopy with bronchoalveolar lavage (BAL), along with endobronchial ultrasound with right lung and hilar mass transbronchial needle aspiration biopsies. A few days following discharge, the patient was re-admitted for pneumonia secondary to the bronchoscopy procedure, for which she received a course of oral antibiotics. A CT scan done upon readmission showed an interval increase in the size of the right hilar mass, along with post-obstructive consolidation and volume loss in the right middle lobe (Figure 2A, 2B). Moreover, the cavitary lesion in the right lower lobe (RLL) had increased in size compared to the prior scan, with evidence of constriction in the bronchi of the right middle lobe (RML) and the anterior segment of the RLL. Fibrous tissue was also seen encasing the pulmonary arteries and the RML pulmonary vein. Taken together, these radiological findings were suggestive of early signs of fibrosing mediastinitis. Biopsies obtained from the bronchoscopy procedure were sent for pathology and showed signs of necrosis, fibrosis, and caseating and non-caseating granulomas. A culture was obtained and four weeks later revealed white-yellowish colonies of fungal organisms with a central raised area surrounded by a flat spreading outer zone, consistent with histoplasmosis. A full workup was done, including a complete blood count (CBC), C-reactive protein (CRP), rheumatoid factor (RF), antinuclear antibody (ANA), and antineutrophil cytoplasmic antibodies (ANCA), along with blood, urine, and sputum cultures, which excluded rheumatological and infectious causes.
Laboratory and radiological workups for malignancy were also negative. Workup for immunodeficiency was done, including immunoglobulin levels, human immunodeficiency virus (HIV) testing, complement levels, delayed-type hypersensitivity skin testing using Candida antigen, and flow cytometry, all of which were within normal limits. In addition, the infectious BAL panel returned negative for viral, fungal, and bacterial causes. Urine and serum Histoplasma antigen tests were negative. Histoplasma antibodies by complement fixation for yeast antibodies test were positive at 1:64, while mycelial form antibodies were negative. Histoplasma antibodies by fungal immunodiffusion were negative too. The biopsy findings combined with the positive Histoplasma serology confirmed the diagnosis of histoplasmosis.
The clinical picture as well as the laboratory, biopsy, and imaging findings of the patient led to a diagnosis of chronic cavitary pulmonary histoplasmosis (CCPH). The patient was started on oral itraconazole with good tolerance. Two-week trough level showed a marginal therapeutic level. The dose of itraconazole was increased.
A follow-up CT scan was done after three months and showed gradual resolution of the right hilar lesions and RLL abscess. Repeat measurements of inflammatory markers and liver enzymes were normal. Consultation with the infectious disease department was also undertaken.
On follow-up, she reported complete clinical recovery with no signs or symptoms of pleuritic chest pain, abdominal pain, cough, or expectoration. The patient continued taking oral itraconazole therapy, trough level was always therapeutic. She was advised to continue itraconazole for 12 months with strict follow-up for signs of recurrence upon stopping the medication.
Discussion
Histoplasma capsulatum is a non-encapsulated dimorphic fungus that causes histoplasmosis, a systemic fungal infection. Infection with Histoplasma is most prevalent in North and Central America, but the organism also occurs in other diverse areas of the world. In the United States, it is most prevalent around the Mississippi and Ohio River Valleys. Soil containing a great amount of bird or bat droppings, especially next to chicken coops and in caves, remains the main source of Histoplasma [1].
Histoplasmosis has been reported in immunocompromised individuals, including those with HIV or diabetes, alcoholics, transplant recipients, and those on immunosuppressive medications and biologics [6]. Fewer than 1% of those exposed to the fungus develop clinical disease, depending on the immune status of the patient and the amount of exposure [7].
Infection with Histoplasma occurs mainly by inhalation of contaminated soil containing microconidia, the infectious form of Histoplasma, in endemic areas during day-to-day pursuits. Once in the lower airways, macrophages ingest the microconidia, which transform into yeast and multiply inside, then spread by means of the reticuloendothelial system. Antigen-presenting dendritic cells recognize the organism and present it to T-lymphocytes, which stimulates their proliferation and the release of cytokines such as tumor necrosis factor-alpha (TNF-a) and interferon-gamma, as well as interleukin 12 from macrophages. The end result is granuloma formation, which helps contain the organism and prevent its dissemination [8]. Thus, any defect in cell-mediated immunity contributes to Histoplasma infection. As mentioned, after thorough evaluation, our patient had no evidence of immune deficiency and thus represented an unusual case of chronic cavitary histoplasmosis in an immunocompetent patient. It is possible that immunocompetent individuals may develop progressive disseminated histoplasmosis due to endogenous reactivation of the infection several years later, similar to what is observed in TB [9].
Data extrapolated from a multistate epidemiological study of histoplasmosis showed that 56% of patients presenting with mildly symptomatic disease were immunocompetent, while only 30% of immunocompromised patients had comparable severity [10]. Cough was the most common presenting symptom in both groups. Furthermore, being in an immunocompromised state was associated with a 78% increase in the likelihood of developing severe disease requiring hospitalization compared to those with immunocompetent status [10]. Interestingly, although being immunocompetent is protective against developing severe histoplasmosis, factors such as age, high dosage of exposure to the organism, and non-White race possibly relating to lower socioeconomic status, poor access to medical care, and genetics, all increase the risk for developing advanced disease requiring hospitalization [10].
Exposure to Histoplasma is especially common in individuals residing in endemic areas; however, clinically symptomatic infection is not. A large number of those exposed do not develop symptoms or only develop mild disease not recognized as histoplasmosis [1]. Histoplasmosis can be clinically divided into acute pulmonary infection, chronic cavitary pulmonary infection, extrapulmonary progressive disseminated infection, and mediastinal lymphadenitis (Table 1) [11]. Interestingly, chronic cavitary histoplasmosis has a predilection for older patients and is more likely to occur in smokers with pre-existing structural lung diseases such as emphysematous lungs [1,11]. It is strikingly atypical for a patient without pre-existing pulmonary disease to develop this type of histoplasmosis [1], which is evident in our 39-year-old nonsmoker patient. We suggest that a combination of genetic and environmental factors contributed to the development of chronic cavitary histoplasmosis in our patient, possibly owing to cytokine signaling disruption, particularly involving interleukin-1, interleukin-2, interferon-gamma, and TNF-a. This could have caused a transient immunocompromised state leading to ineffective granuloma formation and maintenance, with consequent dissemination of the organism, a dysregulated host inflammatory response, and chronic parenchymal lung disease with cavitary destruction. Cavitary histoplasmosis presents with fatigue, fever, night sweats, anorexia, and weight loss. More specific respiratory symptoms include cough, sputum production, hemoptysis, and shortness of breath, which can mimic a chronic obstructive pulmonary disease (COPD) exacerbation in these patients. The differential diagnosis includes primarily TB, as the presentation is very similar, with chest imaging demonstrating large cavitary lesions with fibrosis.
A characteristic feature of chronic cavitary histoplasmosis is the "marching cavity", where the cavities progress in size, due to continuing necrosis, to involve the entire lobe of the lung [1].
Fibrosing mediastinitis is a rare but fatal complication of pulmonary histoplasmosis, usually occurring in young adults aged 20-40 years, with a slight preponderance in women [1]. It has a reported prevalence of three per 100,000 cases [23]. The pathogenesis involves an exaggerated production of fibrous tissue with an encasing of the mediastinal structures, such as the pulmonary vessels, superior vena cava, and airways. It is an idiosyncratic reaction to Histoplasma antigens that occurs in a particular group of patients, which insinuates a possibly abnormal immunological host response in these patients to Histoplasma infection [1,23].
Reviewing the literature on the etiology of fibrosing mediastinitis, it is evident that wide variations exist throughout different geographical locations. For instance, a recent review showed that Histoplasmosis, as an underlying cause, has incidences of 0-83% in different studies [24]. Figure 3 summarizes the documented causes of fibrosing mediastinitis.
FIGURE 3: Causes of fibrosing mediastinitis
The symptoms are due to the slowly progressive encroachment of mediastinal structures and present as increasing dyspnea, cough, hemoptysis, and chest pain, without the systemic signs of infection such as fever, chills, and night sweats [1]. The diagnosis is made clinically and radiographically, and by excluding malignancy, radiation therapy, and thromboembolic disease [23]. 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) scans commonly show heightened metabolic activity in foci with fibrosing mediastinitis [25]. It is worthwhile to note that Westerley et al. utilized off-label rituximab therapy, in conjunction with prednisone at the time of the infusion, in three patients with documentation of chronic inflammation and fibrosis on biopsy, and evidence of increased metabolic activity on FDG-PET/CT. All three patients had a favorable therapeutic outcome with clinical improvement and minimization of the fibrous tissue burden, as well as reduction of the metabolic activity on PET/CT scan. Consequently, it was hypothesized that B lymphocytes might play a role in the evolution of fibrosing mediastinitis and are associated with the increased metabolic activity visible on FDG-PET/CT scan. The use of rituximab can attenuate the associated metabolic activity, prevent further disease progression, and ameliorate clinical symptoms [25].
Our patient's repeat CT revealed signs of vascular involvement suggestive of early fibrosing mediastinitis. The gold standard for diagnosing histoplasmosis is culture on Sabouraud agar demonstrating the organism, with a sensitivity of 67% in chronic pulmonary histoplasmosis. A more rapid diagnosis can be made by histopathology of the biopsy from the affected tissue revealing necrotizing granulomas, with a sensitivity of 75%. In the majority of chronic pulmonary histoplasmosis cases, isolation of Histoplasma from bronchoscopy and sputum samples can confirm the diagnosis. Moreover, Histoplasma can be detected in bronchial lavage washings. Antigen testing in urine and/or serum is usually negative due to the low burden of the organism, while serologic testing is positive in almost all cases and provides the cornerstone for diagnosis in up to one-quarter of the cases [5]. The most commonly used serologic tests are immunodiffusion (ID) and complement fixation (CF), with a reported sensitivity of 70% in patients with culture-confirmed histoplasmosis [26]. This is in comparison to enzyme immunoassay (EIA), which has lower sensitivity, cost, and labor intensity compared to CF and ID. Noticeably, both the sensitivity and specificity of CF and ID are influenced by the patient population being tested, the reagents used during testing, and the specific technique used by laboratory personnel [26].
Antifungal treatment is mandated for all cases of chronic pulmonary histoplasmosis to decrease the mortality and to halt the progression of the disease, unlike most of the other histoplasmosis syndrome presentations in which observation is the best-recommended approach. Figure 4 illustrates the different treatment regimens for each histoplasmosis syndrome according to the 2007 update of the Infectious Diseases Society of America clinical practice guidelines [27]. In regard to CCPH, the recommended treatment duration is controversial; however, updated guidelines recommend therapy with itraconazole for 12-24 months, and/or until no additional improvement is noticeable on CT. Follow-up is advised for one to two years after treatment is discontinued as 10-20% of the cases relapse off therapy [5].
Conclusions
We described a case of CCPH in a young immunocompetent patient without a smoking history, who showed early signs of fibrosing mediastinitis. Although CCPH generally affects immunocompetent individuals with underlying structural lung disease, it can infrequently present in the absence of an underlying structural lung pathology. The pathogenesis of CCPH involves defects in cellular-mediated immunity and granuloma formation; however, how it affects healthy immunocompetent individuals without underlying lung disease remains an enigma. The treatment duration for CCPH is controversial; nonetheless, a prolonged course of antifungal treatment is recommended, in addition to strict follow-up to detect disease recurrence upon discontinuation of the medication. It is important to emphasize that untreated or partially treated cases often progress to marching cavitary disease with panlobular involvement and/or fibrosing mediastinitis, a rare but fatal complication. Rituximab has demonstrated promising outcomes in the management of fibrosing mediastinitis, which should be explored further by conducting randomized controlled trials.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Using Self-Assembling Peptides to Integrate Biomolecules into Functional Supramolecular Biomaterials
Throughout nature, self-assembly gives rise to functional supramolecular biomaterials that can perform complex tasks with extraordinary efficiency and specificity. Inspired by these examples, self-assembly is increasingly used to fabricate synthetic supramolecular biomaterials for diverse applications in biomedicine and biotechnology. Peptides are particularly attractive as building blocks for these materials because they are based on naturally derived amino acids that are biocompatible and biodegradable; they can be synthesized using scalable and cost-effective methods, and their sequence can be tailored to encode formation of diverse architectures. To endow synthetic supramolecular biomaterials with functional capabilities, it is now commonplace to conjugate self-assembling building blocks to molecules having a desired functional property, such as selective recognition of a cell surface receptor or soluble protein, antigenicity, or enzymatic activity. This review surveys recent advances in using self-assembling peptides as handles to incorporate biologically active molecules into supramolecular biomaterials. Particular emphasis is placed on examples of functional nanofibers, nanovesicles, and other nano-scale structures that are fabricated by linking self-assembling peptides to proteins and carbohydrates. Collectively, this review highlights the enormous potential of these approaches to create supramolecular biomaterials with sophisticated functional capabilities that can be finely tuned to meet the needs of downstream applications.
Introduction
Organization of individual molecules into a higher-ordered supramolecular structure, normally referred to as "self-assembly" [1], is a hallmark of living systems that is increasingly being used to fabricate synthetic biomaterials [2][3][4][5][6][7]. Self-assembly is mediated by weak physical interactions between molecules, including hydrogen bonds, ionic bonds, hydrophobic interactions, and van der Waals interactions [8,9]. The accumulation of these weak interactions results in a stable ordered supramolecular structure [10], as witnessed through DNA hybridization and protein folding. This supramolecular order can establish unique functional properties. For example, folded proteins can catalyze reactions (e.g., enzymes) or recognize ligands (e.g., transmembrane receptors) with specificity and selectivity that are not seen when the same protein is denatured. Inspired by these natural examples, significant efforts are now focused on designing synthetic molecules, such as peptides, peptoids, oligomers, and polymers that can self-assemble into nano-scale structures with different morphologies (Figure 1) [11][12][13][14]. In turn, synthetic nanoparticles, nanofibers, and nano-vesicles fabricated via self-assembly can be employed as three-dimensional scaffolds, vehicles, and carriers for diverse applications, including drug delivery, tissue engineering, biosensors, stimuli-responsive materials, and vaccine development [12,[15][16][17][18][19][20][21].
One example from Bian and colleagues created a fusion of the N-cadherin mimetic peptide (HAVDI) and the self-assembling KLD peptide (Ac-KLDLKLDLKLDL), referred to as "KLD-Cad", to fabricate self-assembled hydrogels that promote chondrogenesis of human mesenchymal stem cells (hMSCs) [68]. In a second paper, they elucidated the mechanism by which KLD-Cad hydrogels induced chondrogenesis of hMSCs [69]. Specifically, KLD-Cad peptide or KLD-Scr (Ac-AGVIDHGKLDLKLDLKLDL, used here as a negative control) was first mixed with KLD peptide in sterilized phosphate-buffered saline (PBS) to obtain precursor solutions. These mixtures were then combined with hMSCs and incubated at 37 °C for 35 min to form stable self-assembled hydrogels (Figure 2a). SEM images of the obtained free-standing hydrogels demonstrated no significant differences in the nanofibrous structure for KLD-Cad or KLD-Scr (Figure 2a). KLD-Cad-containing hydrogels induced higher chondrogenic gene expression levels (Figure 2b) and a greater amount of cartilaginous matrix (Figure 2c) after 14 days of in vitro culture of hMSCs. Lastly, canonical Wnt signaling was inhibited in cells cultured on KLD-Cad hydrogels but not KLD-Scr hydrogels, suggesting that knockdown of Wnt signaling promoted chondrogenic differentiation of hMSCs (Figure 2d). Collectively, these studies highlight the promise of using functional peptide assemblies to control cell phenotype and function by engaging specific cell surface receptors.
Another recent example from Stupp and co-workers [70] reported self-assembled peptide amphiphiles that can activate the tyrosine kinase B (TrkB) receptor on primary cortical neurons by presenting a peptide that mimics brain-derived neurotrophic factor (BDNF) protein. The BDNF mimetic peptide was covalently conjugated to a peptide amphiphile (PA) consisting of two glutamic acid residues, two alanine residues, two valine residues, and an alkyl tail of 16 carbons (E 2 PA) via a poly(ethylene glycol) 6 (PEG 6 ) spacer ( Figure 3a). To form nanofibers, the BDNF mimetic PA was mixed with non-modified E 2 PA at 10 mol.% (Figure 3a). BDNF-mimetic nanofibers activated the TrkB receptor of primary cortical neurons to a similar extent as BDNF protein in vitro. In contrast, nanofibers lacking the BDNF-mimetic peptide, as well as BDNF PA in the unassembled form, did not activate the TrkB receptor ( Figure 3b). After 30 days of in vitro culture, cells treated with BDNF mimetic PA demonstrated enhanced expression of neuronal maturation markers compared to blank PA or linear BDNF PA (Figure 3c) and comparable electrical activity to cells treated with native BDNF protein ( Figure 3d). Cortical neurons cultured on BDNF mimetic PA scaffolds demonstrated the highest degree of cell maturation among all of the scaffolds tested ( Figure 3e). An interesting finding reported in this paper was that the BDNF mimetic peptide was only active when it was presented on a self-assembled nanofiber. The authors suggested that this may be due to multivalent presentation of the ligand on the nanofiber which facilitates receptor dimerization similar to native dimeric BDNF. Another recent example from Stupp and co-workers [70] reported self-assembled peptide amphiphiles that can activate the tyrosine kinase B (TrkB) receptor on primary cortical neurons by presenting a peptide that mimics brain-derived neurotrophic factor (BDNF) protein. 
Figure 2. (c) Type II collagen content in KLD, KLD-Cad, and KLD-Scr hydrogels after 14 days of chondrogenic culture compared to non-chondrogenic KLD hydrogels cultured in basal growth media. A notably higher amount of Type II collagen was observed in KLD-Cad-containing hydrogels compared with other groups. (d) Quantitative analysis of β-catenin and LEF-1 gene expression by hMSCs. Gene expression was significantly inhibited by KLD-Cad hydrogels on day 3. *: p < 0.05, **: p < 0.01, ***: p < 0.001. Adapted with permission from [69]. Copyright 2017, Elsevier publishing.
Figure 3. (a) Chemical structure of the cyclic brain-derived neurotrophic factor (BDNF) mimetic peptide. Representative cryo-TEM image of nanofibers derived from BDNF peptide amphiphiles (PA) co-assembled at 10 mol.% with E2 PA. β-sheet fibrous materials were obtained via assembly of BDNF PA and E2 PA. (b) Western blot densitometry analysis of phosphorylated tyrosine kinase B (p-TrkB) activation by cells treated with BDNF peptide, E2 PA, E2 + E4, linear BDNF, BDNF PA, and BDNF protein for 6 h in vitro. The BDNF PA-treated group showed comparable activation to cells treated with BDNF protein and a significantly higher response relative to all other groups. (c) Representative confocal images of neuronal cells treated with BDNF peptide, E2 PA, BDNF PA, and BDNF protein for 24 h in vitro.
Red fluorescence represents the axonal marker, pan-axonal neurofilament protein (SMI312), expressed on the cell surface, while green fluorescence represents the dendritic marker, microtubule-associated protein 2 (MAP-2). The nucleus was stained using 4′,6-diamidino-2-phenylindole (DAPI) (blue fluorescence). A notable increase in axon length was observed when cells were treated with BDNF PA or BDNF protein, which suggests maturation of neuronal cells activated by TrkB receptor binding. (d) Raster plots show enhanced electrical activity for cells treated with BDNF PA or BDNF protein over 30 days of in vitro culture. (e) Representative confocal images of neuronal cells cultured on three-dimensional (3D) PA gel scaffolds for one week in vitro. Green fluorescence represents the dendritic marker MAP-2 on the cell surface, while red fluorescence represents the neuronal marker, neuron-specific class III beta-tubulin (Tuj-1). Extended neurites and a homogeneous neuronal network were observed in all gels tested. Furthermore, significantly higher MAP-2 expression was observed for cells cultured on BDNF PA gels or native BDNF + E2 PA gels when compared to other groups. (*) p < 0.05, (**) p < 0.01, and (***) p < 0.001. Adapted with permission from [70]. Copyright 2018, American Chemical Society publishing.
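Co-assembly formulations like those above (e.g., BDNF mimetic PA mixed with E2 PA at 10 mol.%) are typically prepared by combining peptide stocks at a target molar ratio. As a minimal sketch of the arithmetic involved (the concentrations and volumes below are placeholders, not the authors' protocol), a small helper can convert a desired mol.% into stock volumes:

```python
def coassembly_volumes(total_vol_ul, functional_molpct,
                       stock_func_mM, stock_filler_mM, final_mM):
    """Volumes (uL) of functional-peptide stock, filler-peptide stock, and
    buffer needed to reach `final_mM` total peptide at `functional_molpct`
    mol.% functional peptide. Unit note: mM * uL = nmol."""
    n_total = final_mM * total_vol_ul
    n_func = n_total * functional_molpct / 100.0
    n_fill = n_total - n_func
    v_func = n_func / stock_func_mM
    v_fill = n_fill / stock_filler_mM
    v_buffer = total_vol_ul - v_func - v_fill
    if v_buffer < 0:
        raise ValueError("stocks too dilute for the requested final concentration")
    return v_func, v_fill, v_buffer

# Hypothetical numbers: 100 uL at 1 mM total peptide, 10 mol.% functional PA,
# starting from 5 mM stocks of each peptide.
v_func, v_fill, v_buf = coassembly_volumes(100, 10, 5.0, 5.0, 1.0)
```

The same calculation applies to any of the two-component mixtures discussed in this review; only the stock concentrations and the target mol.% change.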
Moving Beyond Peptides as the Functional Ligand
The preceding section highlighted recent examples of using self-assembled peptide nanofibers as scaffolds to guide cell phenotype and function via presentation of peptide ligands that bind to specific cell surface receptors. This is a rich and active area of research that is well documented in other excellent recent reviews, which we direct interested readers to [29,71–74]. Here, we shift our focus toward the increasing use of self-assembling peptides to integrate proteins or carbohydrates into supramolecular biomaterials.
Glycosylated Nanomaterials Fabricated from Carbohydrate-Modified Self-Assembling Peptides
Highly abundant in nature, carbohydrates not only provide an energy source for cell metabolism, they also specifically interact with a broad range of biomolecules, including lectins, growth factors, and other carbohydrates [75,76]. Binding events involving carbohydrates are often weak, with dissociation constants in the milli- to micromolar range; however, carbohydrate-ligand interactions can be significantly enhanced by multivalency, often referred to as the "glycocluster effect" [77]. Inspired by this, conjugates of carbohydrates and peptides (i.e., "glycopeptides") [78–80], as well as carbohydrates and polymers (i.e., "glycopolymers") [81–83], began receiving attention as synthetic glycoclusters a few decades ago. With advances in chemistry, as well as increased understanding of lectin-glycan interactions, the focus shifted toward fabricating glycopeptides that selectively bind to specific lectins by tailoring carbohydrate chemistry and physical presentation [84,85]. For an overview of the synthetic methodologies used to prepare glycopeptides, we direct readers to an excellent review published elsewhere [86]. Here, we survey recent advances in glycosylated nanomaterials fabricated from self-assembling glycopeptides, which are finding increasing use as biomaterials that can regulate cell behavior through interactions with carbohydrate-binding proteins.
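The glycocluster effect can be pictured with a deliberately crude independent-sites model: if a single carbohydrate-lectin contact is occupied with probability given by a Langmuir isotherm, the chance that at least one of n clustered contacts is engaged grows rapidly with n. This toy calculation (not a quantitative model of any system cited here) illustrates why millimolar single-site affinities can still yield strong avidity:

```python
def single_site_occupancy(ligand_M, kd_M):
    # Langmuir isotherm for one carbohydrate-lectin contact
    return ligand_M / (ligand_M + kd_M)

def cluster_occupancy(ligand_M, kd_M, n):
    # Probability that at least one of n independent, identical contacts
    # is engaged -- a crude upper-bound picture of the glycocluster effect.
    p1 = single_site_occupancy(ligand_M, kd_M)
    return 1.0 - (1.0 - p1) ** n

# A weak single interaction (Kd = 1 mM) at 10 uM ligand is ~1% occupied,
# whereas a 100-valent display is engaged most of the time in this model.
mono = cluster_occupancy(10e-6, 1e-3, 1)
multi = cluster_occupancy(10e-6, 1e-3, 100)
```

Real avidity gains also involve rebinding and effective-concentration effects, so this independence assumption should be read as illustrative only.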
One emerging application of glycopeptides is to create nanomaterials that can mediate cell adhesion or activate cell signaling events by interacting with carbohydrate-binding receptors [87][88][89]. For example, Guler and co-workers used glycopeptide amphiphiles to fabricate glycosaminoglycan (GAG)-like nanomaterials. The resulting assemblies bound to cluster of differentiation 44 (CD44) receptors and promoted chondrogenic differentiation of MSCs [88]. The same group also created an extracellular matrix (ECM)-mimicking scaffold using glycopeptide amphiphiles, which enhanced MSC adhesion. By altering the presentation of different functional groups, these scaffolds could induce differentiation of MSCs into brown adipocytes [89]. Additionally, they developed a self-assembled mannosylated peptide amphiphile decorated with antigen-mimetic GM3-lactone molecules as the basis for vaccines that target dendritic cells and induce their maturation [87].
Galectins are a family of soluble carbohydrate-binding proteins that regulate cell behavior in various healthy and pathological processes, such as integrin-mediated cell adhesion and migration [90], inflammation and its resolution [91], T-cell activation [92], and viral infection [93]. Galectins bind to β-galactosides, such as N-acetyllactosamine and related variants found on laminin, type IV collagen, and various cell membrane glycoproteins. Hudalla and co-workers used a peptide self-assembly strategy to develop synthetic glycoclusters that can bind to galectins and inhibit their activity as extracellular signals [94]. Their approach was based on a variant of the β-sheet fibrillizing peptide, QQKFQFQFEQQ (Q11), which has the monosaccharide N-acetylglucosamine (GlcNAc) conjugated to an asparagine residue added at the N-terminus of the peptide. GlcNAc-Q11 assembles into β-sheet nanofibers with similar morphology as Q11 nanofibers (Figure 4a). GlcNAc groups on the nanofiber can be converted to N-acetyllactosamine (LacNAc) via a glycosyltransferase enzyme in the presence of a sugar donor without disrupting nanofiber formation (Figure 4b). LacNAc-Q11 nanofibers bound Galectin-1 with higher affinity than GlcNAc-Q11 nanofibers or soluble β-lactose (Figure 4c,d). Due to this increased binding affinity, LacNAc-Q11 nanofibers inhibited T-cell agglutination and metabolic activity loss induced by Galectin-1 more effectively than soluble β-lactose or thiodigalactoside, a synthetic small-molecule Galectin-1 inhibitor (Figure 4e).
One limitation of LacNAc is that it can interact with all members of the galectin family. To create nanofibers that selectively recognize Galectin-3, Hudalla and co-workers adapted their strategy to replace LacNAc on Q11 nanofibers with N,N′-diacetyllactosamine (LacDiNAc), a disaccharide that selectively binds to Galectin-3 [95]. LacDiNAc-Q11 nanofibers bound Galectin-3 with similar affinity as LacNAc-Q11 nanofibers but demonstrated no affinity for Galectin-1. LacDiNAc-Q11 nanofibers can inhibit T-cell apoptosis induced by Galectin-3; however, their results demonstrated that serum glycoproteins outcompete Galectin-3 binding to LacDiNAc-Q11 nanofibers, which diminishes their inhibitory activity. Competitive interactions between Galectin-3, serum glycoproteins, and synthetic multivalent glycoclusters may have important implications for developing better Galectin-3 inhibitors.
In addition to the carbohydrate type, the density and valency of carbohydrates in a glycocluster can also influence protein binding specificity and affinity. Using GlcNAc-Q11, Hudalla and co-workers studied relationships between lectin binding and carbohydrate display on peptide nanofibers (Figure 5a) [96]. Nanofibers with a range of carbohydrate densities and valencies were fabricated by mixing GlcNAc-Q11 and Q11 peptides at different molar ratios. Moderate carbohydrate densities provided optimal binding kinetics and extent of binding for both wheat germ agglutinin (WGA) and Griffonia simplicifolia II (GS II), independent of carbohydrate valency (Figure 5b). Due to the increased binding kinetics, nanofibers with moderate carbohydrate density inhibited T-cell apoptosis induced by WGA more effectively than nanofibers with high carbohydrate density at equivalent valency (Figure 5c). Collectively, these results demonstrated that interactions between self-assembled glycopeptide nanofibers and proteins are dependent on the avidity of carbohydrates rather than the absolute amount. Interestingly, this differs from results observed with glycopolymers and glyconanoparticles, where increased valency typically results in increasing affinity [97–99]. These findings highlight the benefit of supramolecular systems that allow for carbohydrate type, density, and valency to be easily and systematically varied to identify optimal protein binding characteristics.
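The density variable in these experiments is set simply by the GlcNAc-Q11:Q11 feed ratio. As an illustrative sketch of what that ratio means geometrically (the fiber model here is idealized: two β-strands per cross-section and the textbook ~0.47 nm inter-strand spacing, neither taken from the cited study), one can estimate the average axial spacing between displayed sugars:

```python
RISE_PER_STRAND_NM = 0.47  # typical inter-strand distance in a beta-sheet

def mean_glycan_spacing_nm(glyco_frac, strands_per_cross_section=2):
    """Average axial distance between GlcNAc groups on an idealized
    two-filament beta-sheet nanofiber, given the glycopeptide mole fraction."""
    glycans_per_nm = strands_per_cross_section * glyco_frac / RISE_PER_STRAND_NM
    return 1.0 / glycans_per_nm

# 1:3 GlcNAc-Q11:Q11 feed (25 mol.% glycopeptide) vs. pure GlcNAc-Q11
moderate = mean_glycan_spacing_nm(0.25)
high = mean_glycan_spacing_nm(1.0)
```

Under these assumptions, the "moderate" fibers space sugars roughly four times farther apart than fully glycosylated fibers, which is one intuitive way to see how dilution can relieve the steric crowding proposed in Figure 5a.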
Figure 5. (a) Schematic representation of self-assembled glycopeptide nanofibers with different carbohydrate densities and their potential interaction pattern with proteins. Bound proteins are expected to hide neighboring ligands on nanofibers with high carbohydrate density. (b) Turbidity (left) and co-precipitation assays (right) demonstrate that wheat germ agglutinin (WGA) binds faster to GlcNAc-Q11 and Q11 (1:3 ratio) mixed nanofibers (moderate density) compared to pure GlcNAc-Q11 nanofibers (high density). * represents p < 0.005, Student's t-test. (c) Nanofibers with optimal carbohydrate density inhibited WGA-induced Jurkat T-cell death more effectively than nanofibers with high carbohydrate density. ** represents p < 0.01 and *** represents p < 0.001, Student's t-test. Reproduced from reference [96] with permission from The Royal Society of Chemistry.
Finally, Hudalla and co-workers developed microgels for affinity-controlled release of lectins via desolvation of GlcNAc-Q11 glycopeptide nanofibers (Figure 6a) [100]. Microgels with different sizes can be prepared by adjusting peptide concentrations (Figure 6b). Microgels demonstrating tunable release of WGA can be prepared by varying the amount of GlcNAc-Q11 relative to Q11 (Figure 6c). WGA released from GlcNAc-Q11 microgels was biologically active, as demonstrated by its ability to induce Jurkat T-cell apoptosis in vitro (Figure 6d). These results demonstrate the potential to formulate glycopeptide nanofibers into controlled-release vehicles for the delivery of therapeutic lectin payloads.

Figure 6. (c) WGA burst release curves from microgels with different GlcNAc content: 0% (circles), 5% (squares), 10% (triangles), or 25% (diamonds). Burst release decreased with increasing amount of GlcNAc-Q11. (d) Jurkat apoptosis induced by WGA released from Q11 microgels (gray), or stock WGA that was not subjected to desolvation (black), demonstrating that the released proteins were active. "ns" denotes p > 0.05 between indicated groups, **** indicates p < 0.001, ANOVA with Tukey's post hoc. Adapted from reference [100] with permission from The Royal Society of Chemistry.
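The affinity-tuned burst release in Figure 6c can be captured qualitatively with a minimal two-phase model: weakly associated protein escapes immediately, while the affinity-bound fraction leaves by slower first-order kinetics. The parameters below are invented for illustration and are not fits to the cited data:

```python
import math

def released_fraction(t_h, burst_frac, k_slow_per_h):
    """Two-phase release toy model: an immediate burst of the weakly bound
    fraction, then first-order release of the affinity-bound remainder.
    Parameters are illustrative, not fitted to the cited study."""
    slow = (1.0 - burst_frac) * (1.0 - math.exp(-k_slow_per_h * t_h))
    return burst_frac + slow

# Hypothetical comparison at 24 h: a 0% GlcNAc microgel (large burst) vs.
# a 25% GlcNAc microgel, where affinity binding shrinks the burst fraction.
free_gel = released_fraction(24, burst_frac=0.8, k_slow_per_h=0.05)
affinity_gel = released_fraction(24, burst_frac=0.3, k_slow_per_h=0.05)
```

In this picture, increasing GlcNAc content lowers `burst_frac`, reproducing the trend in Figure 6c without claiming anything about the actual binding kinetics.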
Self-assembling glycopeptide amphiphiles are also finding use as building blocks for supramolecular nanomaterials that can amplify the activity of carbohydrate-binding proteins. For example, Stupp and co-workers designed nanofilaments that present sulfated carbohydrates as biomaterials that can bind to bone morphogenetic protein 2 (BMP-2) to amplify its activity during bone regeneration (Figure 7a) [101]. Specifically, they used the copper(I)-catalyzed alkyne-azide cycloaddition (CuAAC) click reaction to conjugate different monosaccharides onto peptide amphiphiles [102]. The multivalency afforded by these glycosylated nanofilaments was intended to mimic natural highly sulfated complex polysaccharides that bind to various growth factors. These nanofilaments amplified the activity of BMP-2 in a monosaccharide density-dependent manner, as measured through up-regulation of alkaline phosphatase activity (Figure 7b). In vivo studies demonstrated that trisulfated self-assembled glycopeptide nanofilaments could decrease the effective BMP-2 dose required for bone fusion by 100-fold compared to using BMP-2 alone (Figure 7c).
Incorporating Folded Proteins into Supramolecular Nanomaterials
The extracellular matrix (ECM) is a supramolecular assembly consisting of various proteins, glycoproteins, glycosaminoglycans, and proteoglycans. Furthermore, individual ECM components, such as fibrillar collagens and elastin, are assemblies of individual protein subunits [103,104]. Inspired by these natural processes, strategies to integrate proteins into synthetic supramolecular biomaterials are receiving increasing attention [105,106]. Silk proteins and related mimics are now used as biomaterials for diverse purposes in various biomedical fields [107,108]. Similarly, supramolecular biomaterials based on elastin-like polypeptides are commonly employed as drug delivery vehicles [109]. Self-assembled collagen mimics are also gaining interest as nanomaterials [65]. In this section, we highlight recent advances in fusing self-assembling peptides to proteins to create nanomaterials with advanced functional properties.
One example from Collier and co-workers reported self-assembled peptide-based nanomaterials with multiple different co-integrated proteins (Figure 8a) [110]. Each protein was fused to a "β-Tail" (MALKVELEKLKSELVVLHSELHKLKSEL) tag, which is a peptide that undergoes slow transition from an α-helix to β-strands that form nanofibers (Figure 8b). β-Tail fusion proteins assemble with the β-sheet fibrillizing peptide Q11 to form nanofibers modified with active protein domains. For example, fluorescent nanofibers can be prepared by assembling β-Tail-GFP with Q11, while nanofibers with hydrolase activity can be fabricated by assembling β-Tail-cutinase with Q11. Various β-Tail fusion proteins can be co-assembled to fabricate multifunctional nanofibers. The relative abundance of each protein can be independently varied to fabricate nanofibers with tunable properties, as demonstrated by materials with a range of fluorescent hues that correspond to the feed ratio of red, green, and blue fluorescent proteins co-assembled with Q11 ( Figure 8c). Lastly, in vivo studies showed that antibodies can be raised against β-Tail fusion proteins assembled into Q11 nanofibers (Figure 8d), which demonstrates the potential of this platform for developing a multi-antigen vaccine.
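The color-matching experiment in Figure 8c amounts to a linear mixing prediction: the expected merged color tracks the feed ratio of the red, green, and blue fusion proteins. A naive version of that prediction can be sketched as follows (purely illustrative; real fluorescence output depends on per-protein brightness and spectra, which are not given here, so equal brightness is assumed):

```python
def predicted_color(feed_rgb):
    """Naive predicted merged color: normalize the (red, green, blue) protein
    feed ratio to 0-255 channel values, assuming equal brightness per protein."""
    total = sum(feed_rgb)
    return tuple(round(255 * f / total) for f in feed_rgb)

# Hypothetical feeds: 1:1:0 red:green -> yellow-ish; 1:1:1 -> neutral gray
c1 = predicted_color((1, 1, 0))
c2 = predicted_color((1, 1, 1))
```

A mismatch between such a prediction and the observed color (as with the mutated β-Tail control) indicates that incorporation did not follow the feed ratio, which is the logic behind the assembly-versus-adsorption argument.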
Figure 8. A mutated β-Tail, used here as a negative control, adopted a random coil structure. (c) Fluorescence images of red, green, and blue β-Tail proteins co-assembled into Q11 microgels at a predetermined ratio. The top row shows the predicted merged color, while the bottom row shows the actual merged color. A mutated β-Tail led to a mismatch between predicted and experimental colors, demonstrating that integration of the β-Tail proteins occurs via the assembly process rather than physical adsorption. (d) Antibody responses in C57BL/6 mice treated with Q11 nanofibers bearing β-Tail-GFP (left) or β-Tail-cutinase (right). Significantly higher antibody titers were observed when animals received protein assembled into Q11 nanofibers.
* p < 0.05, ** p < 0.01, NS, no significant differences (p > 0.05). ANOVA with Tukey's post-hoc (left) and Student's t-test (right). Adapted with permission from [110]. Copyright 2014, Nature publishing.
Another recent example of installing proteins into supramolecular nanomaterials via fusion to a self-assembling peptide was reported by Woolfson and colleagues. In this approach, they fabricated nanoreactors by displaying proteins on cages assembled from peptides that form α-helical coiled-coils [111,112] (Figure 9a,b). The orientation of a protein on a self-assembled cage can be controlled by conjugating it to either the N- or C-terminus of the coiled-coil peptide, as demonstrated by differences in average cage diameter. The amount of protein loaded into self-assembled cages can also be varied by changing the ratio of protein-modified and unmodified peptide, with a maximum of 15% loading before significant cage aggregation was observed (Figure 9c). Finally, activity of the loaded proteins was characterized by measuring bioluminescence emission from Renilla luciferase displayed on self-assembled cages in the presence of coelenterazine. Their results showed that luminescence emission by enzyme localized to the core or surface of cages was comparable to free enzyme (Figure 9d).
Figure 9. Self-assembled cages displaying protein domains. (a) Schematic representation of a cage formed from self-assembling peptide-protein conjugates. Proteins can be displayed within the core or on the surface of the cage using this methodology. (b) Representative fluorescence still images and transmission electron microscopy images of assembled peptide-protein cages. Particles assembled in the presence of GFP fusion proteins were fluorescent. Particles with average diameters of ~100 nm and uniform morphologies were observed. (c) The average particle diameter depended on protein orientation. Slightly larger particles were obtained when proteins were conjugated on the outside of the particles. (d) Representative SEM images of assembled particles incorporating 0%, 5%, 15%, 25%, and 35% (volume ratio) peptide-protein conjugates.
Individual particles were observed below 15% protein loading, while aggregation occurred at higher concentrations. (e) Bioluminescence emission at 472 nm from 5% Renilla luciferase assembled into cages (light gray) or free in solution (dark gray). Differences between assembled and soluble enzyme were insignificant. Adapted with permission from [111]. Copyright 2017, American Chemical Society publishing.
A third recent example reported the fabrication of peptide nanofibrils displaying functional proteins using a split-tag method [113]. In particular, two different peptide tag sequences that recognize either RNase S-protein or a GFP fragment (S-peptide and GFP 11, respectively) were conjugated to the β-sheet fibrillizing peptide (FKFE)2 via a PEG linker (Figure 10a). Both (FKFE)2 variants self-assembled into β-sheet-rich fibrillar structures (Figure 10b). Nanofibrils bearing S-peptide catalyzed the hydrolysis of cytidine 2',3'-cyclic monophosphate in the presence of the split protein RNase S', and the rate of reaction increased with increasing tagging ratio (Figure 10c). Similarly, nanofibrils bearing GFP 11 were fluorescent in the presence of split GFP, and the amount of fluorescence increased with the increasing amount of GFP 11 on the fibrils (Figure 10d).
Figure 10. Immobilizing proteins onto peptide nanofibrils via split tag. (a) Graphical representation of non-covalent protein conjugation onto self-assembled peptide nanofibrils using a split protein strategy. Blue: a self-assembling peptide (Ac-(FKFE)2-NH2); red: peptide tag; green: split protein fragments. (b) Fourier-transform infrared (FT-IR) spectra indicating that 10% co-assembled FP1S15/Ac-(FKFE)2-NH2 fibrils and 20% G11P1F/Ac-(FKFE)2-NH2 fibrils (20% G11P1F) adopt a β-sheet secondary structure. (c) Optical density curves of cytidine 2′,3′-cyclic monophosphate hydrolysis catalyzed by self-assembled nanofibrils with different amounts of ribonuclease S tag in the presence of ribonuclease S'. (d) Spectra of green fluorescence produced by self-assembled nanofibrils with different amounts of GFP 11 tag in the presence of GFP 1-10. Control spectra are GFP 1-10 alone (red) and GFP 1-10 + GFP 11 (black). Reproduced from reference [113] with permission from The Royal Society of Chemistry.
(b) Fourier-transform infrared (FT-IR) spectra indicating that 10% co-assembled FP1S15/Ac-(FKFE) 2 -NH 2 fibrils and 20% G11P1F/Ac-(FKFE) 2 -NH 2 fibrils (20% G11P1F) adopt a β-sheet secondary structure. (c) Optical density curves of cytidine 2 ,3 -cyclic monophosphate hydrolysis catalyzed by self-assembled nanofibrils with different amounts of ribonuclease S tag in the presence of ribonuclease S . (d) Spectra of green fluorescence produced by self-assembled nanofibrils with different amounts of GFP 11 tag in the presence of GFP 1-10. Control spectra are GFP 1-10 alone (red) and GFP 1-10 + GFP 11 (black). Reproduced from reference [113] with permission from The Royal Society of Chemistry.
Hudalla and co-workers reported a co-assembly strategy to fabricate β-sheet nanofibers with pendant protein domains [114]. Co-assembly tags based on charge complementarity, or "CATCH peptides", are anionic ("CATCH(−)") and cationic ("CATCH(+)") variants of the β-sheet fibrillizing peptide Q11 (Figure 11a). CATCH peptides resist self-assembly due to strong electrostatic repulsion, yet co-assemble into two-component nanofibers when mixed due to charge complementarity (Figure 11b). This allows CATCH fusion proteins to be expressed from recombinant DNA by bacteria without premature assembly or aggregation (Figure 11a). In turn, CATCH fusion proteins added to mixtures of CATCH peptides incorporate into the resulting nanofibers (Figure 11a). For example, CATCH(+), CATCH(−), and a CATCH(−)GFP fusion protein co-assemble to form fluorescent nanofibers (Figure 11c), while binary mixtures of CATCH(−)GFP and CATCH(+) peptide co-assemble into micron-sized fluorescent particles (Figure 11d). The transition from particle to nanofiber morphology depends on the feed ratio of CATCH(−) and CATCH(+) peptides mixed with CATCH(−)GFP, and the size of the microparticles formed can be varied by stirring binary mixtures of CATCH peptides and fusion proteins (Figure 11e). At higher concentrations, CATCH(+) and CATCH(−) form hydrogels (Figure 11f). Ternary mixtures of CATCH(+), CATCH(−), and CATCH(−)GFP yield fluorescent hydrogels that retain GFP over many days, while hydrogels assembled from CATCH(+), CATCH(−), and a GFP with a mutated CATCH tag release GFP into surrounding aqueous media over time (Figure 11g). Together, these observations demonstrate the potential of CATCH peptides as fusion tags to immobilize functional proteins within nanofibrillar hydrogel scaffolds.
Adapted with permission from [114]. Copyright 2016, Springer US publishing.
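The charge-complementarity idea behind CATCH peptides can be illustrated in a few lines of Python; the sequences below are hypothetical Q11-like stand-ins chosen for illustration, not the published CATCH sequences.

```python
# Toy illustration of the CATCH principle: each variant carries a large net
# charge of one sign (so it repels itself and resists self-assembly), while
# a 1:1 mixture of the two variants is charge-neutral and can co-assemble.
# The sequences are HYPOTHETICAL stand-ins, not the peptides from [114].

SIDE_CHAIN_CHARGE = {"K": +1, "R": +1, "E": -1, "D": -1}  # approx. at pH 7

def net_charge(seq: str) -> int:
    """Approximate side-chain net charge of a peptide at neutral pH."""
    return sum(SIDE_CHAIN_CHARGE.get(res, 0) for res in seq)

catch_plus = "QQKFKFKFKQQ"   # hypothetical cationic Q11-like variant
catch_minus = "QQEFEFEFEQQ"  # hypothetical anionic Q11-like variant

qp, qm = net_charge(catch_plus), net_charge(catch_minus)
print(qp, qm, qp + qm)  # → 4 -4 0
```

Histidine and the peptide termini would contribute fractional charges at pH 7; the integer table above is the usual first-order approximation.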
Self-assembling peptides can also be used to organize proteins into other nano-scale architectures in addition to nanofibers and nanovesicles. For example, Hudalla and colleagues created nanoassemblies by fusing an enzyme to Galectin-3 via a peptide that forms an α-helical coiled-coil [115] (Figure 12a). Galectin-3 is a protein that binds carbohydrates found on the cell surface and within the extracellular matrix of mammalian tissues, including N-acetyllactosamine and related variants, as well as chondroitin and heparan sulfate glycosaminoglycans [116,117]. Trimeric nanoassemblies having three enzymes and three Galectin-3 domains bound carbohydrates with significantly higher affinity than a monomeric fusion of enzyme and Galectin-3 connected by a flexible linker (Figure 12b). When injected at different tissue sites, trimeric nanoassemblies of NanoLuc TM luciferase (Promega Corporation, Madison, WI, USA) and Galectin-3 persisted for two weeks, whereas the monomeric fusion was retained for approximately one week. In contrast, native NanoLuc TM cleared within one day (Figure 12c). Importantly, unlike wild-type Galectin-3, which forms higher-ordered oligomers that can induce T cell death, trimeric nanoassemblies did not induce T-cell apoptosis (Figure 12d). Collectively, this report demonstrates that the carbohydrate-binding properties of Galectin-3 can be harnessed to anchor enzymes at tissue injection sites independently of Galectin-3 signaling activity.
nanoassembly has higher glycan-binding affinity than the monomeric fusion protein due to multivalent avidity effects. (b) Carbohydrate-binding properties of monomeric G3 fusion proteins and trimeric nanoassemblies.
(c) Bioluminescence images at various time points of mice that received trimeric nanoassemblies of NanoLuc and Galectin-3 (NL-TT-G3), a monomeric fusion protein of NanoLuc and Galectin-3 (NL-G3), or wild-type NanoLuc (WT-NL) (equivalent moles of NL) injected subcutaneously into the hock. Representative images show prolonged residence of trimeric nanoassemblies at the injection site compared with other groups. (d) Bright-field micrographs of Jurkat T cells incubated with PBS (untreated, negative control), wild-type Galectin-3 (WT-G3) (positive control), NL-G3, or NL-TT-G3 for 4 h, which demonstrated that monomeric fusion proteins and trimeric nanoassemblies lacked the activity for inducing T-cell apoptosis that is characteristic of WT-G3. Adapted with permission from reference [115] under Creative Commons Attribution 4.0 International license. Copyright 2018. Nature publishing.
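The avidity advantage of the trimeric nanoassembly over the monomeric fusion can be sketched with a minimal independent-sites binding model; the Kd and concentration values below are illustrative assumptions, and real avidity also involves rebinding kinetics that this toy ignores.

```python
# Toy independent-sites model of multivalent avidity: a trimeric assembly
# stays anchored if ANY of its three Galectin-3 domains is bound, so its
# bound fraction exceeds that of the monomeric fusion at the same glycan
# concentration. Numeric values are illustrative assumptions only.

def fraction_bound(ligand: float, kd: float, valency: int = 1) -> float:
    """P(at least one of `valency` independent sites occupied)."""
    p_site = ligand / (ligand + kd)        # single-site Langmuir occupancy
    return 1.0 - (1.0 - p_site) ** valency

kd = 10.0      # assumed single-domain dissociation constant (µM)
glycan = 2.0   # assumed free glycan concentration (µM)

mono = fraction_bound(glycan, kd, valency=1)
tri = fraction_bound(glycan, kd, valency=3)
print(f"monomer bound: {mono:.2f}, trimer bound: {tri:.2f}")
# → monomer bound: 0.17, trimer bound: 0.42
```

Even this purely statistical factor predicts longer residence for the trimer, consistent with the prolonged retention seen in Figure 12c.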
Future Directions
Self-assembling peptides have significantly advanced the state of the art of nano-scale biomaterials over the last few decades, as exemplified by their increasing use in diverse applications such as drug delivery, tissue engineering, regenerative medicine, vaccines, and stimuli-responsive biomaterials [16,20,73]. Essential to this breadth of use is the ability to easily install different functional ligands into supramolecular biomaterials by simply conjugating them to a self-assembling peptide. Building upon early iterations, in which the functional ligands were often peptides or small molecules, the examples surveyed herein highlight the growing use of self-assembling peptides as handles to incorporate folded proteins or carbohydrates into supramolecular architectures. The sophisticated biochemical properties of proteins and carbohydrates open up exciting opportunities to develop novel biomaterials with unprecedented functional capabilities. We envision that continued progress using self-assembling peptides to integrate diverse types and combinations of biologically active ligands into supramolecular biomaterials will greatly advance their use in existing and emerging areas of biomedicine and biotechnology.
|
2019-04-25T13:03:22.330Z
|
2019-04-01T00:00:00.000
|
{
"year": 2019,
"sha1": "0e12196f24ca963b4f2b34e092432c9cc6c749e4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/24/8/1450/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e12196f24ca963b4f2b34e092432c9cc6c749e4",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
214680850
|
pes2o/s2orc
|
v3-fos-license
|
Controversies about COVID-19 and anticancer treatment with immune checkpoint inhibitors
Corona virus disease-19 pandemic & cancer patients
On 11 March, the WHO formally declared the corona virus disease-19 (COVID-19) outbreak a pandemic [1]. After the first cluster of cases emerged from Wuhan, in China, at the end of 2019, to date almost 287,000 cases of infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) have been diagnosed across all five continents in the last few months [2,3].
COVID-19 morbidity and mortality have been linked to older age and comorbidities, leading to poorer outcomes from the viral infection in frail patients and more often resulting in hospitalization, intensive care unit admission and the need for invasive tracheal intubation [4]. Among such individuals, cancer patients represent a large subgroup at high risk of developing coronavirus infection and its severe complications. A recent nationwide analysis in China demonstrated that, of 1590 COVID-19 cases from 575 hospitals, 18 had a history of cancer (an incidence of 1% vs 0.29% in the overall Chinese population), with lung cancer as the most frequent diagnosis [5]. Patients with cancer were observed to have a higher risk of severe events than patients without cancer (39 vs 8%; p = 0.0003). Moreover, cancer patients who had undergone recent chemotherapy or surgery had a higher risk of clinically severe events than those not receiving treatment. With the limitation of a small sample size, the authors concluded that patients with cancer might have a higher risk of COVID-19, and poorer outcomes, than individuals without cancer. As a consequence, they recommended considering intentional postponement of adjuvant chemotherapy or elective surgery for stable cancers in endemic areas [5].
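As a rough sanity check, the reported severe-event comparison can be reproduced with a one-sided Fisher exact test. The non-cancer severe count below is reconstructed from the quoted percentages (about 8% of the 1572 patients without cancer), so these are approximations rather than the original patient-level data.

```python
# One-sided Fisher exact test (hypergeometric tail) built on math.comb,
# applied to the quoted comparison: ~39% severe events among 18 cancer
# patients vs ~8% among the remaining 1572 COVID-19 cases.
# Counts are RECONSTRUCTED from percentages, not the original data.
from math import comb

def fisher_one_sided(a: int, b: int, c: int, d: int) -> float:
    """P(>= a successes in row 1 | fixed margins) under the null (2x2 table)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)
    return sum(
        comb(col1, k) * comb(n - col1, row1 - k)
        for k in range(a, min(row1, col1) + 1)
    ) / denom

severe_cancer, nonsevere_cancer = 7, 11      # ~39% of 18, rounded
severe_other, nonsevere_other = 126, 1446    # ~8% of 1572, rounded
p = fisher_one_sided(severe_cancer, nonsevere_cancer,
                     severe_other, nonsevere_other)
print(f"one-sided p = {p:.2e}")
```

The result comes out far below 0.05, consistent with the quoted p = 0.0003, though with reconstructed counts it is a consistency check rather than a reanalysis.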
Nevertheless, as subsequently highlighted by other authors, the true incidence of COVID-19 in patients with cancer would be more informative in assessing whether such patients have an increased risk (and morbidity) from this viral illness [6]. Furthermore, the limited cancer patient population described in this first report from the literature was curiously characterized by the absence of individuals receiving anticancer immunotherapy. Indeed, only chemotherapy and surgery were cited among the treatments received by patients in the month prior to developing COVID-19. This could simply be due to chance in a small sample, or it could suggest that cancer patients receiving immunotherapy are less prone to develop COVID-19 or to be admitted to hospital with severe coronavirus symptoms. We are aware that the true incidence of coronavirus infection is probably higher than the figures reported and updated every day; it is likely that a large portion of the healthy, young population develops COVID-19 with mild symptoms, not requiring hospital admission and thus escaping laboratory confirmation of the disease [7]. Cancer patients undergoing treatment with anti-PD-1/PD-L1 or anti-CTLA-4 immune checkpoint inhibitors (ICI), currently used in everyday practice to treat solid tumors such as melanoma, lung cancer, renal carcinoma, urothelial cancers and head and neck carcinoma, constitute a growing oncological population [8]. Their specific susceptibility to bacterial or viral infections has not been investigated. Considering that immunotherapy with ICI is able to restore cellular immunocompetence, as we previously suggested in the context of influenza infection, patients undergoing immune checkpoint blockade could be more immunocompetent than cancer patients undergoing chemotherapy [9,10].
Potential interference between COVID-19 pathogenesis & immune checkpoint blockade
In recent weeks, in the countries heavily affected by the COVID-19 outbreak, such as Italy, scientific associations have recommended the prudential postponement of active cancer treatments, especially for stable patients not needing urgent interventions [11]. On one hand, this recommendation could be reasonable for advanced cancer patients receiving chemotherapy, given the risk of hematological toxicity and of worsening an immunosuppressed status, thus favoring COVID-19 morbidity [5]. On the other hand, some oncologists are currently wondering about the risk of administering ICI in the middle of the COVID-19 outbreak, essentially due to two major concerns.
The first concern is the potential overlap between coronavirus-related interstitial pneumonia and possible pulmonary toxicity from anti-PD-1/PD-L1 agents. Even if lung toxicity is not the most frequent adverse event of ICI, it can be life threatening. The overall incidence of ICI-related pneumonitis ranges from 2.5-5% with anti-PD-1/PD-L1 monotherapy to 7-10% with anti-CTLA-4/anti-PD-1 combination therapy [12]. The dominant radiological pattern of lung immune-related adverse events (irAEs) is organizing pneumonia, but ICI-related pneumonitis can exhibit a variety of patterns, including nonspecific interstitial pneumonitis [13]. Despite being rarer than other irAEs, pneumonitis is the most fatal adverse event associated with PD-1/PD-L1 inhibitor therapy, accounting for 35% of treatment-related toxic deaths [14]. Considering that underlying lung disease, particularly interstitial pneumopathy, is considered a risk factor for ICI-related pneumonitis, it is reasonable to take into account the risk of treating patients while they are developing an initial form of COVID-19. A synergy between the two lung injuries, though only hypothetical, cannot be ruled out with certainty. Nevertheless, such an epidemiological coincidence should not prevent the oncologist from offering a potentially effective and often well-tolerated treatment even in the middle of the COVID-19 outbreak, since the duration of the pandemic is still unpredictable. This is true in particular considering the potentially curative aim of ICI treatment in highly responsive diseases, such as melanoma and renal cell carcinoma, and in the adjuvant setting even more than in advanced disease.
The second concern is a possible negative interference of ICI with the pathogenesis of COVID-19. Cytokine-release syndrome (CRS) is a phenomenon of immune hyperactivation typically described in the setting of T cell-engaging immunotherapy, including CAR-T cell therapy but also anti-PD-1 agents [15]. CRS is characterized by elevated levels of IL-6, IFN-γ and other cytokines, provoking consequences and symptoms related to immune activation, ranging from fever, malaise and myalgias to severe organ toxicity, lung failure and death. In parallel, one of the most important mechanisms underlying the deterioration of disease in COVID-19 is the cytokine storm, leading to acute respiratory distress syndrome or even multiple organ failure [16]. Cytometric analyses of COVID-19 patients have shown reduced counts of peripheral CD4 and CD8 T cells in a hyperactivated state. In addition, an increased concentration of highly proinflammatory CCR6+ Th17 cells among CD4 T cells has been reported, and CD8 T cells were found to harbor high concentrations of cytotoxic granules, suggesting that overactivation of T cells contributes to the severe immune injury of the disease [17]. Moreover, the pathological findings associated with acute respiratory distress syndrome in COVID-19 showed abundant interstitial mononuclear inflammatory infiltrate in the lungs, dominated by lymphocytes, once again implying that immune hyperactivation mechanisms are at least partially accountable for COVID-19 severity [17]. Considering these aspects, the hypothesis of a synergy between ICI mechanisms and COVID-19 pathogenesis, both contributing to a counterproductive immune hyperactivation, cannot be excluded.
In spite of this fascinating rationale, we should remember that ICI-induced CRS is a rare phenomenon and that the cytokine storm is not an early event in COVID-19 pathogenesis; rather, it characterizes the late phase of its most severe manifestation, which occurs in a minority of patients. It is unlikely that cancer patients would still be receiving ICI during this phase of the viral illness. Obviously, in the current pandemic scenario, careful attention should be paid to delaying treatment for patients presenting flu-like symptoms at the time of the intended ICI administration.
Therapeutic implications: tocilizumab & the risk of hasty conclusions
Since its first outbreak in China, COVID-19 has been treated empirically with antiviral therapy, first employing agents already used in prior severe acute respiratory syndrome epidemics [18]. Then, several randomized clinical trials were initiated in China and more recently in Italy, investigating different treatment options, ranging from classical antiviral drugs such as lopinavir/ritonavir, to newer antivirals such as remdesivir, to unconventional agents such as chloroquine and hydroxychloroquine [19]. The latest treatment frontier against COVID-19 seems to be a recombinant humanized monoclonal antibody, tocilizumab, which binds the human IL-6 receptor and inhibits its signal transduction [20]. Tocilizumab is currently used for rheumatoid arthritis, but its efficacy has also been demonstrated against ICI-induced irAEs, starting from the rationale of an ICI-induced systemic inflammatory response syndrome similar to CRS [21]. Moreover, along with the improvement in symptoms related to systemic inflammatory response syndrome, some authors have reported clinical improvement in other irAEs with tocilizumab used in cancer patients with immune-related toxicity from anti-PD-1 agents [21,22].
With these premises, the risk of hasty conclusions is around the corner. In fact, one can argue that the alleged tocilizumab efficacy both for treating COVID-19 and irAEs might suggest a potentially increased danger from SARS-CoV-2 infection for ICI-treated patients, maybe hypothesizing a synergy in the promotion of the viral morbidity. Nevertheless, this is probably a thoughtless deduction.
First, it can be a matter of time. The pathologic hyperactivation of the immune response that eventually contributes to the final injury probably develops in the late phase of COVID-19, occurring together with the respiratory distress [17]. Furthermore, time matters also in the case of ICI therapy, since the majority of patients develop irAEs within the first 6 months from the first administration [12]. Thus, a certain caution regarding ICI administration during the pandemic could apply mostly to patients needing therapy initiation or in their first months of treatment.
Second, it is probably a matter of patient. Patients more prone to developing immune hyperactivation are probably those more likely to respond to ICI [23]. There is a possibility that such patients would also be more prone to fall into the cytokine storm in the case of SARS-CoV-2 infection. Nevertheless, these patients do not correspond to the average advanced cancer patient, who is supposed to be immunosuppressed, with a blunted immune status [6]. The epidemiology of COVID-19 observed to date suggests that SARS-CoV-2 tends to infect more frequently the frail patient populations, such as the elderly and cancer patients [4,5]. Cancer is usually associated with overexpression of immunosuppressive cytokines, suppression of proinflammatory danger signals, impaired dendritic cell maturation, and enhanced immunosuppressive leukocyte populations [6]. Since ICI can restore immune competence, if on one hand this may paradoxically be needed to develop the cytokine storm characterizing the acute respiratory distress syndrome (ARDS) phase, on the other hand the epidemiological features of SARS-CoV-2 infection argue for a lower probability of affecting these patients compared with their chemo-treated, immunosuppressed counterparts.
Third, the efficacy of tocilizumab for COVID-19 is still under investigation, with much still unexplored and with uncomfortable upstream evidence coming from the setting of influenza infection. Despite clinical studies associating IL-6 with high disease severity in influenza-infected patients, and its levels correlating directly with symptom occurrence in human influenza virus infection, the role of this cytokine is still ambiguous [24]. It was demonstrated in mouse models that IL-6 is essential for preventing virus-induced neutrophil cell death and H1N1-associated mortality, limiting the influenza-induced cytokine storm and protecting against fatal lung pathology [25]. Furthermore, IL-6 is crucial in secondary infections to recall virus-specific memory CD4 T cells, favoring virus clearance and host survival, as supported by the inability of IL-6-deficient mice to control influenza viral titers in the lung [25]. Such preclinical evidence suggests that, despite probably being harmful in the ARDS phase, the role of IL-6 could be crucial in the early phase of the viral infection to defuse the pathogenesis of severe and lethal forms of influenza. Thus, while hoping for positive results from the tocilizumab randomized clinical trials in COVID-19 patients, we can only note the evident differences between this viral infection and previous SARS outbreaks, and even more so influenza epidemics, probably both in clinical features and in pathogenetic implications.
Conclusion
Clinical decisions about cancer patients deserving immunotherapy in the current context of the COVID-19 pandemic should be characterized by separated reflections, avoiding generalizations and remembering their deeply different immunological status compared with that of cancer patients undergoing chemotherapy or targeted agents. In the end, beyond any charming scientific speculations, it is unfortunately likely that in this COVID-19 pandemic, the greatest risk for cancer patients is the unavailability of the usually high-level medical services, since all our hospital resources, in terms of structures, tools and healthcare professionals, are currently strongly dedicated to the outbreak management.
Financial & competing interests disclosure
The author received research funding by Roche, Pfizer, Seqirus, AstraZeneca, Bristol-Myers Squibb, Novartis and Sanofi; she also received honoraria for advisory role and as speaker at scientific events by Bristol-Myers Squibb, Novartis and Pfizer.
No writing assistance was utilized in the production of this manuscript.
|
2020-03-29T07:15:36.285Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "787957acd970a7f2b83017fd4ca13ccbd77a697f",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7117596?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3728113e54ad7c874d5e84451e49d024c6cbacc7",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
}
|
255466415
|
pes2o/s2orc
|
v3-fos-license
|
Promising Clinical Applications of Hydrogels Associated With Precise Cancer Treatment: A Review
Gastrointestinal cancer is one of the most malignant tumors with high morbidity and mortality, especially colorectal cancer, which has become the second leading cause of cancer-related deaths worldwide. Targeted drug treatment and precise endoscopic resection can significantly improve the overall survival rate and greatly extend the life span. Promising biomedical applications of hydrogels would represent hopeful therapeutic alternatives for patients with different kinds of diseases, particularly providing precise therapy for cancer patients. Although the intersection field of material science and biomedical science has made tremendous advances, major challenges remain. In this review, the application of hydrogel-based technology in cancer precision medicine is the focus of attention, which is the development trend of multidisciplinary cooperation in the future. First, we provide the current clinical landscape of hydrogel applications, and then we highlight precision oncology, including personalized drug treatment and accurate endoscopic intervention. Finally, we discuss major challenges for their clinical translation that have not yet been overcome and future perspectives on cancer precision medicine.
Introduction
Hydrogels are a type of soft and crosslinked hydrophilic polymer network and can be adapted to meet the requirements of different settings by altering material components and chemical modifying approaches. 1 Due to favorable physicochemical characteristics and high biocompatibility, various hydrogels have been designed and developed for biomedical applications, such as regenerative medicine, 2,3 tissue engineering scaffolds, 4,5 drug delivery systems, 6,7 and cancer precision medicine. [8][9][10] In particular, there is an increasing utilization of hydrogels in the diagnosis and treatment of cancers. For example, gastrointestinal cancers are life-threatening malignant diseases originating from the gastrointestinal tract, consisting of esophageal cancer, gastric cancer, colorectal cancer, and others. Gastrointestinal cancer accounts for approximately 20% of all cancer diagnoses and 22.5% of cancer deaths worldwide. The 5-year survival rate of early-stage cancer is more than 90%, while that of advanced-stage cancer is less than 20%. 11 Early endoscopic curative resection by using hydrogels can significantly improve the survival rate and quality of life of gastrointestinal cancer patients. Although a variety of hydrogel products have been investigated in preclinical research, the translation from material sciences to clinical application remains a challenge. 1,12 Herein, recent advances in hydrogel-based biomedical applications are generally reviewed. This review also summarizes the current state of hydrogels associated with precision treatment in gastrointestinal cancer, including patient-specific drug screening, therapeutic delivery, and endoscopic precise removal of early-stage malignant cancer.
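The gap between early-stage (>90%) and advanced-stage (<20%) 5-year survival can be turned into a back-of-envelope estimate of how the stage mix at diagnosis shifts expected survival; the stage proportions below are assumed purely for illustration.

```python
# Back-of-envelope illustration of why early endoscopic detection matters:
# expected 5-year survival as a weighted average of the stage mix, using the
# quoted stage-specific rates (~90% early, ~20% advanced) as fixed inputs.
# The early-detection proportions are ASSUMED, not data from the review.
def expected_survival(p_early: float,
                      s_early: float = 0.90, s_advanced: float = 0.20) -> float:
    """Weighted-average 5-year survival for a given early-stage fraction."""
    return p_early * s_early + (1.0 - p_early) * s_advanced

for p_early in (0.2, 0.5, 0.8):
    print(f"{p_early:.0%} detected early -> "
          f"~{expected_survival(p_early):.0%} expected 5-yr survival")
```

Under these assumptions, moving the early-detection fraction from 20% to 80% roughly doubles the expected 5-year survival, which is the arithmetic behind the emphasis on early endoscopic curative resection.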
Hydrogels are fundamentally classified by material origin into natural, synthetic, and semisynthetic types. 13 Among these 3 types, the most common hydrogels are natural hydrogels. Owing to their good biocompatibility, controllable biodegradability, and flexible adaptability, natural hydrogels derived from polypeptides or polysaccharides are highly efficient in many biomedical applications. Through chemical modification and crosslinking during synthesis, synthetic hydrogels are obtained and can afford high tunability and versatile physical properties. To overcome the limitations of natural and synthetic hydrogels, such as stability and batch-to-batch variation, investigators have developed chemically modified natural hydrogels, which are defined as semisynthetic hydrogels. 14 Physicochemical properties of hydrogels, such as mechanical strength, and biological characteristics, such as degradation behavior, can be regulated by varied compositions, different chemical crosslinking methods and density. 14 Naturally derived hydrogels include hyaluronic acid (HA), alginate, fibrin, collagen, gelatin, and chitosan. Of all natural hydrogels, HA and alginate are 2 notable types, and they are never reactive with the proteins in our body. HA is composed of repeating disaccharide units of D-glucuronic acid and N-acetyl-D-glucosamine. Due to outstanding properties such as biocompatibility, biodegradability, and nonimmunogenicity, there is exponential growth in its biomedical applications. 15 Owing to its high biological relevance and precise chemical tunability, alginate is often utilized to create tailored mechanical scaffolds and biomedical implants. Fibrin has also been explored in extensive biomedical investigations due to its biocompatibility and easy fabrication process; however, uncontrolled degradation is a major limitation in its clinical translation because it is easily affected by tissue factors and enzymes in-vivo.
14 Additionally, gelatin is denatured from collagen and can be derived from diverse sources. Due to their relatively low antigenicity, short degradation period and similar structure to the extracellular matrix, gelatin-combined hydrogels have been explored widely. 13 Moreover, chitosan is a natural, nontoxic, biodegradable polysaccharide, and ionic cross-linking is the most common method for preparing chitosan nanoparticles, which are often used as an antitumor therapy carrier for research due to their inherent antitumor activity. Since their highly hydrophilic and mechanical properties are similar to those of many soft tissues in-vivo, natural hydrogels are popularly utilized in a wide range of clinical applications. Here, we highlight promising biomedical applications of natural hydrogels.
Currently, natural hydrogels are commonly studied and applied in clinical settings such as regenerative medicine and precision oncology. Tissue regeneration and augmentation are common biomedical applications of hydrogels. For example, specific hydrogel patches are currently applied to facilitate the healing process in diabetic ulcers, burn wounds, and skin conditions such as eczema because a combination of preventing bacterial overgrowth and delivering therapeutic agents can be achieved by hydrogel patches. 1 Tissue augmentation can provide mechanical support for compromised tissue, such as in myocardial infarction and refractory heart failure. Yadid et al proposed that an engineered myocardial pump would represent a therapeutic alternative for millions of patients with end-stage heart disease, addressing their urgent need for heart donation. 16 Additionally, with the increasing understanding of tissue engineering, hydrogel scaffolds have attracted increasing attention, such as intra-articular injectable hydrogels designed for the treatment of knee osteoarthritis. Cancer precision medicine is another important application of hydrogels. For example, carbopol-based hydrogels loaded with lipophilic bismuth nanoparticles can effectively inhibit the proliferation of cervical cancer, prostate cancer, and colorectal cancer cell lines without adversely affecting nontumor control cells. 17 As another example, Vikas et al developed dual receptor-targeted chitosan nanoparticles and confirmed that they had good cytotoxicity and enhanced anticancer activity against lung cancer cell lines. 18 Furthermore, nanoparticles and nanotechnology are also used in many other gastrointestinal treatments, such as phototriggered therapy (including photothermal therapy, photoimmunotherapy, and photodynamic therapy), nanopowders, nanoscaffolds, and nanogels. 19,20 Nanomedicine has the potential to improve diagnostic tools for gastrointestinal cancers and increase treatment options. 21 However, there are very little available clinical data on nanomedicine applications compared with preclinical data. 22 In addition to the development of more novel hydrogels for antitumor therapy, a series of cancer products have been developed and approved. Endo's Vantas® has received regulatory approval from the FDA as a subcutaneous hormonal therapy for the prevention of testosterone-dependent prostate cancer. 1 The TraceIT® hydrogel system is approved by the FDA for imaging in cancer diagnosis and treatment in clinical trials. 1 Gelfoam matrix histoculture, first developed by Leighton Joseph, permits the determination of the cell cycle position of invading and noninvading cancer cells. 23 SpaceOAR® hydrogel is primarily designed to protect normal tissues from radiation injury during radiation treatment of cancerous tissues. 1 Furthermore, HA-based hydrogels are related to cancer behavior and could serve as prognostic agents for tumors. In cancer cases, the degradation of HA is highly associated with tumor malignancy, angiogenesis, and distal metastasis. 14 Patient-specific drug screening, targeted drug delivery, and accurate endoscopic removal of tumors are also included in precise cancer therapy.
Drug Screen
To our knowledge, major disadvantages of regular cancer therapy are that a large number of patients have to go through multiple rounds of drug treatment to eradicate tumors and incur tremendous spending on cancer therapy. 14 Therefore, there is a pressing need to develop precise and efficient oncology models that highly recapitulate the genetic and morphological composition and mimic the arrangement pattern of cancer cells in the original tumor. Over the past few years, a variety of drug screening tools have been investigated, such as tumor cell lines, 24,25 tumor organoids, 26 organ-on-a-chip models, reprogramming technology, and hydrogel-based tumor models. In terms of tumor cell lines, the low culture success rate and limited proliferative capacity of ex-vivo culture of tumor cells from patients are major roadblocks in their utilization to evaluate therapeutic effectiveness. 23 For organ-on-a-chip technology, making a tumor or physiologically relevant disease on a chip has been a logical step in the field of cancer research. For instance, Huh et al developed a lung-on-a-chip model that can mimic the physiological environment of the lung. 14,27 Hydrogels are one of the most versatile technologies for personalized drug screening. 1,28 Recently, Suzuka et al reported an innovative hydrogel, defined as a double-network hydrogel, that can rapidly reprogram tumor cells into cancer stem cells with advanced reprogramming technology. It is important to develop novel cancer therapies and screen personalized therapeutic agents targeting cancer stem cells with available double-network hydrogels. 29 Additionally, using hydrogels to create engineered tumor models is an emerging trend in cancer precision medicine, and researchers have demonstrated that the tumor microenvironment plays an important role in tumor development and metastasis. 14 Hydrogel-based tumor models, accurately recapitulating the tumor microenvironment, serve as in-vitro platforms for better screening of novel precise cancer therapeutics, as well as further study of mechanisms underlying cancer development and metastasis. 14,30,31 There is promising translatability and potential for wide-scale use of hydrogel-based tumor models for less expensive and more controllable therapeutic evaluation than in-vivo models.
Drug Delivery
Injectable hydrogels are regarded as favorable carriers, loading and delivering therapeutic agents to the surrounding environment and targeted sites. 10,32 Because hydrogels encapsulating therapeutics circulate in the bloodstream, initial efforts should focus on biological properties that reduce phagocytic uptake and clearance. 15 It is expected that delivering therapeutic agents to the targeted sites can significantly enhance therapeutic efficacy while decreasing adverse effects. The effectiveness of drug delivery is also determined by the size of the agents, the mesh size of the gels, and the agent-hydrogel interaction affinity. 1 Initially, Szoka et al used HA liposomes loaded with anticancer drugs to facilitate targeted drug delivery via CD44 receptors upregulated on murine tumor cells. Since then, a variety of CD44-targeted HA-based hydrogels for targeted therapy have been developed. 15 Recently, pH-sensitive hydrogels have been proposed as an important method of targeted drug delivery. Due to their ability to detect minute pH changes as small as 10^-5 pH units, they are capable of targeting cancer and prolonging drug release within the blood circulation. 33 The pH of the tumor microenvironment is directly or indirectly influenced by O2, angiogenesis, cytokines, and their interactions with each other. By monitoring pH changes in tumor sites and blood during individual precision cancer treatment, pH-sensitive hydrogels can provide tailored release doses of chemotherapy drugs, the best timing for drug release, and records of the response of cancer cells to anticancer drugs. 33 For instance, Dai et al used a pH-sensitive hydrogel to deliver anticancer chemotherapy drugs to tumor sites where the pH differed from the physiological range. 34 Fully taking advantage of hydrogel biomaterials in the field of cancer research has made it possible to successfully transition from drug discovery to personalized medicine.
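Release kinetics from drug-loaded hydrogels of this kind are often summarized with the semi-empirical Korsmeyer-Peppas model, Mt/M∞ = k·t^n (valid up to roughly 60% release), where the exponent n distinguishes diffusion-controlled from anomalous transport. A minimal sketch; k and n below are illustrative placeholders, not parameters from any of the cited hydrogels:

```python
def korsmeyer_peppas(t_hours, k, n):
    """Fractional drug release Mt/Minf = k * t**n (valid for Mt/Minf <= ~0.6)."""
    return k * t_hours ** n

# Hypothetical parameters: k = 0.1 h^-n, n = 0.5, which for a thin gel film
# would suggest roughly Fickian (diffusion-controlled) release.
release_at_4h = korsmeyer_peppas(4.0, k=0.1, n=0.5)  # fraction released at 4 h
```

A fitted n closer to 1 would instead indicate anomalous (swelling-coupled) transport, which is one way release profiles of pH-sensitive gels are compared.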
Endoscopic Removal
Gastrointestinal cancer is one of the most malignant tumors, with high morbidity and mortality worldwide. 35,36 Endoscopic removal of early cancer and premalignant neoplasia, including endoscopic mucosal resection (EMR) 37 and endoscopic submucosal dissection (ESD), 38,39 is the most effective approach to prevent tumor development and progression. A key consideration for successful endoscopic therapy is an ideal injectable submucosal solution. Normal saline is commonly used as a submucosal liquid cushion; however, the accuracy of endoscopic treatment is greatly affected because normal saline can be maintained for only a very short time due to its high permeability and fast diffusion. To address these limitations, submucosal injectable hydrogels (3-5 mL) offer a competitive strategy for endoscopic precise treatment (Figure 1), and there has been exponential growth in preclinical investigations of injectable submucosal hydrogels. 40 However, very few studies have actually entered clinical trials (Table 1). Ideal submucosal hydrogels must have low viscosity during injection and sufficient elasticity at local sites to maintain their volume and sustain submucosal elevation height. In addition, unlike normal saline, repeated submucosal injections can be avoided. There are 2 approaches for submucosal injection. One approach is directly injecting synthetic gels into the submucosal layer at sites of interest; shear-thinning polymer hydrogels are extensively explored to overcome the paradox between viscosity and elasticity. 40 Another approach is to inject material precursors that form gels via in-situ chemical crosslinking within the physiological environment.
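The viscosity-elasticity paradox is commonly analyzed with the Ostwald-de Waele power-law model, η = K·γ̇^(n−1), where n < 1 describes a shear-thinning fluid: apparent viscosity drops under the high shear of endoscopic injection and recovers once the gel is at rest in the submucosa. A sketch with illustrative parameters (K and n are hypothetical, not measurements of any cited gel):

```python
def power_law_viscosity(shear_rate, K, n):
    """Apparent viscosity of a power-law fluid: eta = K * gamma_dot**(n - 1)."""
    return K * shear_rate ** (n - 1.0)

# Hypothetical consistency index K = 10 Pa*s^n and flow index n = 0.5
# (n < 1 => shear-thinning).
eta_rest = power_law_viscosity(0.01, K=10.0, n=0.5)    # near-rest: high viscosity
eta_inject = power_law_viscosity(100.0, K=10.0, n=0.5) # during injection: thins out
```

The two orders of magnitude between `eta_rest` and `eta_inject` illustrate why a shear-thinning gel can be pushed through a narrow endoscopic needle yet still hold a durable cushion once injected.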
Diverse natural hydrogels have been explored for use as therapeutic submucosal injections for precision treatment of gastrointestinal early cancer and premalignant lesions. Significant efforts have been devoted to optimizing the physicochemical properties of natural hydrogels by altering various components and crosslinking technologies. Of all natural hydrogels studied in preclinical investigations, a nature-derived hydrogel of gelatin-oxidized alginate (G-OALG) was first reported by Fan et al, showing higher performance in controllable gelation, higher viscosity, and more stable properties. Due to good biocompatibility, excellent endoscopic injectability, and prolonged submucosal elevation, G-OALG could be a promising submucosal injection agent for ESD. 41 Similarly, a study conducted at the Massachusetts Institute of Technology developed endoscopically injectable shear-thinning hydrogels (EISHs), which can serve as safe and easily injectable agents to provide durable and ideally elevated submucosal cushions. This approach has been validated in large live animal models for accurate removal of early-stage colorectal tumors. 40 Fibrin glue (FG) has been explored in extensive biomedical applications due to its biocompatibility and easy fabrication process. Comparing the capability of maintaining submucosal elevation among FG, HA, and normal saline, Takao et al demonstrated that FG had the best submucosal lifting. 42 However, uncontrolled degradation is a major limitation in its clinical translation because it is easily affected by tissue factors and enzymes in-vivo. There are still many other natural hydrogels available, but they are often not easily transferable to clinical applications and wide-scale industrial use. 43-46

Gastrointestinal Hemorrhage

The most common complication of gastrointestinal tumors, or any other type of cancer that attacks the digestive tract, is gastrointestinal hemorrhage. 47 Accumulating studies have demonstrated that the older population has a significantly higher frequency of developing different cancers. However, a large proportion of older patients cannot tolerate invasive surgical resection and prefer to choose noninvasive drug treatment when acute gastrointestinal bleeding occurs.
There are several agents used to treat gastrointestinal bleeding, such as hemostatic spray powders, oral thrombin, and adrenaline solution. However, the hemostatic effect of these agents usually lasts for a short time and needs repeated administration due to low adhesion to the ulcer and fast dissolution in the digestive environment. 48 Thus, developing efficient biomaterials to treat gastrointestinal bleeding is highly desired in clinical practice.
Investigation of hemostatic hydrogels is increasing rapidly. To form therapeutic hydrogels at target sites, especially in fluidically and mechanically dynamic gastrointestinal environments, appropriate gelation time and bioadhesion to the target site are 2 major considerations during the preparation of hydrogels. Endoscopically injectable pH-responsive hydrogels were subsequently developed, which are suitable for biological use in monitoring the pH of ulcer sites, stopping bleeding, and accelerating the self-healing process. 48 For example, He et al presented endoscopically injectable pH-responsive adhesive and self-healing hydrogels for the treatment of gastrointestinal bleeding. It has been validated through animal models that this multifunctional hydrogel shows a suitable gelation time and good hemostatic properties. 48 Contrary to pH-responsive hydrogels, Xu et al explored hydrogels that exhibit ultrafast gelation and sufficient adhesion independent of pH, providing a protective barrier and accelerating the healing process of ulcers. 49 Compared with previous hemostatic powders, these therapeutic hydrogels can reduce the potential risk of biliary orifice obstruction, poor pancreatic drainage, and even choking. Rapid in-situ formation of stable and adhesive hydrogels achieves precision therapy of gastrointestinal ulcers.
Discussion and Future Perspectives
At present, the following problems are commonly encountered in clinical translation. First, despite the development of hydrogel-based drug screening and delivery systems, key technological challenges and practical adaptability remain major hurdles in their successful clinical translation. Second, immunological adverse events of hydrogels, such as inflammation, local pain, fibrosis, and indefinite long-term impact, remain a worrying limitation for wide-scale clinical translation. Third, although engineered tumor models have potential translatability in the clinic, great efforts are still needed to recapitulate the heterogeneity of tumors and enhance their ability to keep tumor samples viable outside the body. Last, as we have already described, most early precancerous lesions of the digestive tract require ESD treatment. However, ESD requires clinicians with more than 5 years of experience to perform the operation and has many disadvantages, such as general anesthesia, high bleeding risk, high perforation probability, and unaffordable hospitalization costs. Through endoscopic submucosal injection of individually tailored hydrogels, the complex and high-risk ESD procedure can be transformed into a simple and low-risk EMR procedure, which would substantially reduce medical expenses for patients and health systems. Therefore, we believe that hydrogels will have great potential for application in the endoscopic resection of early-stage precancerous lesions in the future.
In the future, avenues toward personalized precision medicine will take a leap by exploring more precise and more efficient hydrogel-based tumor models in-vitro for patient-tailored treatment. In particular, hydrogel-based tumor models currently represent innovative tools to address the research gap between basic development and clinical translation, moving complex tumor progression and novel drug development into the age of precision medicine.
Kruppel-Like Factor 2-Mediated Suppression of MicroRNA-155 Reduces the Proinflammatory Activation of Macrophages
Objective Recent evidence indicates that significant interactions exist between Kruppel-like factor 2 (KLF2) and microRNAs (miRNAs) in endothelial cells. Because KLF2 is known to exert anti-inflammatory effects and inhibit the pro-inflammatory activation of monocytes, we sought to identify how inflammation-associated miR-155 is regulated by KLF2 in macrophages. Approach and Results Peritoneal macrophages from wild-type (WT) C57Bl/6 mice were transfected with either recombinant adenovirus vector expressing KLF2 (Ad-KLF2) or siRNA targeting KLF2 (KLF2-siRNA) for 24 h–48 h, then stimulated with oxidized low-density lipoproteins (ox-LDL, 50 μg/mL) for 24 h. Quantitative real-time polymerase chain reaction showed that KLF2 markedly reduced the expression of miR-155 in quiescent/ox-LDL-stimulated macrophages. We also found that the increased expression of miR-155, monocyte chemoattractant protein (MCP-1) and interleukin (IL)-6 and the decreased expression of the suppressor of cytokine signaling (SOCS)-1 and IL-10 in ox-LDL-treated macrophages were significantly suppressed by KLF2. Most importantly, over-expression of miR-155 could partly reverse the suppressive effects of KLF2 on the inflammatory response of macrophages. Conversely, the suppression of miR-155 in KLF2 knockdown macrophages significantly overcame the pro-inflammatory properties associated with KLF2 knockdown. Finally, Ad-KLF2 significantly attenuated the diet-induced formation of atherosclerotic lesions in apolipoprotein E-deficient (apoE-/-) mice, which was associated with a significantly reduced expression of miR-155 and its relative inflammatory cytokine genes in the aortic arch and in macrophages. Conclusion KLF2-mediated suppression of miR-155 reduced the inflammatory response of macrophages.
Introduction
Inflammation is crucial for the initiation and progression of atherosclerosis, from the initial lesions to end-stage complications. Macrophage activation exacerbates the inflammatory responses in atheromatous plaques and promotes their structural instability [1]. The inflammatory response could therefore be a critical target in atheromatous lesions to prevent atherogenesis [2]. In recent years, it has become clear that Kruppel-like factor 2 (KLF2) is a central regulator of endothelial and monocyte/macrophage proinflammatory action [3,4]. Although the effects of KLF2 on macrophage activation predict that it likely inhibits vascular inflammation, the mechanisms of action of KLF2 in this process remain uncertain.
MiRNAs are small (~22 nucleotides long) single-stranded non-coding RNAs transcribed in the nucleus, processed by the enzymes Drosha (DROSHA) and Dicer (DICER1) and incorporated into RNA-induced silencing complexes that mediate the translational inhibition or degradation of target messenger RNAs [5]. Many miRNAs have been identified that play key roles in physiological and pathophysiological processes, including atherosclerosis [6,7]. MiR-155, a typical multi-functional miRNA, is emerging as a novel regulator involved in the inflammation signaling pathway in the pathogenesis of atherosclerosis. In macrophages, several miRNAs, including miR-155, miR-146, and miR-125b, have been found to be substantially up-regulated by Toll-like receptor (TLR) ligands [8,9]. Although the functional relevance of macrophage miR-155 expression is unclear, studies have indicated that miR-155 shows both anti- and proinflammatory effects by regulating TAB2 and SOCS-1, respectively [10,11,12]. However, the role of miR-155 in the pathogenesis of atherosclerosis remains unclear. Indeed, two recent studies have shown opposite results regarding the effects of bone marrow cells with miR-155 deficiency on the process of atherosclerosis. One report showed that bone marrow cells with miR-155 deficiency increased atherosclerosis in low-density lipoprotein receptor (LDLR)-/- mice fed a high-fat diet by generating a more pro-atherogenic immune cell profile and a more pro-inflammatory monocyte/macrophage phenotype, indicating that miR-155 is atheroprotective in that model [13], whereas another report showed that miR-155 promoted atherosclerosis in apoE-/- mice by repressing B-cell lymphoma 6 protein in macrophages, thus enhancing vascular inflammation, suggesting that miR-155 is proatherogenic [14].
Given that both KLF2 and miR-155 play key roles in regulating the function of macrophages in the activation of inflammation, we sought to investigate how miR-155 is regulated by KLF2 and might be responsible for mediating the suppression of the pro-inflammatory activation of macrophages by KLF2.
Recombinant adenoviral KLF2 over-expression
Experiments in which stable recombinant adenoviral KLF2 was over-expressed were performed by constructing recombinant adenoviral vectors expressing KLF2. The entire mouse KLF2 gene open reading frame was obtained by RT-PCR, cloned into the CMV-MCS-EGFP GV135 vector, and ligated into a shuttle plasmid. Subsequently, the shuttle plasmid and adenoviral backbone plasmid were co-transfected into HEK-293A cells to produce the recombinant adenoviral vector Ad-KLF2. An otherwise identical vector without KLF2 cDNA was used to generate empty viruses as controls (EV).
LDLs (density = 1.03 to 1.063 g/mL) were isolated from the plasma by preparative ultracentrifugation at 50 000 rpm for 22 hours using a type 50 rotor, as we previously described [15]. LDLs were dialyzed against phosphate-buffered saline (PBS) containing 0.3 mM EDTA, sterilized by filtration through a 0.22-μm filter, and stored under nitrogen gas at 4°C. The protein content was determined by the method of Lowry et al. Copper oxidation of LDL was performed by incubation of post-dialyzed LDL (1 mg of protein/mL in EDTA-free PBS) with copper sulfate (10 mM) for 24 hours at 37°C. Lipoprotein oxidation was confirmed by analysis of thiobarbituric acid-reactive substances.
Macrophage culture, transfection, siRNA-mediated gene knockdown, and adenovirus infection

Elicited peritoneal macrophages were collected from mice after an intraperitoneal injection of 1 mL of 3% thioglycolate (Sigma-Aldrich). Cells were resuspended in DMEM culture media supplemented with 5% FBS (Atlanta Biologicals) and plated at a concentration of 5 × 10^5 cells/mL on 10-cm culture plates for 18 hours. Cells were then detached, counted, and re-plated at the required cell density for further treatment. Treated macrophages were either lysed with TRIzol reagent for RNA extraction and RT-PCR or lysed with RIPA buffer containing 1% proteinase inhibitor and 1% phosphatase inhibitor cocktail for Western blotting.
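Quantitative RT-PCR readouts like these are conventionally converted to relative expression with the 2^-ΔΔCt method, normalizing the target (e.g., miR-155) to a reference gene (U6 or GAPDH) and to the control condition. A minimal sketch; the Ct values below are hypothetical, not data from this study:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by 2^-ΔΔCt: treated vs. control,
    each normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for miR-155 (target) vs. U6 (reference)
# in ox-LDL-treated vs. untreated macrophages.
fold = fold_change_ddct(24.0, 18.0, 26.0, 18.0)  # -> 4-fold up-regulation
```

A lower target Ct relative to the reference means more transcript, so a negative ΔΔCt yields a fold change greater than 1.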
Mouse treatment and specimen collection
Male apoE-/- mice at five weeks of age were purchased from the Department of Experimental Animals of the Medical Center, Peking University, and were fed a high-fat diet (HFD, Western Diet, TD.88137, Harlan Teklad, Madison, WI) containing 17.3% protein, 48.5% carbohydrate, 21.2% fat, and 0.2% cholesterol by weight, with 42% kcal from fat. C57Bl/6 wild type (WT) control mice were fed a normal chow diet (Oriental Yeast). The animals were bred and maintained in a specific pathogen-free (SPF) barrier facility at Tongji Medical College (Wuhan, China) with a 12-hour light-dark cycle. All animal experiments were approved by the Institutional Animal Care and Use Committee of Tongji Medical College. The protocol was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IORG No: IORG0003571). All surgery was performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering. After 1 week of accommodation, the mice were randomly divided into the following four groups: WT control mice treated with EV (WT+EV), WT control mice treated with Ad-KLF2 (WT+Ad-KLF2), apoE-/- mice fed a HFD and treated with EV (apoE-/-+HFD+EV), and apoE-/- mice fed a HFD and treated with Ad-KLF2 (apoE-/-+HFD+Ad-KLF2). The initial concentration of Ad-KLF2 and EV-control vectors was 5 × 10^11 plaque-forming units (pfu)/mL. They were diluted in sterile PBS and administered to the mice through a single bolus injection via the jugular vein. Animals were anesthetized with avertin during vector injection. After 18 weeks of an atherogenic diet, mice were anesthetized and euthanized, and their aortas were harvested for analysis.
Atherosclerotic lesion analysis
The arch portions of the aorta were removed, perfused with PBS, and snap-frozen in optimal cutting temperature (OCT) medium (Tissue-Tek). To measure the plaque size, longitudinal sections of the aortic arches were examined under the microscope. The composition of the atherosclerotic lesions was analyzed on cryosections of aortic arch tissues fixed in acetone, air-dried, and stained with hematoxylin-eosin (H&E). The total plaque lesion area in the ostia of the innominate, common carotid, and subclavian arteries of each mouse was used to compute averages per group [16,17,18]. Quantification of the atherosclerotic areas was performed by using computer-assisted image quantification (Image-Pro Plus software, Media Cybernetics).
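The computer-assisted quantification step amounts to counting lesion-positive pixels in a segmented section image and scaling by the pixel footprint, then averaging per group. A pure-Python sketch of that calculation on hypothetical binary masks (the masks and the 2-µm pixel size are illustrative, not the study's images):

```python
def lesion_area_um2(mask, um_per_px=2.0):
    """Plaque area from a binary mask (1 = lesion pixel):
    pixel count x pixel area in µm²."""
    n_px = sum(sum(row) for row in mask)
    return n_px * um_per_px ** 2

# Hypothetical segmented sections from two mice in one group.
masks = [
    [[0, 1, 1],
     [0, 1, 0]],   # 3 lesion pixels
    [[1, 1, 0],
     [1, 1, 0]],   # 4 lesion pixels
]
areas = [lesion_area_um2(m) for m in masks]
mean_area = sum(areas) / len(areas)  # per-group average, µm²
```

Real pipelines threshold the H&E image first; once a binary mask exists, the area arithmetic is exactly this.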
Western blot analysis
Proteins were extracted from aortic arch samples, and cell lysate proteins were separated by SDS-polyacrylamide gel electrophoresis and transferred onto PVDF membranes (Amersham Pharmacia Biotech, Uppsala, Sweden). After incubation with the primary antibodies, the membranes were incubated with a peroxidase-conjugated secondary antibody, and the signals were visualized using a chemiluminescence kit (Amersham Pharmacia Biotech). The bands were scanned and quantified using Kodak 1D Image Analysis software. The protein levels were normalized to β-actin and plotted as indicated.
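Densitometric normalization to β-actin reduces to a ratio of ratios: each lane's target intensity is divided by its β-actin intensity, then expressed as fold of the control lane. A sketch with hypothetical band intensities (the values and the SOCS-1 example are illustrative):

```python
def normalized_fold_change(target, actin, control_idx=0):
    """Band intensities normalized lane-by-lane to beta-actin,
    then expressed as fold of the control lane."""
    norm = [t / a for t, a in zip(target, actin)]
    return [v / norm[control_idx] for v in norm]

# Hypothetical densitometry for SOCS-1 (lanes: control, treated),
# with equal beta-actin loading across lanes.
socs1 = [100.0, 150.0]
actin = [50.0, 50.0]
folds = normalized_fold_change(socs1, actin)  # control = 1.0, treated = 1.5
```

Normalizing per lane corrects for unequal protein loading before any between-condition comparison is made.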
ELISAs for cytokines
The peritoneal macrophages were collected, and transfection experiments were performed as described in the previous section. Culture supernatants were assayed for the cytokine levels of MCP-1, IL-6, and IL-10 by ELISA according to the manufacturer's instructions. Each sample was tested for each cytokine in triplicate.
Statistical analysis
All of the statistical analyses were carried out using SPSS version 11 software. Unless stated otherwise, the means (±SD) of at least 3 independent experiments are shown. Statistical evaluations were carried out using analysis of variance with Tukey's post-hoc test. P<0.05 was considered statistically significant.
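The one-way ANOVA underlying these comparisons is the ratio of between-group to within-group mean squares; the Tukey post-hoc test then compares group pairs once the overall F is significant. A pure-Python sketch of the F statistic (the triplicate readings below are hypothetical, not the study's measurements):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical triplicate cytokine readings for three treatment groups.
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

In practice the same computation is available as `scipy.stats.f_oneway`, with Tukey pairwise comparisons via `statsmodels.stats.multicomp.pairwise_tukeyhsd`.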
Results
Ox-LDL-induced miR-155/SOCS-1 and the pro-inflammatory responses of macrophages are regulated by KLF2

Macrophages respond to various inflammatory stimuli through the differential regulation of a small set of miRNAs and increased expression of pro-inflammatory factors and cytokines/chemokines [8]. It has been reported that SOCS-1, a negative regulator of the TLR4-mediated inflammation pathway, is a miR-155 target protein. Because KLF2 is known to exert anti-inflammatory effects, we undertook over-expression studies to gain insight into the role of KLF2 in the regulation of miR-155 and the expression of its target gene SOCS-1 in macrophages. Macrophages from WT C57Bl/6 mice transfected with either EV or Ad-KLF2 for 24 h-48 h were then stimulated with ox-LDL (50 μg/mL) for 24 h. RT-PCR was performed to quantify the levels of miR-155, SOCS-1 and the cytokines MCP-1, IL-6, and IL-10. The levels of SOCS-1 protein and cytokines were also measured by Western blot and ELISA, respectively. The results show that KLF2 markedly reduced the expression of miR-155 in unstimulated macrophages (Fig 1A, p<0.05). We also showed that the exposure of macrophages to ox-LDL led to marked up-regulation of miR-155 but down-regulation of SOCS-1 (Fig 1B and 1C, p<0.05). Most importantly, the ox-LDL-induced increase in miR-155 decreased the expression of SOCS-1 protein, associated with a rise in MCP-1 and IL-6 and a decline in IL-10 in macrophages, and these changes were significantly suppressed by KLF2 (Fig 1D-1F, # p<0.001 and * p<0.05 for the indicated comparisons).
To further confirm the regulation of macrophage miR-155, its target gene SOCS-1 and inflammatory cytokines by KLF2, small interfering RNA-mediated (siRNA-KLF2) knockdown studies were also undertaken. Consistent with the results of our over-expression studies, our data showed that miR-155 was increased by KLF2 knockdown in un-stimulated and ox-LDL-stimulated macrophages (Fig 2A, p<0.05). KLF2 knockdown increased ox-LDL-induced pro-inflammatory activation of macrophages by increasing miR-155 and decreasing SOCS-1 expression (Fig 2B-2F). By using both gain-of-function and loss-of-function approaches, our data strongly suggest that miR-155 and its target gene SOCS-1 expression in macrophages are regulated by KLF2 and are correlated with the expression of inflammatory cytokines.
Role of miR-155 in KLF2-mediated inhibition of macrophage activation
To demonstrate the functional relationship between miR-155 and KLF2, mimic-miR-155 or mimic-miR-155 control was transfected into KLF2 over-expressing peritoneal macrophages using Lipofectamine 2000. Unless otherwise stated, the cells were incubated for 24 hours posttransfection and then exposed to ox-LDL (50 μg/mL) for another 24 hours. The production of cytokines in macrophages was determined by ELISA. Our results showed that increased levels of MCP-1 and IL-6 in macrophages stimulated by ox-LDL were further enhanced by the over-expression of miR-155. Most interestingly, restoration of miR-155 levels in KLF2 over-expressing macrophages increased MCP-1 and IL-6 but reduced IL-10 expression, indicating that the over-expression of miR-155 could partly reverse the suppressing effects of KLF2 on the macrophage inflammatory response induced by ox-LDL (Fig 3A and 3B, # p<0.05 for the indicated comparisons). To further confirm a possible role of miR-155 in the KLF2-mediated inhibition of macrophage inflammatory responses, we transfected anti-miR-155 (inhibitor) into KLF2 knockdown macrophages and exposed the transfected cells to ox-LDL after 24 hours. The anti-miR-155 transfected cells showed that the increased levels of MCP-1 and IL-6 but decreased levels of IL-10 induced by ox-LDL were significantly reduced compared with the miRNA control transfected cells. The suppression of miR-155 activity in KLF2 knockdown macrophages with anti-miR-155 significantly overcame the pro-inflammatory properties associated with KLF2 knockdown (Fig 3C and 3D, # p<0.05 for the indicated comparisons).
The effect of KLF2 on miR-155/SOCS-1, cytokine gene expression and atherosclerotic lesion formation in the aortic arches of apoE-deficient mice

WT or apoE−/− mice at five weeks of age received one bolus injection of Ad-KLF2 or EV as a nonspecific control. After 18 weeks of an atherogenic diet, mice were euthanized, and their aortic arches were harvested for analysis. We then examined the expression of KLF2, miR-155, SOCS-1 and cytokine genes by RT-PCR and Western blotting, and the formation of atherosclerotic lesions in aortic arch sections stained with H&E, as described previously [16,17,18]. RT-PCR analysis showed that KLF2 was reduced in proatherogenic mice treated with EV compared with WT control mice but was increased in the aortic arches of WT mice + Ad-KLF2 and of apoE-/- mice + HFD + Ad-KLF2 (Fig 4A, p<0.05). Additionally, the aortic arches of apoE-/- + HFD mice expressed higher levels of miR-155 (Fig 4B) but lower levels of SOCS-1 (Fig 4C and 4E) compared with WT mice. KLF2 over-expression significantly decreased the expression of miR-155 (Fig 4B) but increased that of its target gene SOCS-1 (Fig 4C and 4E), in association with decreased expression of the inflammatory cytokine genes (MCP-1, IL-6) but increased expression of the anti-inflammatory cytokine gene (IL-10) in the aortic arches of apoE-/- mice after 18 weeks of a HFD (Fig 4D and 4F). Examination of the aortic arches (Fig 4H) showed that the atherosclerotic lesions typically developed at lesion-prone sites, such as the lesser curvature of the aortic arch and the ostia of the innominate, common carotid, and subclavian arteries. There was a marked decrease in the progression of atherosclerotic lesions in apoE-/- + HFD mice treated with Ad-KLF2. Quantitative analysis revealed a significant decrease in the area of the atherosclerotic lesions in Ad-KLF2-treated mice compared with apoE-/- + HFD mice (Fig 4G, 408±11 μm² versus 598±48 μm²; p<0.05).

(Fig 1 legend) Ox-LDL-induced miR-155/SOCS-1 and pro-inflammatory responses in macrophages are regulated by KLF2 over-expression. The effects of KLF2 over-expression on miR-155 expression in macrophages (Mø) cultured alone or activated by ox-LDL were examined. Mø from WT C57BL/6 mice were infected with either empty virus (EV, as a control) or virus containing Ad-KLF2 for 24-48 h and then stimulated with ox-LDL (50 μg/ml) for 24 h. Total RNA was extracted and subjected to RT-PCR analysis to determine miR-155 expression levels. Bar graphs indicate relative miR-155 levels normalized to U6 mRNA levels (A). The effects of KLF2 over-expression on SOCS-1 mRNA (B) and protein (C) expression in Mø cultured alone or activated by ox-LDL were assessed. Data were normalized to an endogenous internal control gene (GAPDH for mRNA and β-actin for protein). The effects of KLF2 over-expression on MCP-1, IL-6, and IL-10 mRNA expression in Mø cultured alone or activated by ox-LDL were examined. Bar graphs depict relative MCP-1, IL-6, and IL-10 mRNA levels normalized to GAPDH mRNA levels (D). Culture supernatants were assessed by ELISA to determine levels of the cytokines MCP-1, IL-6, and IL-10 (E and F). All data are presented as means ± S.D. for three independent experiments (#, *, and δ indicate p<0.001, p<0.05, and p>0.05, respectively, for the indicated comparisons).
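The lesion-area comparison above (408±11 μm² versus 598±48 μm²) can be sanity-checked with a Welch's t-test computed from summary statistics alone; the group size is not stated in this excerpt, so n = 10 mice per group is purely an illustrative assumption.

```python
import math

def welch_t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from group summaries."""
    se1, se2 = sd1**2 / n1, sd2**2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# Ad-KLF2-treated vs apoE-/- + HFD lesion areas (μm²); n = 10 is assumed
t, df = welch_t_from_summary(408, 11, 10, 598, 48, 10)
print(round(t, 1), round(df, 1))
```

With |t| ≈ 12 on roughly 10 degrees of freedom, the reported p<0.05 is easily met under this assumed group size.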
The effect of KLF2 on miR-155/SOCS-1 and the expression of cytokine genes in pro-inflammatory macrophages
We next sought to assess the effect of KLF2 on macrophage function. Murine peritoneal macrophages were obtained from the above four treatment groups. The expression of KLF2, miR-155 and SOCS-1 and the production of cytokines in macrophages were determined by RT-PCR. We found a much higher miR-155 (Fig 5B) but lower SOCS-1 (Fig 5C) expression level in the peritoneal macrophages of HFD-treated apoE-/- mice, associated with a reduced KLF2 (Fig 5A) expression level, compared with macrophages from WT mice. The expression of KLF2 was increased in the peritoneal macrophages of Ad-KLF2- and HFD-treated apoE-/- mice compared with those from HFD-treated apoE-/- mice (Fig 5A), accompanied by a significantly decreased miR-155 (Fig 5B) but increased SOCS-1 expression level (Fig 5C). Most importantly, macrophages from Ad-KLF2-treated mice were significantly decreased in their capacity to produce pro-inflammatory cytokines/chemokines (MCP-1, IL-6) compared with macrophages from HFD-treated apoE-/- mice. In contrast, the production of the anti-inflammatory cytokine IL-10 was enhanced (Fig 5D).

(Fig 2 legend, continued) The effects of silencing KLF2 on MCP-1, IL-6, and IL-10 mRNA expression in Mø cultured alone or activated by ox-LDL were examined. Bar graphs depict relative MCP-1, IL-6, and IL-10 mRNA levels normalized to GAPDH mRNA levels (D). Culture supernatants were evaluated by ELISA to determine levels of the cytokines MCP-1, IL-6, and IL-10 (E and F). All data are presented as means ± S.D. for three independent experiments (#, *, and δ indicate p<0.001, p<0.05, and p>0.05, respectively, for the indicated comparisons).
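The relative expression values behind these RT-PCR bar graphs (target normalized to U6 or GAPDH, then expressed relative to the control group) are conventionally computed with the 2^−ΔΔCt method; the sketch below is a generic illustration with made-up Ct values, not the authors' actual data.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt (Livak) method."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                   # relative to control group
    return 2 ** (-ddct)

# hypothetical Ct values: miR-155 vs U6, ox-LDL-treated vs untreated Mø
fold = fold_change_ddct(22.0, 20.0, 25.0, 20.0)
print(fold)  # 8.0 -> miR-155 appears 8-fold up-regulated
```

A lower Ct means more template, so a treated ΔCt that is three cycles smaller than the control ΔCt corresponds to a 2³ = 8-fold increase.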
Discussion
The shear-responsive transcription factor KLF2 is a critical regulator of the patterns of endothelial gene expression induced by atheroprotective flow [19]. Of considerable interest, recent evidence indicates that significant interactions exist between KLF2 and miRNAs in endothelial cells. In particular, miR-126 has been reported to be up-regulated by flow in a KLF2-dependent manner in zebrafish embryos [20]. Atheroprotective flow causes a down-regulation of miR-92a in endothelial cells, which in turn elevates KLF2 mRNA [21]. Additionally, miR-143/145 are regulated by KLF2 in endothelial cells and may contribute to the vasculo-protective functions of KLF2 [22]. Although KLF2 is known to exert anti-inflammatory effects and inhibit the pro-inflammatory activation of monocytes, whether KLF2 also affects the expression of miRNAs in macrophages, and the role this plays in preventing the pro-inflammatory activation of macrophages, has remained elusive to date. MiR-155 has been shown to play important roles in immunity and inflammation, particularly in the inflammatory responses of macrophages, implying that it may also be involved in atherogenesis. However, the function of miR-155 in ox-LDL-stimulated inflammation and atherosclerosis remains unclear. Interestingly, the treatment of macrophages with ox-LDL appears to suppress several miRNAs induced after inflammatory stimulation, such as miR-146a, miR-155, and miR-21 [23,24]. However, ox-LDL can also up-regulate miR-125a-5p and miR-155, which reduces the accumulation of lipids and the secretion of cytokines in macrophages [25,26]. In the current study, we observed that the expression of the inflammation-associated miR-155 was significantly suppressed by KLF2 in unstimulated and ox-LDL-stimulated macrophages (Fig 1A). In contrast, silencing of KLF2 markedly increased miR-155 expression in unstimulated and ox-LDL-stimulated macrophages (Fig 2A).
In agreement with recently published studies [27,28], we also showed that the exposure of macrophages to ox-LDL led to a marked up-regulation of miR-155 expression, which was positively correlated with the expression of pro-inflammatory cytokines (Fig 1D-1F). Moreover, recombinant adenovirus-mediated KLF2 significantly attenuated diet-induced atherosclerotic lesion formation in apoE-/- mice, in association with a significant decrease in the expression of miR-155 and the inflammatory cytokine genes (MCP-1, IL-6) as well as increased expression of the anti-inflammatory cytokine gene (IL-10) in the aortic arches and macrophages of pro-atherogenic mice (Fig 4).

(Fig 4 legend) Five-week-old male littermate apoE-/- mice and WT controls were fed a high-fat, high-cholesterol diet (HFD) and a normal diet, respectively. Mice were randomly divided into the following four groups: WT+EV, WT+Ad-KLF2, apoE-/-+HFD+EV, and apoE-/-+HFD+Ad-KLF2. After 18 weeks on an atherogenic diet, the mice were euthanized, and the arch portions of their aortas were harvested for analysis. KLF2, miR-155, and SOCS-1 expression in the aortic arch were analyzed by RT-PCR, and the areas of atherosclerotic lesions in aortic arch regions were detected by hematoxylin-eosin staining. Bar graphs indicate relative mRNA levels for KLF2 normalized to GAPDH mRNA levels (A); miR-155 normalized to U6 mRNA levels (B); SOCS-1 normalized to GAPDH mRNA levels (C); and cytokine genes (MCP-1, IL-6, and IL-10) normalized to GAPDH mRNA levels (D). SOCS-1 (E) and cytokine (F) protein expression in the aortic arch were detected by Western blotting. Treatment with Ad-KLF2 was associated with a reduction in aortic arch atherosclerosis among apoE-/- mice fed an HFD. The brachiocephalic trunk artery (BCA), left common carotid artery (LCCA) and left subclavian artery (LSA) are indicated in the figure (H). Ad-KLF2 treatment produced a reduction in the size of atherosclerotic lesions. Photomicrographs of longitudinal sections of the mouse aortic arch-wall region were analyzed by computer-assisted image quantification (G) (#, *, and δ indicate p<0.001, p<0.05, and p>0.05, respectively, for the indicated comparisons).
To further confirm a possible role of miR-155 in the KLF2-mediated inhibition of the macrophage inflammatory response, we found by using gain-of-function and loss-of-function approaches that the increased levels of MCP-1 and IL-6 but decreased level of IL-10 in macrophages induced by ox-LDL were further enhanced by over-expression of miR-155. Most interestingly, restoration of miR-155 levels in KLF2 over-expressing macrophages increased MCP-1 and IL-6 but reduced IL-10 expression, indicating that over-expression of miR-155 could partly reverse the suppressive effects of KLF2 on the macrophage inflammatory response induced by ox-LDL (Fig 3A and 3B). Conversely, the increased levels of MCP-1 and IL-6 but decreased level of IL-10 induced in macrophages by ox-LDL were suppressed by inhibition of miR-155. The suppression of miR-155 activity in KLF2-knockdown macrophages with anti-miR-155 significantly overcame the pro-inflammatory properties associated with KLF2 knockdown (Fig 3C and 3D). These results indicate that the KLF2 inhibition of the pro-inflammatory activation of macrophages is at least partly due to KLF2-mediated suppression of the expression of miR-155.
The KLF2 transcription factor has previously been shown to modulate miRNA expression in several cell types. For example, KLF2 binds to the promoter of the miR-143/145 gene cluster to up-regulate the expression of vascular protective genes in endothelial cells [22]. Additionally, KLF2 also mediates the expression of miR-126 in endothelial and glioma cells [20,29]. Lingrel and colleagues recently observed reduced expression of miR-124a and miR-150 in macrophages from myeKlf2-/- mice, indicating that KLF2 directly mediates the expression of these two miRNAs in macrophages [30]. To investigate the possibility that KLF2 may directly induce miR-155 transcription, we analyzed the promoter of miR-155 in silico using MatInspector [31]. However, we were unable to detect a KLF2 transcription factor binding site in the miR-155 promoter.
Mechanistically, even though a consensus KLF2 binding sequence in the miR-155 promoter has not yet been identified, the KLF2-regulated transcriptome probably contains a large number of indirect targets as well, because KLF2 has been reported to regulate the expression of over a thousand genes [32,33]. Most of the anti-inflammatory effects of KLF2 (other than direct eNOS induction) are probably indirect. For instance, KLF2 has been shown to inhibit the transcriptional activity of NF-κB, leading to attenuation of the expression of inflammatory genes [34]. Recent studies have shown that several transcription factors, including AP-1, C-myb, and NF-κB, up-regulate the expression of miR-155 in the immune system [35,36,37], but the transcriptional repressors of miR-155 remain unknown. Although the molecular basis of the KLF2-mediated inhibition of miR-155 in macrophages remains unknown, our study raises the interesting possibility that the ability of KLF2 to regulate various biological processes may be related to its ability to directly regulate gene transcription as well as to indirectly modulate cellular miRNAs. Because KLF2 and miR-155 play key roles in regulating the function of macrophages in inflammation, additional studies aimed at identifying the relationship between KLF2 expression and miRNA levels in macrophages are warranted.
MELIOIDOSIS MIMICKING PULMONARY TUBERCULOSIS
Melioidosis is caused by the soil bacterium Burkholderia pseudomallei . The bacterium is an oxidase-positive, motile gram-negative bacillus, showing bipolar staining. While most cases are considered to be from percutaneous inoculation, inhalation is also a well-recognized mode of infection.
Introduction
Melioidosis is caused by the soil bacterium Burkholderia pseudomallei. The bacterium is an oxidase-positive, motile gram-negative bacillus, showing bipolar staining. While most cases are considered to be from percutaneous inoculation, inhalation is also a well-recognized mode of infection 1 .
Melioidosis is endemic in Southeast Asia, Northern Australia and the Indian subcontinent 2 . Sri Lanka is situated in the endemic region for melioidosis and the incidence of the disease is increasing 3 . Melioidosis can present with a variety of clinical manifestations, and the clinical course may be acute, subacute or chronic. In subacute and chronic forms involving the respiratory system, the presenting features may resemble those of other chronic pulmonary infections, including tuberculosis. Melioidosis tends to cause suppurative visceral lesions which may accompany the pulmonary manifestations 4 . It has been referred to as the 'great mimicker' by various authors 5 . Presentations mimicking tuberculosis are an important clinical consideration, as a significant number of patients are diagnosed clinically with tuberculosis despite negative bacteriological tests. We report a case of melioidosis with a clinical presentation similar to tuberculosis.
Case Presentation
A 29-year-old army soldier serving in Nuwara Eliya for the past 5 years presented to our unit with intermittent fever, loss of appetite and loss of weight of 3 months' duration. He also complained of a mild intermittent cough of 2 months' duration. Prior to his posting to Nuwara Eliya he was engaged in paddy farming at Mahiyangana. He had a history of IgA nephropathy, detected 6 months previously, and was on daily low-dose corticosteroids.
Examination revealed a febrile patient with a temperature of 39 °C. His pulse rate was 112/minute and his blood pressure was 140/90 mmHg. Upon admission the patient developed right-sided flank pain, and renal angle tenderness was elicited on the same side. The rest of the systemic examination was unremarkable.
The hemoglobin concentration was 10.5 g/dl and the white blood cell count was 32.5×10⁹/mm³, comprising 74% neutrophils, 22% lymphocytes and 3% eosinophils. The platelet count was 464,000/mm³. Urine microscopy showed pyuria (field full of pus cells/HPF); however, the urine culture was sterile. The renal function tests were impaired, revealing a blood urea level of 16.7 mg/dL and a creatinine value of 2.13 mg/dL. The serum electrolyte levels were normal. The liver function tests were within normal limits. The HIV antibody test was negative. The ESR was 120 mm in the first hour. The CRP titre was 40 IU/L. Ultrasonography of the abdomen revealed right-sided renal fullness compatible with pyelonephritis.
The initial chest radiograph (Figure 1) revealed a consolidation with cavity formation in the left upper lobe, for which the most common differential diagnosis is pulmonary tuberculosis.
Sputum for acid-fast bacilli (AFB) was negative and sputum AFB culture was also negative. A tuberculin test carried out with 5 TU of PPD showed an induration of 13 mm diameter. The melioidosis antibody titre was highly positive at 1/5120, and blood culture yielded Burkholderia species, probably B. pseudomallei. The isolate was sensitive to Meropenem and Ciprofloxacin.
The patient was commenced on the initial intensive phase of management with intravenous Meropenem 1 g every 8 hours, with close monitoring of renal function. His fever subsided within 72 hours and there was a dramatic improvement in his renal function, white cell count and CRP over the subsequent two weeks of treatment. Repeat chest radiography showed significant resolution of the pulmonary lesions (Figure 2), while the repeat ultrasound scan of the abdomen was normal and showed resolution of the pyelonephritis.
The intensive phase was continued over 28 days and the patient was discharged on Cotrimoxazole and Ciprofloxacin. The maintenance dose of Prednisolone was continued as the therapy for IgA nephropathy.
Following discharge the patient was monitored biweekly with regard to his clinical status. At the end of 3 months of therapy the white cell count was 10×10⁹/mm³ while the ESR was 42. Chest radiography at that point showed significant improvement (Figure 3). The therapy was continued for 6 months. The patient remains well up to now after completion of treatment which was 9
Discussion
This patient's clinical presentation and chest radiography closely mimicked tuberculosis. Repeatedly negative bacteriological tests for tuberculosis should alert clinicians to look for alternative diagnoses. The concomitant suppuration, which was pyelonephritis in our patient, along with the pulmonary lesions should raise the possibility of melioidosis. Since the detection of melioidosis in Sri Lanka is increasing, it is an important differential diagnosis to be considered in patients suspected of tuberculosis without microbiological confirmation. Early initiation of treatment is important as rapid deterioration with fatal outcome has been reported 6 .
The infection is known to have a prolonged latent period with possible reactivation into acute and fulminating infection 2 . The reactivation of latent disease is often associated with concurrent diseases such as diabetes mellitus, chronic lung disease and chronic renal failure, which are considered risk factors for developing melioidosis. Use of steroids is also associated with an increased risk of melioidosis 2 . The steroid therapy for IgA nephropathy in our patient probably resulted in immune suppression leading to activation of a latent B. pseudomallei infection acquired previously. This case report highlights the importance of considering other differential diagnoses in patients suspected of tuberculosis, as their management differs and delayed treatment can significantly increase morbidity and mortality.
Treatment with high-dose recombinant human hyaluronidase-facilitated subcutaneous immune globulins in patients with juvenile dermatomyositis who are intolerant to intravenous immune globulins: a report of 5 cases
Background High-dose intravenous immune globulins (IVIg) are frequently used in refractory juvenile dermatomyositis (JDM) but are often poorly tolerated. High-dose recombinant human hyaluronidase-facilitated subcutaneous immune globulins (fSCIg) allow the administration of much higher doses of immune globulins than conventional subcutaneous immune globulin therapy and may be an alternative to IVIg. The safety and efficacy of fSCIg therapy in JDM is unknown. Case Presentation In this retrospective case series, five patients with steroid-refractory severe JDM were treated with high-dose fSCIg due to IVIg adverse effects (severe headaches, nausea, vomiting, difficult venous access). Peak serum IgG levels, muscle enzymes, the childhood myositis assessment scale and adverse effects were retrieved for at least 6 months following initiation of fSCIg. Data were analyzed by descriptive statistics. Patients initially received fSCIg 1 g/kg every 14 days, resulting in median IgG peak levels of 1901 mg/dl (1606–2719 mg/dl), compared to median IgG peak and trough levels while previously receiving IVIg of 2741 mg/dl (2429–2849 mg/dl) and 1351 mg/dl (1156–1710 mg/dl). Additional antirheumatic therapies consisted of low-dose glucocorticoid therapy, methotrexate, mycophenolate mofetil and/or rituximab. Two patients maintained clinically inactive disease and three patients had only a partial treatment response. In the three patients with partial treatment response, fSCIg 1 g/kg was then given on days 1 and 6 of every 28-day cycle, resulting in IgG peak levels of 2300–2846 mg/dl (previously 1606–1901 mg/dl on the biweekly regimen) and in clinically inactive disease in two of the three patients. There were no relevant adverse effects that limited continuation of fSCIg treatment. Conclusions High-dose fSCIg is well-tolerated in patients with JDM, and high peak serum IgG levels can be achieved, which may be important for treatment success.
High-dose fSCIg may therefore be an alternative to high-dose IVIg and deserves further study. Trial registration This is a case series and data were retrospectively registered.
Background
Juvenile dermatomyositis (JDM) is a severe inflammatory myopathy characterized by vasculopathy affecting skin, muscle and sometimes internal organs [1]. While mortality is low with contemporary treatment, morbidities such as dystrophic calcification, contractures and muscle weakness still cause a substantial long-term disease burden [2,3]. However, little high-quality data from clinical trials exists to guide treatment decisions, and treatment is mostly guided by expert opinion and consensus treatment protocols [1,4-6]. There is evidence that the routine treatment of moderately severe JDM should include high-dose steroids and methotrexate [7]. Additional steroid-sparing immunomodulatory therapies are often employed, especially intravenous immune globulins (IVIg), hydroxychloroquine, cyclosporine, azathioprine, mycophenolate mofetil and rituximab. Even though randomized clinical trials for IVIg in JDM are lacking, it appears to be highly effective in severe or refractory JDM [8,9]. Typically, the administration of up to 2 g per kg and month of IVIg is required to achieve treatment success, since anti-inflammatory efficacy may depend on peak serum immune globulin (Ig)G levels [10,11]. High-dose IVIg treatment causes adverse effects in 5-10 % of patients [11], including severe headaches, nausea, vomiting, aseptic meningitis and thrombosis, and venous access may become difficult over time, whereas subcutaneous Ig (SCIg) treatment is usually well-tolerated [12]. However, since the extracellular matrix limits subcutaneous bulk fluid flow, the maximal volume infusible by standard SCIg in one site is 30 ml, effectively limiting the amount of Ig that can be applied in a single treatment and, thus, its utility as an anti-inflammatory therapy. By means of facilitating SCIg with recombinant human hyaluronidase (rHUPH20) (fSCIg), a 20-fold higher volume of up to 600 ml fSCIg (equaling 60 g of Ig) can be administered at one site [13].
Cleaving hyaluronic acid increases permeability of the subcutaneous tissue markedly. Since rHUPH20 is not available systemically, short-acting and non-immunogenic, and since tissue hyaluronic acid is rapidly resynthesized, there are typically no short-term or long-term adverse effects. Regarding the pharmacokinetics of fSCIg, the median time to reach peak serum IgG levels is five days [13]. fSCIg is currently approved for the use in patients with primary immunodeficiency disorders older than 18 years of age but has been tested in children older than two years of age. The safety and efficacy of high-dose fSCIg therapy in JDM is unknown.
Patients
Five patients with moderately severe or severe and refractory definitive JDM who had initially received IVIg were treated with fSCIg for at least six months were identified at the German Center for Pediatric and Adolescent Rheumatology in Garmisch-Partenkirchen between January 2012 and December 2015 [5]. Overall, 42 patients with JDM were treated at the center in that time frame, of whom 26 had received IVIg (62 %). Informed consent was obtained from all patients and guardians. Patient demographics, myositis-specific autoantibodies, clinical phenotype, disease severity [5], concurrent antirheumatic therapy and reasons for initiation of/switching to fSCIg treatment are shown in Table 1.
Administration of subcutaneous immune globulins
Patients 1-4 started fSCIg treatment 4-6 weeks after the last IVIg dose, and patient 5, who had not recently received IVIg, started fSCIg after a disease flare. All patients received high-dose fSCIg with recombinant human hyaluronidase (HyQvia, Baxalta, Unterschleißheim, Germany) between 1.7 to 2 g/kg per month (maximally 70 g per month) divided into two doses. Patients each received two inpatient training fSCIg treatments (first 0.3 g/kg, then 1 g/kg). Following hospital discharge, patients and parents received one or two fSCIg treatments at home guided by a nurse practitioner especially trained in the application of fSCIg. Eutectic mixture of local anesthetics was applied prior to the subcutaneous infusions and fSCIg was administered according to the manufacturer's instruction with maximal infusion rates of 160 ml/h (body weight <40 kg). Each individual fSCIg administration took on average three to four hours. For three patients, the regimen was later switched to two monthly fSCIg doses five days apart, i.e., the same monthly total dose was given divided into two doses at days 1 and 6 of each 28-day cycle.
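The per-infusion volumes and times quoted above follow directly from the dosing arithmetic: fSCIg preparations of this kind are 10% IgG solutions (100 mg/ml), so a back-of-the-envelope check reproduces the three-to-four-hour administration. The patient weight below is an illustrative assumption, and ramp-up time is ignored.

```python
# Rough check of fSCIg infusion volume and duration. The 10% (100 mg/ml)
# IgG concentration is standard for this product class; the 35 kg patient
# weight is an illustrative assumption, not taken from the report.
DOSE_G_PER_KG = 1.0          # per administration, as in the report
IG_CONC_MG_PER_ML = 100.0    # 10% IgG solution
MAX_RATE_ML_PER_H = 160.0    # maximal rate used for body weight < 40 kg

def infusion_plan(weight_kg):
    dose_g = DOSE_G_PER_KG * weight_kg
    volume_ml = dose_g * 1000 / IG_CONC_MG_PER_ML
    hours_at_max_rate = volume_ml / MAX_RATE_ML_PER_H
    return dose_g, volume_ml, hours_at_max_rate

dose, vol, hrs = infusion_plan(35)   # hypothetical 35 kg child
print(dose, vol, round(hrs, 1))      # 35 g -> 350 ml -> ~2.2 h at full rate
```

Adding the slower initial ramp-up brings the total close to the three to four hours per administration reported above.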
Data collection and analysis
As part of the clinical routine at the center multiple clinical and laboratory parameters are collected prospectively for all patients with JDM. For this retrospective analysis, the following clinical and laboratory parameters were retrieved for analysis: serum IgG levels, muscle enzyme levels, childhood myositis assessment scale (CMAS) score, physician global assessment of disease activity, and potential treatment adverse effects. Data were analyzed using descriptive statistics.
Serum IgG levels
For the various time points of measuring serum IgG levels (before Ig therapy, peak/trough during IVIg, and peak during SCIg), up to nine different data points were available, and mean IgG levels were calculated for each patient and time point. Overall, fSCIg treatment every 14 days resulted in median IgG peak levels, measured five days after administration, of 1901 mg/dl (range 1606-2719 mg/dl), compared to median IgG peak (one day after dose) and trough levels (28 days after dose) while previously receiving IVIg 2 g per kg and month of 2741 mg/dl (range 2429-2849 mg/dl) and 1351 mg/dl (1156-1710 mg/dl), respectively. In order to achieve higher peak serum levels and improved immunomodulatory efficacy, for three patients the fSCIg administration was switched to two monthly doses five days apart (i.e., days 1 and 6 of each 28-day cycle). Following fSCIg 1 g/kg on days 1 and 6, IgG serum levels on day 11 were much increased: 2846 mg/dl (up from 1901 mg/dl), 2300 mg/dl (up from 1774 mg/dl), and 2757 mg/dl (up from 1606 mg/dl). Individual levels and courses are also shown in Fig. 1.
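The higher peaks seen when the two monthly doses are given five days apart rather than 14 are what simple superposition of first-order IgG elimination predicts. The sketch below assumes instantaneous absorption and a typical IgG half-life of about 21 days; both are simplifications not taken from the report.

```python
import math

HALF_LIFE_DAYS = 21.0                 # assumed typical IgG half-life
K_ELIM = math.log(2) / HALF_LIFE_DAYS

def relative_level(t, dose_days):
    """IgG level (in unit-dose equivalents) at day t from unit doses given
    at dose_days, assuming instantaneous absorption and first-order decay."""
    return sum(math.exp(-K_ELIM * (t - d)) for d in dose_days if t >= d)

def cycle_peak(dose_days):
    """Peak level over one 28-day cycle, sampled daily."""
    return max(relative_level(t, dose_days) for t in range(29))

peak_spread = cycle_peak([0, 14])   # two doses 14 days apart
peak_close = cycle_peak([0, 5])     # doses on days 1 and 6 (5 days apart)
print(round(peak_spread, 2), round(peak_close, 2))  # 1.63 vs 1.85
```

The roughly 13% higher combined peak from the closer spacing is qualitatively in line with the jump in measured day-11 IgG levels described above.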
Disease activity
Muscle enzymes remained stable and within normal limits. Patient 1 experienced mild worsening of the CMAS (from 51 to 49), which improved again when switched to fSCIg five days apart. Patient 2 had mild, stable residual disease despite switching to fSCIg five days apart. Patients 3 and 4 maintained clinically inactive disease and kept fSCIg 14 days apart. Patient 5 had resolution of skin disease but mild residual muscle weakness with fSCIg every 14 days which resolved after switching to fSCIg five days apart.
Subcutaneous immune globulin administration and monitoring for adverse effects
During and after fSCIg administration, the patients developed a localized subcutaneous pocket which resolved within 24 h. One patient (Patient 5) had transient mild headaches following administration of SCIg and a minor local site reaction. None of the patients required premedications (which they had all required while receiving IVIg). There were no incidences of treatment abortion due to adverse effects. In the three patients who received fSCIg both five and 14 days apart, both regimens were tolerated equally well. There was no evidence of dystrophic calcification at the site of fSCIg administration or elsewhere. All five patients reported that they strongly preferred treatment with fSCIg over IVIg, mostly due to the lack of adverse effects and the avoidance of hospitalization for i.v. therapy.
Conclusions
This is, to our knowledge, the first report on the treatment of severe refractory JDM with high-dose fSCIg. The application of standard preparations of SCIg (without recombinant human hyaluronidase) in moderately severe JDM (up to 1 g/kg and month) and refractory adult DM (up to 0.8 g/kg and month), resulting in improved quality of life, has been reported previously [14-16]. While standard SCIg is typically applied between one and three times weekly (in clinical practice maximally 56 g per month), fSCIg may be applied less frequently, e.g., once monthly in case of primary immunodeficiency, since it shows better bioavailability and in addition allows administration of up to 120 g of Ig per month [13]. We demonstrated that the administration of two doses of fSCIg separated by five days for a total dose of 2 g/kg and month results in peak serum IgG levels similar to those achieved by regular high-dose IVIg treatment but not by conventional SCIg treatment. It has been suggested that high peak serum IgG levels may be necessary for the immunomodulatory efficacy of Ig therapy [10,11]. We were able to avoid premedication before IgG application and used oral corticosteroids sparingly to avoid long-term steroid adverse effects. High-dose fSCIg therapy was well tolerated by our patients, and even when given five days apart, producing high IgG serum levels, we did not observe any relevant local or systemic adverse effects. Specifically, there was no evidence of increased local inflammation or dystrophic calcification at the administration sites and no evidence of aseptic meningitis or severe headaches. While high-dose IVIg treatment typically requires hospitalization and intravenous access, fSCIg can be administered without difficulty at home and may therefore be both time- and cost-saving [13,17].
Our report is limited by the fact that this is a retrospective analysis of a small cohort of patients so that the assessment of treatment efficacy is very limited.
In summary, high-dose fSCIg therapy may be an attractive treatment option for patients with moderately severe or severe refractory JDM as a remission maintenance treatment and potentially also as a remission induction therapy in case of IVIg intolerance, IgA deficiency or difficult venous access, in order to improve quality of life (avoiding hospital admissions) and to reduce cost of treatment. Fig. 1 IgG levels before initiation of Ig therapy, peak and trough serum IgG levels (during high-dose IVIg therapy) and peak levels during SCIg therapy. Values represent mean values and error bars standard deviation (if multiple measurements are available). The dashed lines represent the upper and lower limit of the normal range
Aplastic anemia and paroxysmal nocturnal hemoglobinuria in children and adults in two centers of Northern Greece
Bone marrow failure (BMF) syndromes are a group of various hematological diseases with cytopenia as a main common characteristic. Given their rarity and continuous progress in the field, we aim to provide data on the efficacy and safety of the therapeutic methods, focusing on the treatment of aplastic anemia (AA) and paroxysmal nocturnal hemoglobinuria (PNH). We enrolled consecutive patients diagnosed with BMF in two referral centers of Northern Greece from 2008 to 2020. We studied 43 patients with AA (37 adults and 6 children/adolescents) and 6 with classical PNH. Regarding classical PNH, 4 patients have received eculizumab treatment, with 1/4 presenting extravascular hemolysis. Among the 43 patients with aplastic anemia, PNH clones were detected in 11. Regarding patients who did not receive alloHCT (n=15), 14/15 were treated with ATG and cyclosporine as first line, with the addition of eltrombopag in patients treated after its approval (n=9). With a median follow-up of 16.7 (1.8-56.2) months from diagnosis, 12/14 (85.7%) are alive (4-year OS: 85.1%). AlloHCT was performed in 28 patients. Five patients developed TA-TMA, which did not resolve in 3/5 (all with a pre-transplant PNH clone). With the follow-up among survivors reaching 86.3 (6.3-262.4) months, 10-year OS was 56.9%, independently associated with PNH clones after adjusting for age (p=0.024). In conclusion, our real-world experience confirms that novel treatments are changing the field of BMF syndromes. Nevertheless, there is still an unmet need to personalize algorithms in this field.
Introduction
Bone marrow failure (BMF) syndromes are a group of various hematological diseases that have a main common characteristic: the cytopenia of one or more blood cell lines resulting in anemia, neutropenia and/or thrombopenia (1). Bone marrow failure syndromes can be divided into two main categories, acquired and congenital disorders (2). Acquired syndromes represent an abnormal immune response to an external factor such as infections (mostly viral), drugs or chemicals, and usually affect all three lines, causing pancytopenia. On the other hand, inherited bone marrow failure syndromes (IBMF) occur due to mutations in the hematopoietic stem cell or other progenitor cells (3). The most common congenital disorders (1) either cause pancytopenia, such as Fanconi anemia (4) and dyskeratosis congenita (5), or primarily affect one lineage, such as Shwachman-Diamond syndrome (6), Diamond-Blackfan anemia (7), Kostmann syndrome and congenital amegakaryocytic thrombocytopenia (8). They also feature a predisposition for congenital malformations and progression to myelodysplasia, acute leukemias and solid tumors (9). Due to presentation variability, increased awareness and continuous follow-up are always needed, even though acquired BMF syndromes are more frequent in both adults and children (10).
Aplastic anemia (AA) is the most common acquired BMF syndrome with an incidence of 2 per million in Western countries and up to 6 per million in Asia (11). AA is a diagnosis of exclusion, with hypoplastic MDS and IBMF syndromes being the main conditions that need to be excluded, especially in children aged <10 years. In favor of AA are the presence of peripheral pancytopenia and an 'empty' bone marrow with a lack of dysplasia (12)(13)(14). In the context of acquired BMF syndromes, paroxysmal nocturnal hemoglobinuria (PNH) is a clonal disease caused by a somatic mutation in the PIGA-gene resulting in the deficiency of GPI-anchored proteins (such as CD55 and CD59), which leaves erythrocytes unable to control complement activation (15)(16)(17)(18). This results in chronic intravascular hemolysis, thrombosis in unusual locations and BMF caused by cellular autoimmunity to HSCs (19)(20)(21)(22). Apart from its classical form, which is characterized by a cellular or even hypercellular marrow, a PNH clone can also be detected in up to 70% of patients with AA (23-25). In children, the percentage of PNH clones is much lower and varies between 21% and 53% according to different studies (26, 27). When the PNH clone increases, especially after immunosuppressive therapy, patients may present with classic complications of PNH (28, 29).
There are two main upfront treatments for AA: immunosuppressive treatment (IST), a choice suitable for patients older than 40 years, and allogeneic hematopoietic cell transplantation (HCT) for patients younger than 40 (30-32). The best cell source for HCT remains the bone marrow because it reduces the possibility of Graft-versus-Host Disease (GVHD); the most suitable donor is a matched sibling donor (33-35). If a matched sibling donor is not available, a matched unrelated donor (UD) is searched, while other options are unrelated cord blood and haploidentical transplants (34, 36). HCT from a UD can also be the frontline treatment for pediatric patients (younger than 20 years) (37). On the other hand, complement inhibitors are the main treatment for PNH. Eculizumab (a humanized monoclonal antibody against C5) has been the first-in-class inhibitor (38). Despite its benefits, new C5 inhibitors are being developed, with the second-generation C5 inhibitors ravulizumab and crovalimab (a long-acting anti-C5 monoclonal antibody) being approved and showing non-inferiority to eculizumab (39-42). The results from the development of upstream inhibitors, with the C3 inhibitor pegcetacoplan receiving approval and factor B and D inhibitors being investigated within phase 3 registration trials, are also encouraging (43-46).
Given the rarity of these entities and continuous progress in the field, we aim to provide data considering the efficiency and safety of the therapeutic methods, focusing on the treatment of AA and PNH.
Patient population
We enrolled consecutive patients diagnosed with BMF in two referral centers of Northern Greece from 2008 to 2020: AHEPA Hospital for the pediatric and adolescent population and Papanikolaou Hospital for the adolescent and adult population. Patients diagnosed with hypoplastic myelodysplastic syndrome (MDS) were excluded from the present study, in order to avoid heterogeneity in the study population. All patients were tested for PNH clones using a standardized flow cytometry protocol based on FLAER (fluorescent aerolysin) detection (47). Disease was treated according to ongoing recommendations during each treatment period (33). In particular, patients younger than 40 years with a sibling donor proceeded to upfront alloHCT. For patients older than 40 years, or without a sibling donor, immunosuppression was the first-line therapy. Refractory or relapsed patients proceeded to alloHCT if eligible and with a suitable donor. HLA typing was performed at diagnosis for all patients. Standard of care was similar in both centers, according to current guidelines.
This study was a retrospective chart review, and it was approved by the institutional review board and ethics committee of G. Papanicolaou Hospital. All patients gave written informed consent. The study was conducted in compliance with the Helsinki Declaration.
Standard of care
BMF patients were admitted to neutropenic isolation rooms. According to ongoing protocols, patients received irradiated Red Blood Cell (RBC) and platelet transfusions only at a clinical indication (signs/symptoms of anemia or thrombocytopenia) or at Hemoglobin <7 g/dl or platelet count <10K/mL. GCSF (Granulocyte colony-stimulating factor) was administered in cases of persistent grade 4 neutropenia in patients with signs/symptoms of infection. Routine blood and urine cultures were performed once weekly in hospitalized patients. Broad-spectrum antibiotics were administered according to ongoing protocols, with modification according to cultures. Prophylaxis for Pneumocystis jiroveci, herpes simplex, and Candida spp was also administered in hospitalized patients and in the neutropenic outpatient setting. Patients with PNH received prophylactic anticoagulation as standard practice. Prophylaxis was given to all patients, even those without thrombosis, and was stopped after initiation of complement inhibition.
HCT standards of practice
Conditioning regimens included Cyclophosphamide (50 mg/kg/day for 4 days) and Antithymocyte Globulin (rabbit ATG, Thymoglobulin 2.5 mg/kg/day for 3 days). In patients sensitized with multiple transfusions (48), modifications were performed accordingly: Cyclophosphamide (50 mg/kg/day for 4 days), Fludarabine (30 mg/m2/day for 4 days), and ATG 7.5 mg/kg and 10 mg/kg for sibling and unrelated donors respectively, as previously described (49). GVHD prophylaxis consisted of Methotrexate and Cyclosporine. Cyclosporine was slowly tapered and stopped between 9 and 12 months post-transplant with careful follow-up of blood counts. STR (short tandem repeat) fragment analysis was performed regularly (on days +14, +30, +60, +90) in unfractionated bone marrow for chimerism evaluation. Complete donor chimerism was defined as donor chimerism ≥99% (50). Regarding supportive care, patients were admitted to neutropenic isolation rooms with HEPA filters. Prophylaxis for Pneumocystis jiroveci, herpes simplex, and Candida spp was administered. Patients underwent Cytomegalovirus (CMV) and Epstein-Barr virus (EBV) surveillance using peripheral blood molecular assays (51).
Statistical analysis
Chi-square test, Student's t-test or Mann-Whitney test were used to compare variables. Overall survival (OS) probability was calculated with Kaplan-Meier curves. Variables with p<0.1 in univariate analysis were included in multivariate analysis using Cox proportional hazards. Cumulative incidence with competing-events analysis was calculated with the EZR software (52). Statistical significance was assessed by the Gray test and Fine and Gray regression modeling. The significance level was 0.05, two-tailed.
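The survival figures reported below (e.g. the 4-year OS of 85.1%) come from Kaplan-Meier estimation. As a minimal illustration of how the product-limit estimate works, here is a short, self-contained Python sketch on made-up follow-up data; the authors' actual analysis used Kaplan-Meier curves, Cox models, and the EZR software, not this code.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : follow-up times (e.g. months from diagnosis)
    events : 1 = death observed, 0 = censored at that time
    Returns a list of (time, S(t)) pairs, one per distinct event time.
    """
    pts = sorted(zip(times, events))
    at_risk = len(pts)
    surv = 1.0
    curve = []
    i = 0
    while i < len(pts):
        t = pts[i][0]
        # deaths and total exits (deaths + censorings) at this time point
        deaths = sum(e for tt, e in pts if tt == t)
        exits = sum(1 for tt, _ in pts if tt == t)
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= exits
        i += exits
    return curve
```

For example, `kaplan_meier([3, 5, 8, 12, 12, 20], [1, 0, 1, 1, 0, 0])` drops the survival curve at each observed death (months 3, 8 and 12), while censored patients only shrink the risk set.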
Patient population
In total, we examined 43 patients with severe or very severe AA (37 adults and 6 children/adolescents, Table 1) and 6 adult patients with classical PNH. Regarding cytogenetics, bone marrow samples failed to yield metaphases in 12 patients and normal cytogenetics were detected in 26 patients. Cytogenetic abnormalities were detected in 11 patients: trisomy 6 in 6 patients, trisomy 8 in 4, and monosomy 7 in 1 patient.
Regarding patients with classical PNH, 4 patients have received eculizumab treatment for a median of 5.1 years (range 2.1-8.2). Among them, 3 out of 4 have shown hemoglobin normalization and no transfusion requirements, while the fourth patient presented with extravascular hemolysis and regular transfusion requirements. No adverse event related to eculizumab was noted. Three patients remain under eculizumab treatment. Furthermore, 2 patients received crovalimab under the COMMODORE-1/2 open-label clinical trials: one switched from eculizumab (NCT04434092) and the other one started as a naïve patient (NCT04432584). One patient with classical PNH and history of thrombosis has never consented to receive complement inhibitors and remains with supportive treatment.
Aplastic anemia
Among 43 patients with aplastic anemia, PNH clones were detected in 11 patients (median 4%, range 1-65%), of whom 4 did not proceed to alloHCT. Only one patient was treated with eculizumab, two years after immunosuppressive therapy, due to prominent hemolytic anemia attributed to a PNH clone larger than 30% in neutrophils (65%). The patient achieved hemoglobin normalization. Table 1 summarizes baseline characteristics in patients that received alloHCT compared to those who did not.
Aplastic anemia patients that did not receive alloHCT
Regarding patients that did not receive alloHCT (n=15), 14/15 were treated with rabbit ATG and cyclosporine as first line, with the addition of eltrombopag in patients treated after its approval (n=9). Only one adolescent patient, who was excluded from further analysis, did not receive ATG as first line due to comorbidities (schizophrenia) and poor performance status. The patient received only cyclosporine and steroids for a short period and succumbed due to septic shock.
The median follow-up was 32 (1.8-56.2) months from diagnosis. Grade II-III infections were detected in 3 patients (2/3 bacteremias). One patient died early due to Ps. aeruginosa and Fusarium infection. Another patient was diagnosed with lung cancer 2 years after treatment and succumbed to lung cancer complications. Increased viral loads of CMV and EBV (quantitative PCR) were detected in 5 patients (CMV: 2, EBV: 5) and gradually decreased without treatment. Other toxicities rated as grade ≥II were: hepatotoxicity (7 patients), nephrotoxicity (3 patients), and polyneuropathy (1 patient). Twelve patients became transfusion independent for RBCs in a median period of 85 days (range 10-200) and for PLTs in a median period of 62 days (range 17-187), while the absolute neutrophil count reached the >500 per cubic millimeter threshold in a median of 55 days (range 9-150). PNH clones were not associated with poor response in patients not receiving alloHCT. Complete response (CR) was achieved in 7/14 (50%) and partial response (PR) in 30% of patients, with an overall response rate (ORR) of 80%. There was no significant difference between patients with or without eltrombopag. Two adult patients with PR relapsed within 40 months from initial treatment, while one pediatric patient progressed to AML and received chemotherapy. In total, 12/14 (85.7%) are alive (4-year OS: 85.1%, Figure 1A). It should be highlighted that these results include patients that did not undergo alloHCT either because they had no indication or no suitable donor at any time-point.
Aplastic anemia patients that underwent alloHCT
AlloHCT was performed in 28 patients, upfront in 12/28. Bone marrow grafts (25/28) and sibling donors (19/28) were preferred when available. Engraftment was evident at day 13 post-transplant (range 12-21) for neutrophils and day 39 (16-121) for platelets. Complete donor chimerism was achieved in all patients. No graft rejection or failure was observed. Five patients developed transplant-associated thrombotic microangiopathy (TA-TMA), which did not resolve in 3/5 (all with a pre-transplant PNH clone). With follow-up among survivors reaching 86.3 (6.3-262.4) months, 10-year OS was 56.9%, independently associated with PNH clones after adjusting for age (p=0.024) (Figure 1B, Table 2). Figure 1 presents the comparison of OS from disease diagnosis in patients that received alloHCT (upfront or after immunosuppressive treatment) and those who did not (p=0.877). No secondary malignancies or fatal long-term complications were documented.
Discussion
Our study reflects the clinical spectrum of BMF, presenting several challenges in the real-world setting. Interestingly, AA was the most common diagnosis, with PNH clones being detected in many patients. Complement inhibition treatment has revolutionized the field, providing safety and efficacy in treated patients, with or without AA. AlloHCT was also safe and effective. Nevertheless, the presence of a PNH clone had an independent negative impact on survival post alloHCT.
Complement inhibition with eculizumab can indeed be efficient in patients with PNH clones, regardless of the existence or not of AA (53). As reflected by our rather small patient population, approximately 25% of PNH patients develop extravascular hemolysis. Since novel complement inhibitors are under advanced clinical development, these patients may benefit from upstream complement inhibition, such as pegcetacoplan, which is currently FDA approved (54). Beyond novel complement inhibitors, the role of complement inhibitors in the transplant setting also remains to be clarified. Although PNH clones are common in patients with aplastic anemia, only a few recent reports have considered the presence of PNH clones (55). Previous real-world reports have not taken this issue into consideration (56, 57). Recently, DeZern et al. reported successful outcomes with eculizumab bridging before alloHCT in 8 severe/very severe aplastic anemia (SAA) patients (58). In addition, two recent studies have also explored outcomes of patients with PNH clones in the age of eculizumab (59, 60). Although both studies presented the potential benefits of eculizumab post alloHCT in 8 and 2 patients respectively, there was no clear comparison with a historical control group that did not receive eculizumab (59, 60). This comparison would clarify the role of complement inhibition, given that alloHCT mortality in SAA patients with PNH clones has been reported at approximately 30% (61).
PNH has been traditionally considered a negative predictor after alloHCT due to the heterogeneity of clinical presentations and severe signs of hemolysis and thrombocytopenia (62). Thrombosis is the major cause of death in PNH patients (63). Despite the multifactorial nature of thrombosis in PNH, complement inhibition seems to block this vicious cycle (63). However, early prediction of thrombotic or cardiovascular risk is not yet feasible, because little is known about patients post alloHCT (64). Interestingly, the incidence of TA-TMA in this cohort (18%) is similar to the incidence of 16% previously reported among all patients receiving alloHCT in our center, irrespective of indication (64).
In the group of SAA patients without a PNH clone, our results were comparable to those recently reported by the Aplastic Anemia Working Party of the European Group for Blood and Marrow Transplantation with the use of sibling or unrelated donors (34). Most recent reports have documented a risk of graft rejection/failure ranging from 3% to 33% (55). In our cohort, there was no graft rejection/failure. The use of fludarabine in the conditioning regimen along with ATG might have contributed to this result (65).
Our study is limited by its retrospective nature, the rather small number of participants and experience from two centers. In contrast, it reflects the local epidemiology from Northern Greece and reports data from the pediatric and adult hematology and BMT centers located in this area. In addition, our study was performed with both sibling and unrelated donors, since expansion of the donor pool using alternative donors remains currently under consideration as an alternative option for those patients (66, 67). It should be noted however that this study was conducted according to standard operating procedures with a long-term follow-up despite difficulties during the COVID-19 period (68).
In conclusion, our real-world experience confirms that novel treatments have revolutionized the field of BMF syndromes. Nevertheless, further studies are needed to personalize algorithms in the era of precision medicine.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
Ethics statement
The studies involving human participants were reviewed and approved by Institutional Review Board of G Papanicolaou Hospital, Exochi, Thessaloniki, Greece. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
All authors have made contributions to the writing and design of the manuscript, collection or analysis of the data and drafting the article or revising it critically for important intellectual content. All authors have read and agreed to the published version of the manuscript.
Conflict of interest
EG has received honoraria from Alexion and Omeros Pharmaceuticals.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
2022-11-12T16:22:16.781Z
|
2022-11-10T00:00:00.000
|
{
"year": 2022,
"sha1": "b6084db66ecac319642b616d37a496fc7b53bdff",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.947410/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cea22352e1fdb565f61145c7cf1fd5f0e1d0e6bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
266655281
|
pes2o/s2orc
|
v3-fos-license
|
Robot-assisted radical nephrectomy with inferior vena cava thrombectomy: a case report
Background Recently, robot-assisted surgery has been widely used to treat several urological cancers. Robot-assisted radical nephrectomy (RARN) was approved by the health insurance system in April 2022; however, RARN with inferior vena cava tumor thrombectomy (IVCTT) is still challenging. Also, its safety and feasibility have not yet been established owing to lack of literature, especially in Japan. Case Description We performed RARN with IVCTT in four patients between April 2022 and March 2023 at Fujita Health University Hospital. To reduce the risk of tumor embolism and major hemorrhage, an “IVC-first, kidney-last” robotic technique was developed. The safety and feasibility of RARN with IVCTT were evaluated by assessing the perioperative outcomes. Three women and one man were enrolled in this study. The median age was 72 years, and the tumor was on the right side in all cases. According to the Mayo Clinic thrombus classification, two patients were classified as level I, and the others were classified as level II. The two patients at level I did not undergo presurgical treatments, whereas the others at level II underwent presurgical treatments, which were combinations of tyrosine kinase inhibitors and immune-checkpoint inhibitors. The median operation and console times were 341 and 247 min, respectively. The median bleeding volume was 577 mL, and no complications beyond grade III of the Clavien-Dindo classification were observed. The median length of postoperative hospital stay was 10 days. Conclusions Although the sample size was relatively small, we demonstrated the safety and feasibility of RARN with IVCTT in the Japanese population.
Introduction
Inferior vena cava (IVC) thrombus occurs in approximately 6-10% of renal cell carcinoma (RCC) cases (1). Radical nephrectomy (RN) with IVC tumor thrombectomy (TT) using open surgery has remained the gold standard for the treatment of RCC with an IVC thrombus (2) since the first report by Skinner et al. in 1972 (3). There have been a few reports on RN with IVC tumor thrombectomy (IVCTT) using a laparoscopic approach (4,5). However, the application of the laparoscopic approach for RN with IVCTT is limited because of the complexity of the operation and potentially fatal complications. With the widespread adoption of robot-assisted surgery, Abaza et al. first performed robot-assisted RN (RARN) with IVCTT (6). Recently, Garg et al. reported that when experienced surgeons performed RARN with IVCTT in carefully selected patients, acceptable outcomes could be obtained, according to a systematic review and meta-analysis of perioperative outcomes (7).
In Japan, RARN was approved by the health insurance system in April 2022. Motoyama et al. reported the first successful treatment with RARN with IVCTT (8). However, because RARN with IVCTT is currently performed by only a few well-experienced urologists in a limited number of high-volume centers, owing to its high surgical complexity and variation, its safety remains unknown, especially in Japan. In this study, we evaluated the safety and feasibility of RARN with IVCTT by assessing perioperative outcomes in a few initial cases. We present this case in accordance with the CARE reporting checklist (available at https://tcr.amegroups.com/article/view/10.21037/tcr-23-855/rc).
Case presentation
We performed RARN with IVCTT in four patients between April 2022 and March 2023 at Fujita Health University Hospital. The patients' characteristics, including age, sex, body mass index (BMI), and American Society of Anesthesiologists (ASA) score, were recorded preoperatively. Clinical disease characteristics included the tumor side, metastatic disease, and presurgical treatment. Levels of IVC thrombi were denoted using Roman numerals and classified according to the Mayo Clinic thrombus classification (9). Surgical parameters included the surgical approach, surgical time, console time, estimated blood loss (EBL), excised weight, negative surgical margins, thromboembolism, need for anticoagulation, grade of complications [Clavien-Dindo (CD)] (10), pathology, postoperative hospital stay, and hospital stay.
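The thrombus levels referenced throughout (levels I-IV) follow the Mayo Clinic classification cited as reference (9). For readers keeping track of the levels, the mapping can be written down explicitly; note that the level descriptions in this sketch are a paraphrase of the commonly quoted Neves-Zincke/Mayo scheme from the wider literature, not wording taken from this report.

```python
# Paraphrased Mayo Clinic (Neves-Zincke) tumor-thrombus levels; treat the
# descriptions as illustrative, not as definitions from this case report.
MAYO_LEVELS = {
    "0":   "thrombus limited to the renal vein",
    "I":   "IVC thrombus extending no more than 2 cm above the renal vein",
    "II":  "IVC thrombus more than 2 cm above the renal vein but below the hepatic veins",
    "III": "IVC thrombus reaching the intrahepatic IVC, below the diaphragm",
    "IV":  "IVC thrombus extending above the diaphragm or into the right atrium",
}

def describe(level: str) -> str:
    """Look up the description for a Roman-numeral thrombus level."""
    return MAYO_LEVELS[level.upper()]
```

Under this scheme, the two presurgically treated cases moved from the "II" bucket to the "I" bucket before surgery.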
The characteristics of the patients, including age, sex, BMI, IVC thrombus level at diagnosis, metastases, and presurgical treatment, are shown in Table 1. Among the two cases with presurgical treatment, one was administered avelumab plus axitinib for 8 months, while the other was administered pembrolizumab plus lenvatinib for 7 months. In both cases, the IVC thrombus decreased from level II to level I before surgery.
Before surgery, all patients underwent unenhanced abdominal computed tomography (CT) and four-phase dynamic contrast-enhanced CT examinations using ultra-high-resolution CT to construct three-dimensional images for intraoperative navigation. IVC filters were not placed in any case. All RARN with IVCTT procedures were performed using the da Vinci Xi Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) by four surgeons who completed the Japan-approved da Vinci certification program.
The patients were positioned in a modified left lateral decubitus position with flank elevation. Three robotic ports and one camera port were placed on the lateral side of the rectus abdominis muscle. The placement of a 5 or 12 mm assistant port is shown in Figure 1. In summary, for the RARN procedure with IVCTT, the caudal IVC, cephalic IVC, and left renal vein were secured using twice-wrapped vessel loops after exposing the bilateral renal veins and IVC (Figure 2A). The lumbar veins draining into the IVC were dissected to avoid backflow in all four cases. The right renal artery was clamped or dissected in the region between the IVC and the aorta. The position of the IVC tumor thrombus was visualized using a laparoscopic ultrasound probe to identify its upper limit (Figure 2B). The caudal IVC, left renal vein, and cephalic IVC were clamped using twice-wrapped vessel loops and bulldogs from the caudal side. Subsequently, the IVC wall was cut, and the thrombus was removed along with the right renal vein (Figure 2C). IVC reconstruction was performed using 4-0 polypropylene (Figure 2D). After IVC reconstruction, the cephalic IVC, left renal vein, and caudal IVC were released from the cranial side. RARN was completed after removal of the right adrenal gland. Systemic heparinization was not performed before IVC clamping; however, diluted heparin was injected into the IVC at the end of IVC reconstruction.
Perioperative factors, including ASA score, IVC thrombus level at operation, surgical approach, surgical time, console time, EBL, excised weight, negative surgical margins, thromboembolism, need for anticoagulation, pathology, complications (CD ≥3), postoperative hospital stay, and hospital stay, are shown in Table 2. In all cases, no thromboembolism occurred and anticoagulation was not needed.
All procedures performed in this study were in accordance with the ethical standards of the institutional research committee and with the Helsinki Declaration (as revised in 2013). The Fujita Health University Ethics Review Committee approved this study (No. HM19-265) and waived patient consent due to the retrospective nature of the study.
Discussion
Open RN (ORN) with IVCTT remains the gold standard treatment for RCC with IVC tumor thrombi. However, recent advances in minimally invasive robot-assisted surgery have enabled urologists to perform RARN with IVCTT. Robotic surgeries often provide benefits over open surgery (e.g., less pain, smaller incisions, easier recovery); however, these advantages vary depending on the difficulty of the surgery. Garg et al. performed a systematic review to assess the safety and feasibility of RARN with IVCTT regarding perioperative outcomes and compared these outcomes with those of ORN. Compared to ORN, RARN with IVCTT was associated with a lower blood transfusion rate, fewer overall complications, and a shorter hospital stay. They concluded that RARN with IVCTT appeared to be safe and feasible with acceptable perioperative outcomes when performed by well-experienced urologists in carefully selected patients (7).
In Japan, only a short time has passed since RARN was approved by the health insurance system in April 2022. In the context of RARN with IVCTT, no studies have been conducted since the first report by Motoyama et al. (8). Accordingly, the safety of RARN with IVCTT still remains unknown, particularly in Japan.
In the present study, we performed RARN with IVCTT in four patients. In all cases, an IVC filter was not used as presurgical treatment. So far, several investigators have advocated indications for the use of IVC filters (IVCFs). Some investigators have shown that preoperative filter placement could complicate proximal surgical control and tumor thrombus removal (11), whereas others have shown that preoperative placement involves incorporation of the tumor into the filter (12,13). A Cochrane Database review completed in 2010 stated that no recommendation could be made regarding the use of IVCFs (14).
No significant intraoperative or postoperative complications occurred in any of the patients, resulting in satisfactory perioperative outcomes. Notably, all RARN with IVCTT procedures were performed at level I of the Mayo Clinic thrombus classification, which was considered a reason for the satisfactory perioperative outcomes. In two cases, the IVC thrombus decreased from level II to level I owing to presurgical treatments, which were combinations of tyrosine kinase inhibitors and immune-checkpoint inhibitors (ICIs). Dason et al. have reported that significant extrarenal disease, excessive surgical morbidity, poor performance status unrelated to the IVC thrombus, and patient preference are relative indications for presurgical treatment (2). Other studies have shown that immediate cytoreductive nephrectomy (CN) for metastatic RCC (mRCC) is currently considered only for a limited number of patients, while deferred CN could be applied in a larger patient population that has favorably responded to systemic therapy (15). In the ICI era, a small number of case reports and case series have described deferred CN for patients with mRCC who achieved complete response (CR) or nearly CR (16)(17)(18)(19)(20). Pignot et al. concluded that delayed CN in patients who responded to ICI treatment provided promising oncological outcomes, and most patients could discontinue systemic treatment (20). However, from a surgical perspective, ICI-based combination therapy results in a severe desmoplastic reaction, which increases perinephric adhesions and inflammation, thus increasing surgical complexity (21). Accordingly, ongoing prospective studies, such as PROBE and NORDIC-SUN, will better define the role of CN in the rapidly evolving treatment landscape of mRCC in combination with ICI-based systemic therapy.
In contrast, RARN with IVCTT for thrombi above level II has been among the most challenging urologic-oncologic surgeries and has been reported only in limited series (22)(23)(24)(25)(26). Complete mobilization of the liver and placement of a tourniquet on the suprahepatic infradiaphragmatic IVC proximal to the thrombus are needed in the management of a level III tumor thrombus. Moreover, the management of a level IV tumor thrombus using a robotic approach is an evolving technique. Hui et al. reported the use of thoracoscopic isolation and occlusion of the supradiaphragmatic IVC (24). Some studies have reported that RARN with IVCTT is feasible even for thrombi above level II; however, these procedures are high-risk and require advanced robotic technique. To maximize intraoperative safety and the chances of success, careful patient selection and a highly experienced robotics team are essential. Considering the results of ongoing prospective studies regarding the role of CN in the rapidly evolving treatment landscape for mRCC with combination ICI-based systemic therapy, RARN with IVCTT should be selected carefully, especially in Japan, where these procedures have only recently been introduced.
Conclusions
Favorable perioperative outcomes were obtained in four patients who underwent RARN with IVCTT.Although the sample size was relatively small, we demonstrated the safety and feasibility of RARN with IVCTT in the Japanese population.
Footnote
Provenance and Peer Review: This article was commissioned by the Guest Editor (Takuya Koie) for the series "Current
Figure 1
Figure 1 Port placement for robot-assisted radical nephrectomy with inferior vena cava tumor thrombectomy.
Figure 2
Figure 2 Key pictures of robot-assisted radical nephrectomy with IVC tumor thrombectomy. (A) Securing of the caudal IVC, cephalic IVC, and left renal vein by twice-wrapped vessel loops. (B) Visualization of IVC tumor thrombus (arrow) using a laparoscopic ultrasound probe. (C) Removal of IVC thrombus (arrow) along with the right renal vein. (D) IVC reconstruction with 4-0 polypropylene. IVC, inferior vena cava.
Table 2
Perioperative outcomes
|
2023-12-31T16:18:37.899Z
|
2023-12-01T00:00:00.000
|
{
"year": 2023,
"sha1": "c5c4fc88689b08d613d920c9fb0199592999692a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.21037/tcr-23-855",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "02b5d12e2e076092b1c1a7c63501f83fc03be4eb",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
250283959
|
pes2o/s2orc
|
v3-fos-license
|
Ultrasensitive Quantification of Multiple Estrogens in Songbird Blood and Microdissected Brain by LC-MS/MS
Abstract Neuroestrogens are synthesized within the brain and regulate social behavior, learning and memory, and cognition. In song sparrows, Melospiza melodia, 17β-estradiol (17β-E2) promotes aggressive behavior, including during the nonbreeding season when circulating steroid levels are low. Estrogens are challenging to measure because they are present at very low levels, and current techniques often lack the sensitivity required. Furthermore, current methods often focus on 17β-E2 and disregard other estrogens. Here, we developed and validated a method to measure four estrogens [estrone (E1), 17β-E2, 17α-estradiol (17α-E2), estriol (E3)] simultaneously in microdissected songbird brain, with high specificity, sensitivity, accuracy, and precision. We used liquid chromatography tandem mass spectrometry (LC-MS/MS), and to improve sensitivity, we derivatized estrogens using 1,2-dimethylimidazole-5-sulfonyl-chloride (DMIS). The straightforward protocol improved sensitivity by 10-fold for some analytes. There is substantial regional variation in neuroestrogen levels in brain areas that regulate social behavior in male song sparrows. For example, the auditory area NCM, which has high aromatase levels, has the highest E1 and 17β-E2 levels. In contrast, estrogen levels in blood are very low. Estrogen levels in both brain and circulation are lower in the nonbreeding season than in the breeding season. This technique will be useful for estrogen measurement in songbirds and potentially other animal models.
Introduction
The brain can locally produce steroids, either de novo from cholesterol or from conversion of circulating precursors (Schlinger et al., 1999;Schmidt et al., 2008). The brain expresses all the necessary enzymes for steroid synthesis (Tsutsui, 2011) in a region-specific manner (Soma et al., 2003;Wacker et al., 2010). Brain-derived steroids, known as neurosteroids, were first characterized in rodents (Baulieu, 1991;Mellon et al., 2001;Hojo and Kawato, 2018), and later in other vertebrates such as birds (Schmidt et al., 2008;Tsutsui, 2011;Schlinger, 2015). In particular, estrogens are produced from androgens by the aromatase enzyme, which exhibits high activity in specific brain regions (Naftolin and Ryan, 1975). Bioactive estrogenic metabolites such as catechol estrogens are also locally synthesized within the brain, although their functions are less clear (Fowke et al., 2003;Denver et al., 2019b).
High neuroestrogen production occurs in songbirds, and seasonal changes in local estrogen production affect aggressive behavior. The song sparrow, Melospiza melodia, which is common along the Pacific Coast of North America, is an excellent model for investigating neurosteroid production and the regulation of territorial aggression (Patten and Pruett, 2009; Wacker et al., 2010). Males exhibit aggression during breeding (spring) and nonbreeding (autumn) seasons but not during molt (late summer; Wingfield and Hahn, 1994; Soma et al., 2003). Circulating levels of testosterone are high during the breeding season but very low during the nonbreeding season. In the nonbreeding season, castration does not reduce aggression, but inhibition of aromatase does reduce aggression (Wingfield, 1994; Soma et al., 2000a,b). Administration of 17β-estradiol (17β-E2) increases aggression in nonbreeding males (Heimovics et al., 2015). Social behavior, including aggression, is regulated by the social behavior network (SBN). The SBN expresses steroidogenic enzymes as well as sex steroid receptors, indicating that this circuit is both steroid-synthetic and steroid-sensitive (Newman, 1999; Goodson, 2005). In song sparrows, aromatase is highly expressed in the SBN and varies seasonally (Soma et al., 2003; Wacker, 2019). Altogether, these data suggest that seasonal changes in neuroestrogen synthesis contribute to seasonal changes in territorial aggression in song sparrows (Jalabert et al., 2018; Quintana et al., 2021). To measure estrogens in specific regions of the SBN, a highly specific and sensitive method is required.
Estrogens can be measured by liquid chromatography tandem mass spectrometry (LC-MS/MS), a highly specific and sensitive technique that allows for simultaneous measurement of multiple analytes. Immunoassays can suffer from cross-reactivity with structurally similar steroids (Faqehi et al., 2016). Estrogen measurement with mass spectrometry can be impeded by poor ionization efficiency, but this problem can be ameliorated by derivatization, which adds an easily ionized group or charged moiety to the analyte of interest (Faqehi et al., 2016; Denver et al., 2019b). Dansyl chloride is a commonly used derivatization reagent for estrogens. However, with dansyl chloride, the product ions are not analyte-specific because they are solely produced from the dansyl chloride moiety (Xu and Spink, 2008; Li and Franke, 2015). In addition, the sensitivity increase with dansyl chloride is not sufficient to measure estrogens in microdissected brain (C. Jalabert and K. K. Soma, unpublished results). In contrast, 1,2-dimethylimidazole-5-sulfonyl-chloride (DMIS) derivatization is estrogen-specific, generates analyte-specific product ions, produces lower background values, and yields greater sensitivity in measuring 17β-E2 and estrone (E1).
Field procedures
Song sparrows are widespread and abundant throughout North America (especially near Vancouver) and their conservation status is of least concern, according to the IUCN red list. Free-living adult male song sparrows were captured in the nonbreeding season (October 26 to November 8, 2018, n = 11) and breeding season (April 9 to April 24, 2019, n = 10). Another four animals were captured for method validation. Subjects were captured near Vancouver using a mist net and conspecific song playback for a maximum of 5 min (breeding: 1.6 ± 0.5 min, nonbreeding: 1.6 ± 0.6 min; p = 0.51), to avoid effects of song playback on steroid levels. Immediately after capture, the subject was rapidly and deeply anesthetized with isoflurane and then rapidly decapitated. There was a maximum of 3 min between capture and euthanasia (breeding: 2.6 ± 0.2 min, nonbreeding: 2.2 ± 0.3 min; p = 0.23), to avoid effects of handling on steroid levels. The brain was immediately collected and snap frozen on powdered dry ice. Trunk blood was collected in heparinized microhematocrit tubes (Fisher Scientific) that were kept on ice packs until return to the laboratory within 5 h.
Once in the laboratory, blood was divided into two aliquots. One half of the blood sample was frozen. The other half of the blood sample was centrifuged, and then plasma was collected and frozen. All samples were stored at -70°C until steroids were extracted. Blood and plasma samples were used to measure circulating levels of estrogens. Plasma overestimates circulating steroid levels, and therefore blood was used as a more accurate estimate of circulating steroid levels and to compare to brain steroid levels (Taves et al., 2010, 2011, 2015).

Figure 1. The panel of estrogens that are measured in this study. Chemical structures of E1, 17β-E2, 17α-E2, E3, 2OH-E2, 4OH-E2, 2Me-E2, and 4Me-E2. Similarly, E1 can be hydroxylated at the two or four positions by CYP1A1 and CYP1B1, respectively, and then methylated by the COMT enzyme to produce 2-methoxyestrone and 4-methoxyestrone. E1, estrone; 17β-E2, 17β-estradiol; 17α-E2, 17α-estradiol; E3, estriol; 2OH-E2, 2-hydroxyestradiol; 4OH-E2, 4-hydroxyestradiol; 2Me-E2, 2-methoxyestradiol; 4Me-E2, 4-methoxyestradiol.
All procedures were in compliance with the Canadian Council on Animal Care and protocols were approved by the Canadian Wildlife Service and the University of British Columbia Animal Care Committee.
Frozen sections were microdissected using a stainless-steel biopsy punch tool (Integra Miltex biopsy punch tool, 1-mm diameter, tissue wet weight 0.245 mg per punch). One punch (centered at the midline for one section) was collected containing the NAc, ventral to the area X. Four punches (two per side for two serial sections) were collected containing the POA in two sections caudal to the last section containing the tractus septopalliomesencephalicus and rostral to the AH. Four punches (two per side for two serial sections) were collected for the AH, immediately caudal to POA sections and ventral to the anterior commissure (CoA). Four punches (two per side for two serial sections) were collected for the LS and BnST, dorsal to the CoA. The LS was collected medial to the lateral ventricles, and the BnST was collected at the tip of each lateral ventricle. Four punches (two per side for two serial sections) containing VMH were collected ventral to the AH. Four punches (two per side for two serial sections) containing the VTA were collected ventrolateral to the oculomotor nerve. Four punches (two per side for two serial sections) containing the CG were collected ventral to the posterior commissure. Six punches (two per side for three serial sections) containing the NCM were collected starting at the last appearance of the CoA and tractus occipito-mesencephalicus path from the ventromedial telencephalon. Six punches (two per side for three serial sections) containing the TnA were collected immediately caudal to the disappearance of the CoA and tractus occipito-mesencephalicus. Six punches (two at the midline for three serial sections) containing the Cb were collected starting at its first appearance. Punches were expelled into 2-ml polypropylene tubes (Sarstedt AG & Co, 72.694.007) that each contained five zirconium ceramic oxide beads (1.4-mm diameter). Punches were then stored at -70°C until further processing.
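For bookkeeping, the punch counts above determine how much tissue enters each extraction. A minimal sketch (region names and punch counts taken from the protocol above; the helper function is purely illustrative):

```python
# Estimate wet tissue mass per brain region from punch counts.
# Each 1-mm-diameter punch weighs ~0.245 mg (wet weight, per the text above).
MASS_PER_PUNCH_MG = 0.245

# Punch counts per region, from the microdissection protocol
punches = {
    "NAc": 1, "POA": 4, "AH": 4, "LS_BnST": 4, "VMH": 4,
    "VTA": 4, "CG": 4, "NCM": 6, "TnA": 6, "Cb": 6,
}

def region_mass_mg(region: str) -> float:
    """Approximate wet tissue mass collected for one region."""
    return punches[region] * MASS_PER_PUNCH_MG

for region in ("NAc", "POA", "NCM"):
    print(f"{region}: {region_mass_mg(region):.3f} mg")
```

With six punches, the NCM sample is about 1.47 mg, consistent with the 1-2 mg of microdissected tissue the Discussion says was validated per region.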
Reagents
High performance liquid chromatography (HPLC)-grade acetone, acetonitrile, hexane, and methanol were from Fisher Chemical. Here, we used DMIS for derivatization (Fig. 3). Note that we did not use the isomer 1,2-dimethylimidazole-4-sulfonyl chloride, which has been used for estrogen derivatization but yields fragmentation that is not analyte-specific (Xu and Spink, 2007). Dry DMIS (Apollo Scientific, lot #AS478881, CAS #849351-92-4) was stored at 4°C under nitrogen gas and protected from light and moisture. Then, dry DMIS was aliquoted and stored at 4°C (protected from light and moisture but not under nitrogen gas) for up to 12 months (time of storage did not affect dry DMIS stability). Acetone was added to individual aliquots of DMIS on the day of derivatization to prepare a fresh DMIS solution at 1 mg/ml. Sodium bicarbonate buffer (50 mM, pH 10.5) was prepared in Milli-Q water.
Stock solutions were prepared in HPLC-grade methanol. Certified reference standards of E1, 17β-E2, and E3 were obtained from Cerilliant. 17α-E2, 2Me-E2, 4Me-E2, and 4OH-E2 were obtained from Steraloids. Calibration curves were prepared in 50% methanol. The calibration curve ranged from 0.01 to 20 pg per tube for E1, 17β-E2, 17α-E2, E3, 2Me-E2, and 4Me-E2, and from 0.1 to 200 pg per tube for 4OH-E2. The catechol estrogens, 2OH-E2 and 4OH-E2, displayed the same fragmentation patterns and retention times after DMIS derivatization and thus were indistinguishable using our assay. As a result, we only included 4OH-E2 in our calibration curve. 17α-E2 showed the same fragmentation pattern as 17β-E2, and the retention time only differed by ~0.14 min (causing the peaks to overlap). Therefore, we included 17α-E2 in a separate calibration curve. Internal standard (IS) stock solution of 17β-E2-2,4,16,16-d4 (17β-E2-d4; C/D/N Isotopes, catalog #D-4318, CAS #66789-03-5) was prepared in methanol and further diluted with 50% methanol to a final working solution of 40 pg/ml.
Steroid extraction
Steroids were extracted from brain tissue (sample amount detailed above for each brain region), blood (20 µl), and plasma (20 µl) as before. One milliliter of acetonitrile was added to all samples, and 50 µl (i.e., 2 pg) of IS 17β-E2-d4 was added to all samples except "double blanks." Samples were then homogenized using a bead mill homogenizer (Omni International Inc.) at 4 m/s for 30 s. Samples were then centrifuged at 16,100 × g for 5 min, and 1 ml of supernatant was taken from each sample and placed into a borosilicate glass culture tube (12 × 75 mm) that had been cleaned with methanol. After the addition of 500 µl of hexane, tubes were vortexed and centrifuged at 3200 × g for 2 min. Hexane was removed and discarded, and extracts were dried at 60°C for 45 min in a vacuum centrifuge (ThermoElectron SPD111V; Thermo Fisher Scientific). Calibration curves, quality controls (QCs), blanks, and double blanks were prepared alongside samples. Underivatized standards, in which acetone was added without DMIS, were prepared in parallel to measure any underivatized estrogens and calculate derivatization reaction efficiency (94-100% for all estrogens).
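The reaction efficiency quoted above (94-100%) comes from comparing the underivatized estrogen remaining against the amount originally spiked. A hedged sketch of one plausible way to express that calculation (the study does not spell out its exact formula, so this formulation is an assumption):

```python
def derivatization_efficiency(underivatized_pg: float, total_pg: float) -> float:
    """Percent of standard converted to the DMIS derivative.

    One plausible formulation (an assumption, not the paper's stated formula):
    the underivatized estrogen remaining after the reaction, relative to the
    amount originally spiked.
    """
    if total_pg <= 0:
        raise ValueError("total_pg must be positive")
    return 100.0 * (1.0 - underivatized_pg / total_pg)

# e.g., 0.5 pg left underivatized out of a 10 pg standard -> about 95%,
# within the 94-100% range reported above
print(derivatization_efficiency(0.5, 10.0))
```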
DMIS derivatization
Derivatization was based on previous studies (Keski-Rahkonen et al., 2015; Handelsman et al., 2020). Here, the protocol was slightly modified to reduce reagent evaporation. Dried extracts were immersed in an ice bath, then samples were reconstituted with 30 µl of sodium bicarbonate buffer (50 mM, pH 10.5), briefly vortexed, and 20 µl of 1 mg/ml DMIS in acetone was added. Samples were then vortexed and centrifuged at 3200 × g for 1 min before being transferred to glass LC-MS vial inserts placed in LC-MS vials (Agilent). Vials were capped to prevent evaporation during incubation for 15 min at 60°C. This was followed by a cooling period of 15 min at 4°C. Samples were centrifuged at 3200 × g for 1 min, and then stored at -20°C for no more than 24 h before steroid analysis.
Steroid analysis by LC-MS/MS
Steroids were quantified using a Sciex 6500 Qtrap UHPLC-MS/MS system. Samples were transferred into a refrigerated autoinjector (15°C). Then, 35 µl from each sample were injected into a Nexera X2 UHPLC system (Shimadzu Corp.), passed through a KrudKatcher ULTRA HPLC In-Line Filter (Phenomenex) and then an Agilent 120 HPH C18 guard column (2.1 mm), and separated on an Agilent 120 HPH C18 column (2.1 × 50 mm; 2.7 µm; at 40°C) using 0.1 mM ammonium fluoride in Milli-Q water as mobile phase A (MPA) and methanol as mobile phase B (MPB). The flow rate was 0.4 ml/min. During loading, MPB was held at 10% for 1.6 min; from 1.6 to 4 min, MPB was stepped to 42% and then ramped to 60% by 9.4 min. From 9.4 to 11.9 min, MPB was ramped from 60% to 98%, and a column wash was performed from 11.9 to 13.4 min at 98% MPB. MPB was then returned to the starting condition of 10% for 1.5 min. Total run time was 14.9 min. The needle was rinsed externally with 100% isopropanol before and after each sample injection.
We used two multiple reaction monitoring (MRM) transitions for each estrogen and one MRM transition for the deuterated IS (Table 1). Steroid concentrations were acquired on a Sciex 6500 Qtrap triple quadrupole tandem mass spectrometer (Sciex LLC) in positive electrospray ionization mode for all derivatized estrogens and negative electrospray ionization mode for underivatized estrogens (Table 1). All water blanks were below the lowest standard on the calibration curves.
Stability of IS
Deuterated IS can potentially experience hydrogen-deuterium exchange (Kwok et al., 2008; Viljanto et al., 2018), so we tested for possible alterations of 17β-E2-d4 caused by the derivatization procedure. We compared the mass spectra of 17β-E2-d4 directly from the stock solution, after sham derivatization (resuspension in buffer and acetone followed by incubation for 15 min at 60°C, without DMIS), or after derivatization (resuspension in buffer and DMIS in acetone followed by incubation for 15 min at 60°C). In addition, we tested for effects of heating on the IS by comparing the mass spectra of 17β-E2-d4 resuspended in buffer and acetone either incubated for 15 min at 60°C or not incubated. We also examined unlabeled 17β-E2 either directly from the stock solution or after derivatization (resuspension in buffer and DMIS in acetone followed by incubation for 15 min at 60°C). All samples were prepared at 10 µg/ml for infusion at 7 µl/min using a syringe pump, and all other LC-MS/MS parameters were identical to those described in the previous section.
For nonderivatized samples, we evaluated the quadrupole 1 (Q1) ions of 17β-E2 (271 m/z) and 17β-E2-d4 (275 m/z).

Assay accuracy and precision

Assay accuracy was determined by measuring QCs containing known amounts of estrogens (0.5 and 2 pg for all estrogens, except for the catechol estrogens, where 5 and 20 pg were used) in neat solution. Precision was determined from both intra-assay and interassay variation by calculating the coefficient of variation of QCs. The acceptance criteria aligned with FDA-style guidelines.
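The accuracy and coefficient-of-variation calculations described here are simple to express; a minimal sketch with hypothetical QC readings (the numbers are illustrative, not from this study):

```python
import statistics

def accuracy_percent(measured: list, nominal: float) -> float:
    """Accuracy: mean measured amount relative to the known (nominal) amount."""
    return 100.0 * statistics.mean(measured) / nominal

def cv_percent(values: list) -> float:
    """Precision as coefficient of variation: sample SD / mean, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

qc = [0.51, 0.49, 0.50, 0.52]     # hypothetical replicate readings of a 0.5 pg QC
print(accuracy_percent(qc, 0.5))  # ~101%
print(cv_percent(qc))             # ~2.6%
```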
Stability of derivatized analytes
Stability of derivatized analytes was assessed by measuring a 10 pg standard of E1, 17β-E2, 17α-E2, E3, 2Me-E2, and 4Me-E2 and 20 pg of 4OH-E2 at different storage times and temperatures. Samples were derivatized on different days, so that all samples could be injected on the same day, to ensure that LC-MS/MS conditions were the same for all samples. One set (n = 3) of standards was injected immediately after derivatization. The other sets of standards were injected after 24 h at 15°C, as well as after 1, 4, 8, and 31 d at -20°C or -70°C (n = 3 per set).
Matrix effects and recoveries
The protocol was validated in song sparrow brain, blood, and plasma. First, matrix effects were tested by creating pools and then performing serial dilutions (0.5, 1, 2, and 4 mg for brain tissue, and 2, 5, 10, and 20 µl for blood or plasma) to assess linearity and parallelism to the calibration curves. Second, we compared the peak areas for the IS in the three matrices and neat solution. Differences in IS peak area of <20% were considered acceptable. Third, recovery was assessed by creating a pool that was divided in two; one half was spiked with a known amount of steroid and the other was unspiked. We calculated the difference in steroid concentration between the two and compared it with the spike in neat solution. Recoveries were evaluated in blood, plasma, and brain at the sample amounts described above.
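The recovery calculation described above (spiked pool minus unspiked pool, divided by the amount of steroid added) can be sketched directly; the pool values below are hypothetical:

```python
def recovery_percent(spiked: float, unspiked: float, spike_amount: float) -> float:
    """Recovery: (spiked pool - unspiked pool) / amount of steroid added, in %."""
    return 100.0 * (spiked - unspiked) / spike_amount

# hypothetical pool: 0.30 pg endogenous estrogen, 2 pg spike,
# 2.26 pg measured after spiking -> recovery of about 98%
print(recovery_percent(2.26, 0.30, 2.0))
```

Recoveries near 100% indicate the matrix neither suppresses nor inflates the signal; the large values reported later for the methoxy estrogens flag matrix effects.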
Statistical analysis
A value was considered nondetectable if it was below the lowest standard on the calibration curve. When 20% or more of the samples in a group (blood or brain region) were detectable, then the nondetectable values were estimated via quantile regression imputation of left-censored missing data using the MetImp web tool (Wei et al., 2018a,b; Tobiansky et al., 2020, 2021). Data were imputed for each season and each estrogen independently, and imputed values were between 0 and the lowest standard on the calibration curve. When <20% of the samples in a group (blood or brain region) were detectable, then imputations were not performed, data were not analyzed statistically, and data are only reported in the text. To compare steroid levels in brain and blood, we assumed that 1 ml of blood weighs 1 g (Taves et al., 2011; Tobiansky et al., 2020).
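The 20%-detectability decision rule can be sketched as a small helper. The imputation itself was done with the MetImp web tool, so this sketch only flags which route a group takes (values and LLOQ are hypothetical):

```python
def handle_nondetects(values, lloq):
    """Apply the detectability rule described above (sketch).

    If >= 20% of samples in a group are detectable (at or above the lowest
    standard), nondetects are flagged for imputation between 0 and the LLOQ;
    otherwise the group is not analyzed statistically.
    """
    detectable = [v for v in values if v is not None and v >= lloq]
    frac = len(detectable) / len(values)
    return "impute_nondetects" if frac >= 0.20 else "report_only"

# hypothetical group of 10 samples, 6 above an LLOQ of 0.02 pg
group = [0.05, 0.03, None, 0.04, None, 0.06, 0.02, None, 0.03, None]
print(handle_nondetects(group, 0.02))  # impute_nondetects
```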
Statistics were conducted using GraphPad Prism version 9.02 (GraphPad Software). When necessary, data were log transformed before analysis. Regional differences in estrogen levels were analyzed by repeated-measures one-way ANOVA. ANOVAs were followed by Tukey multiple comparison tests, and corrected p values are shown. The significance criterion was set at p ≤ 0.05. Graphs show the mean ± SEM and are presented using the nontransformed data.
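The pipeline (log transform, then one-way ANOVA) can be illustrated with a pure-Python F statistic. Note that the study used repeated-measures ANOVA in GraphPad Prism; the sketch below is an unpaired simplification with hypothetical data:

```python
import math

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group MS / within-group MS."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total sample size
    grand = sum(sum(g) for g in groups) / n  # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical estrogen levels (pg/g) in three regions, log10-transformed
# before analysis as described above
raw = ([14.2, 15.1, 13.8, 14.9], [6.1, 5.8, 6.4, 6.0], [1.2, 1.5, 1.1, 1.3])
groups = [[math.log10(x) for x in g] for g in raw]
print(one_way_anova_F(groups))  # very large F: strong regional effect
```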
Specificity
As reported previously, DMIS interacted exclusively with the hydroxyl group on the phenolic ring and was not reactive with the 17-hydroxyl group of estrogens (represented in Fig. 3; Keski-Rahkonen et al., 2015; Huang et al., 2021). Further, catechol estrogens showed double derivatives, but not mono derivatives, as DMIS bound to both hydroxyl groups of the A-ring (Table 1).
For E1, 17β-E2, 17α-E2, and E3, the assay showed high specificity after optimization of the liquid chromatography and scheduled MRM transitions (Table 1).
The isomers 2OH-E2 and 4OH-E2 were indistinguishable because of their identical retention times and identical quantifier and qualifier transitions (Table 1). As a result, we only included 4OH-E2 in the calibration curve and QCs.
Sensitivity
Derivatization with DMIS greatly improved sensitivity of estrogen measurement. Using 1 pg of each estrogen in neat solution, we observed increased peak areas of analytes when derivatized with DMIS (Fig. 4B) compared with an assay without DMIS (Fig. 4A). The calibration curves were linear, even at the low range, demonstrating excellent assay sensitivity (Fig. 6).
The lower limit of quantification (LLOQ) was enhanced by DMIS derivatization for all estrogens (Table 2). After DMIS derivatization, 17β-E2 and E3 showed a 10-fold improvement in sensitivity, and the LLOQ went from 0.2 pg/tube to 0.02 pg/tube. For 17α-E2 and E1, the LLOQ went from 0.1 to 0.02 pg/tube (Table 2).
Accuracy and precision
Accuracy and precision were measured using QCs at two amounts of estrogens in neat solution (Table 3). Accuracies were ~100% for all estrogens at both amounts (Table 3).
Precision was measured as the coefficient of variation for QC replicates at both amounts. The intra-assay variation was acceptable in all cases (Table 3). For the interassay variation, the QCs were measured across multiple assays and were acceptable for E1 (11%), 17β-E2 (7%), 17α-E2 (8%), and E3 (6%).
Stability of derivatized analytes
Stability of seven derivatized estrogens was measured at varying temperatures and durations of storage. Storage temperatures were 15°C (autosampler temperature), -20°C, and -70°C. Durations of storage were 0, 1, 4, 8, and 31 d. Analyte/IS area ratios were expressed relative to time 0 (T0), in which injection into the LC-MS/MS occurred immediately following derivatization.
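Expressing stability relative to T0 is a simple ratio; a sketch with hypothetical peak-area ratios:

```python
def relative_stability(area_ratio_t: float, area_ratio_t0: float) -> float:
    """Analyte/IS peak-area ratio at time t, expressed relative to T0 (%).

    Values near 100% indicate the derivative is stable under that storage
    condition, as in the stability comparisons described above.
    """
    return 100.0 * area_ratio_t / area_ratio_t0

# hypothetical: ratio 0.98 after 31 d at -20 degC vs 1.00 at T0 -> ~98%
print(relative_stability(0.98, 1.00))
```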
All derivatized estrogens were unaffected by storage in the autosampler at 15°C for 1 d. Moreover, derivatized E1, 17β-E2, 17α-E2, and E3 were unaffected by storage for up to 31 d at -20°C or -70°C (Table 5). These data indicate that the DMIS derivatives of E1, 17β-E2, 17α-E2, and E3 are stable under normal laboratory operating conditions.
Stability of IS
The Q1 and MRM spectra of 17β-E2-d4 (directly from stock solution, after sham derivatization, and after DMIS derivatization) and 17β-E2 (directly from stock solution and after DMIS derivatization) are presented in Figure 5. The 17β-E2 mass spectrum is characterized by the presence of an abundant 271 m/z deprotonated molecule (Fig. 5A,B). The mass spectra of 17β-E2-d4 directly from stock solution (Fig. 5E,F) and after sham derivatization (Fig. 5G,H) showed the presence of an abundant 275 m/z molecule, indicating that deuterium loss did not occur. The derivatized 17β-E2 mass spectrum showed an abundant 431 m/z (Fig. 5C,D) and, most importantly, derivatized 17β-E2-d4 was characterized by an abundant 435 m/z (Fig. 5I,J). In addition, we did not detect an effect of heating 17β-E2-d4 (data not shown). Taken together, the data indicate the stability of the deuterated IS under the present conditions for derivatization.
Method validation in brain matrix
First, matrix effects were assessed by creating a 60-mg pool of homogenized song sparrow forebrain tissue. This pool of brain homogenate was then spiked with estrogens and serially diluted (4, 2, 1, and 0.5 mg per tube) to evaluate linearity. The slope of each estrogen in neat solution was compared with its slope in brain tissue, to determine the extent of matrix interference. Differences in slope were measured for E1 (7%), 17β-E2 (2%), E3 (7%), 4OH-E2 (2%), 2Me-E2 (1%), and 4Me-E2 (19%) and were satisfactory (Table 4). Second, the IS peak area in brain tissue was compared with the IS peak area in neat solution and ranged from 111-118% across brain tissue amounts (0.5-4 mg). Third, recoveries were assessed by subtracting unspiked sample values from spiked sample values from the same pool and dividing by the amount of estrogen added. Recoveries were calculated across brain tissue amounts (0.5-4 mg) and were acceptable for E1 (102%), 17β-E2 (102%), and E3 (93%). Recoveries were high and not acceptable for 4OH-E2 (613%), 2Me-E2 (214%), and 4Me-E2 (247%; Table 4), suggesting matrix effects with brain tissue for 4OH-E2, 2Me-E2, and 4Me-E2.
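The slope-comparison test for matrix effects can be sketched with a pure-Python least-squares slope and the percent-difference formula described for Table 4; the dilution-series numbers below are hypothetical:

```python
def slope(xs, ys):
    """Least-squares slope of y on x (simple linear regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def delta_slope_percent(slope_matrix: float, slope_neat: float) -> float:
    """Percent difference between the matrix and neat-solution slopes."""
    return 100.0 * (slope_matrix - slope_neat) / slope_neat

# hypothetical dilution series: signal vs tissue amount (mg)
amounts  = [0.5, 1.0, 2.0, 4.0]
neat     = [1.0, 2.0, 4.0, 8.0]      # perfectly proportional standard
in_brain = [0.98, 1.96, 3.92, 7.84]  # uniform 2% signal suppression
print(delta_slope_percent(slope(amounts, in_brain), slope(amounts, neat)))
# small negative value: ~2% suppression, well within acceptance
```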
Method validation in blood matrix
First, matrix effects were assessed by creating a 266-µl pool of song sparrow blood. This pool of blood was then spiked and serially diluted (20, 10, 5, and 2 µl per tube) to evaluate linearity. The slope of each estrogen in neat solution was compared with its slope in blood. Differences in slope were measured for E1 (3%), 17β-E2 (1%), E3 (5%), 2Me-E2 (3%), and 4Me-E2 (3%) and were satisfactory (Table 4). However, 4OH-E2 was not detectable when spiked in blood (Table 4). Second, the IS peak area in blood was compared with the IS peak area in neat solution and ranged from 85-115% across blood volumes (2-20 µl). Third, recoveries were assessed by subtracting unspiked sample values from spiked sample values from the same pool and dividing by the amount of estrogen added. Recoveries were calculated across blood volumes (2-20 µl) and were acceptable for E1 (90%), 17β-E2 (94%), and E3 (92%). Recoveries were high and not acceptable for 2Me-E2 (295%) and 4Me-E2 (200%; Table 4), suggesting matrix effects with blood for 2Me-E2 and 4Me-E2.
Estrogen levels in microdissected brain regions
We examined 11 brain regions in subjects from two seasons (n = 10-11 subjects per season). In nonbreeding males, the only estrogen detected in the brain was 17β-E2 in the NCM (14.8 ± 0.9 pg/g).
In contrast, in breeding males, nine brain regions had detectable levels of E1 (Fig. 7) and ten brain regions had detectable levels of 17β-E2 (Fig. 7). The Cb had detectable 17β-E2 but not E1. Both E1 and 17β-E2 were nondetectable in the NAc (Fig. 7). In breeding males, E1 and 17β-E2 levels showed similar patterns across brain regions, with highest levels in NCM (Fig. 7). 17α-E2 and E3 were nondetectable in the brain of breeding males. Although there were matrix effects with brain tissue, we can very tentatively suggest that 4OH-E2 (LLOQ 2 ng/g), 2Me-E2 (LLOQ 0.05 ng/g), and 4Me-E2 (LLOQ 0.02 ng/g) were nondetectable in the brain of breeding males.

Table note: Accuracy was measured by the recovery of a QC with a known concentration of estrogen. Precision was measured by the coefficient of variation (CV) of replicates. Recovery was assessed for brain, blood, and plasma by comparing unspiked samples with samples spiked with a known amount of steroid. Recovery was not assessed for 17α-E2, and only the low QC was used for accuracy and precision, so dashes are placed in those cells. n.d., nondetectable.
To compare E1 levels across blood and brain regions in breeding males, a one-way repeated-measures ANOVA was conducted. E1 levels showed a significant effect of sample type (blood or brain region; F(9,81) = 113.6, p < 0.0001; Fig. 7A). Post hoc comparisons revealed that E1 levels were higher in NCM than in other brain regions (all p < 0.0001) except the AH. No differences in E1 levels were found among POA, AH, VMH, and TnA; nor among POA, VMH, and BnST. Lastly, no differences were found in E1 levels among LS, VTA, and CG. E1 levels were lower in blood than in POA, AH, LS, BnST, VMH, VTA, CG, NCM, and TnA.
For 17β-E2 levels in breeding males, there was a significant effect of sample type (F(10,90) = 89.10, p < 0.0001; Fig. 7B). Post hoc comparisons revealed that 17β-E2 levels were higher in NCM than in other brain regions (all p < 0.0001) except POA and AH. No differences in 17β-E2 concentrations were found among VTA, CG, Cb, LS, and blood; nor among TnA, VMH, and POA. 17β-E2 levels were lower in blood than in POA, AH, BnST, VMH, NCM, and TnA.
Estrogen levels in circulation
In the nonbreeding season, no estrogens were detectable in the blood or plasma (n = 11).
In the breeding season, E1 was detectable in 50% of blood samples, and 17β-E2 was detectable in 60% of blood samples (n = 10). In breeding males, blood E1 level was 2.8 ± 0.5 pg/ml, and blood 17β-E2 level was 4.1 ± 0.7 pg/ml (Fig. 7). In breeding males, E1 and 17β-E2 were detectable in 70% of plasma samples (n = 10). Plasma E1 level was 3.7 ± 0.5 pg/ml, and plasma 17β-E2 level was 4.5 ± 0.7 pg/ml. 17α-E2 and E3 were nondetectable in the blood and plasma of breeding males. Although there were matrix effects with blood and plasma, we can very tentatively suggest that 4OH-E2 (LLOQ 100 pg/ml), 2Me-E2 (LLOQ 2.5 pg/ml), and 4Me-E2 (LLOQ 1 pg/ml) were nondetectable in the blood and plasma of breeding males.
Discussion
In the present study, we developed a method to measure four estrogens (E1, 17β-E2, 17α-E2, and E3) with high specificity, sensitivity, accuracy, and precision. We also attempted to measure catechol and methoxy estrogens (2OH-E2, 4OH-E2, 2Me-E2, and 4Me-E2) but encountered various problems. We employed DMIS, an estrogen-specific derivatization reagent, with LC-MS/MS. We validated DMIS derivatization for microdissected brain tissue (1-2 mg), whereas previous work applied DMIS only to serum samples. Assay sensitivity was improved by 10-fold for some estrogens and is among the best reported in the literature. We found substantial regional and seasonal variation in neuroestrogen levels in male song sparrows. For example, the NCM, a region with high aromatase expression, has the highest E1 and 17β-E2 levels. Estrogen levels in blood are very low. Lastly, estrogen levels are lower in the nonbreeding season than in the breeding season.
Estrogen measurement
Estrogens are present at low concentrations and similar in structure (Fig. 1); and therefore, it is challenging to measure estrogens in biological samples. Historically, estrogens have been measured with immunoassays, but these can lack the necessary specificity because of antibody cross-reaction (Faupel-Badger et al., 2010;Haisenleder et al., 2011). LC-MS/MS has higher specificity than immunoassays (Grebe and Singh, 2011;Rosner et al., 2013;Gravitte et al., 2021) and can be combined with derivatization to measure various endogenous estrogens.
Several derivatization methods are used for estrogen measurement with LC-MS/MS. Dansyl chloride is the most widely used derivatization reagent for 17β-E2 measurement. However, the product ion is generated from the dansyl moiety and is not specific for the analyte by mass (Xu and Spink, 2008; Li and Franke, 2015). Moreover, dansyl chloride does not provide the sensitivity required for measurement of estrogens in microdissected brain tissue (C. Jalabert and K. K. Soma, unpublished results). The reagent methyl-1-(5-fluoro-2,4-dinitrophenyl)-4-methylpiperazine (MPPZ) is useful for estrogen measurement but requires two reactions (Denver et al., 2019a). The reagent 2-fluoro-1-methylpyridinium-p-toluenesulfonate (FMP-TS) can be used to measure E1 and 17β-E2, but the derivatives decline after only 2 d of storage at -20°C (Faqehi et al., 2016).
Other reagents require complex sample preparation protocols, which can be time and labor intensive (Wudy et al., 2018;Denver et al., 2019a).
DMIS has several advantages in comparison to other derivatization reagents. First, the protocol is straightforward, consisting of a single reaction with relatively mild conditions. Second, DMIS derivatization provides high specificity because product ions are analyte-specific by mass. Third, assay sensitivity is among the best reported in the literature. Fourth, DMIS reacts specifically with estrogens and allows the simultaneous measurement of nonderivatized androgens and derivatized estrogens in the same sample (Keski-Rahkonen et al., 2015; Handelsman et al., 2020). The present study is a step forward from the pioneering work by the Handelsman group. First, DMIS was previously used to quantify estrogens in human and mouse serum but not in brain (Keski-Rahkonen et al., 2015; Handelsman et al., 2020); in the present study, DMIS was used for the first time to measure brain estrogens. Second, we reduced reagent evaporation during the derivatization reaction. Third, the previous studies focused on 17β-E2 and E1, and we added 17α-E2 and E3 to the panel. Fourth, we tested long-term stability of the derivatized analytes. Fifth, the previous studies used atmospheric pressure photoionization, which is relatively uncommon; this study used electrospray ionization, which is common and makes the protocol more broadly applicable. Lastly, we tested stability of the deuterated IS and validated the use of 17β-E2-d4 for DMIS derivatization (see below).

[Figure 5 caption: Mass spectra of the Q1 scan (left panels) and the MRM (right panels) of 17β-E2 directly from the stock solution (A, B) or after derivatization (C, D), and of 17β-E2-d4 directly from the stock solution (E, F), after sham derivatization (G, H), or after derivatization (I, J). Q1, quadrupole 1; Q3, quadrupole 3; m/z, mass-to-charge ratio. Difference in slope (Δ slope) was calculated by subtracting the slope of the standard curve in neat solution from the slope of the curve in sample, dividing by the slope of the standard curve in neat solution, and multiplying by 100, expressed in percentage (%). n.d., nondetectable.]
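The Δ slope calculation described in the Figure 5 caption reduces to a one-line formula. A minimal sketch (the function name and example values are illustrative, not from the paper):

```python
def delta_slope_percent(slope_sample: float, slope_neat: float) -> float:
    """Percent difference between the calibration slope measured in a
    biological sample and the slope of the standard curve in neat
    solution: (slope_sample - slope_neat) / slope_neat * 100."""
    return (slope_sample - slope_neat) / slope_neat * 100.0
```

Values near zero indicate negligible matrix effects; positive values suggest ion enhancement and negative values suggest ion suppression.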
Method development
Deuterated IS are more widely available and affordable than 13C-labeled IS. However, deuterated IS can be subject to hydrogen-deuterium exchange (Wudy et al., 2018). Here, we tested the stability of deuterated 17β-E2-d4 and did not observe deuterium loss (Fig. 5). Furthermore, in several brain regions (e.g., NCM, POA, VTA), the 17β-E2 levels observed here are similar to those observed without DMIS, indicating that DMIS derivatization yields accurate levels of 17β-E2. In addition, the QCs showed high accuracy and precision for E1, 17β-E2, 17α-E2, and E3 (Table 3). Low accuracy and precision in QCs often indicate hydrogen-deuterium exchange. Lastly, the same deuterated IS was used previously for derivatization with DMIS and performed well, although the IS stability was not directly assessed in these studies (Keski-Rahkonen et al., 2015; Handelsman et al., 2020).
We assessed several assay parameters. Assay specificity is key because many estrogens are similar in structure (Fig. 1). Here, E1, 17β-E2, 17α-E2, E3, 2Me-E2, and 4Me-E2 showed analyte-specific transition patterns by mass and retention time (Table 1), whereas the 2OH-E2 and 4OH-E2 isomers were not distinguishable due to retention time overlap. Assay sensitivity is also critical because estrogen amounts in blood and microdissected brain regions are extremely low. Here, DMIS derivatization improved the LLOQ for all seven estrogens (Table 2). The largest increases in sensitivity (10-fold) were observed for 17β-E2, 2Me-E2, and E3. This allowed measurement of E1 and 17β-E2 in regions in which we previously could not. We also assessed assay accuracy and precision, which were acceptable in all cases (Table 3). Stability of all seven derivatized estrogens was acceptable after storage in the autosampler (15°C) for 24 h, similar to previous results on 17β-E2 (Keski-Rahkonen et al., 2015). Here, DMIS derivatives of E1, 17β-E2, 17α-E2, and E3 showed good long-term stability (Table 5). However, derivatized catechol and methoxy estrogens were less stable, perhaps because of oxidation (MacLusky et al., 1981). Derivative stability is an important factor but often not reported (Denver et al., 2019a). No estrogens were measured in any blanks, and some biological samples (e.g., plasma samples from nonbreeding sparrows) had nondetectable estrogen levels, indicating that this ultrasensitive assay does not produce "false positives." Moreover, 17β-E2 levels in breeding NCM were very similar to previous results (without DMIS; Jalabert et al., 2021). Overall, indices of assay performance were acceptable for E1, 17β-E2, 17α-E2, and E3.

[Figure 6 caption: Calibration curves ranging from 0.02 to 20 pg, with insets displaying the lowest standards on the curve, for (A) estrone, (B) 17β-estradiol, (C) 17α-estradiol, and (D) estriol. Area ratio is calculated by dividing an analyte peak area by the IS peak area in the same sample.]
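As a concrete illustration of quantification from calibration curves of the kind shown in Figure 6 (area ratio of analyte to IS plotted against amount), here is a hedged sketch with hypothetical numbers, not data from this study:

```python
import numpy as np

# Hypothetical calibration standards: pg on column vs. area ratio
# (analyte peak area divided by IS peak area in the same sample).
amounts = np.array([0.02, 0.2, 2.0, 20.0])
area_ratios = np.array([0.011, 0.105, 1.02, 10.1])

# Fit a linear calibration curve: area_ratio = slope * amount + intercept.
slope, intercept = np.polyfit(amounts, area_ratios, 1)

def back_calculate(area_ratio: float) -> float:
    """Estimate the amount (pg) in an unknown sample from its area ratio."""
    return (area_ratio - intercept) / slope
```

Dividing by the IS peak area corrects each sample for extraction and derivatization losses, which is why the deuterated IS is added before extraction.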
Neuroestrogen measurement is also challenging because of the large amount of brain lipids that can interfere with assays (Taves et al., 2011). While many steroid extraction protocols are complex, ours is straightforward and rapid. Smaller tissue samples, as obtained by microdissection, contain less myelin and thus produce lower matrix effects. Estrogen measurement in large brain samples with dansyl chloride required a matrix surrogate for calibration curves, as matrix effects were present after extraction (Li and Gibbs, 2019). However, because of the limited amount of tissue obtained by microdissection (1-2 mg), there is a trade-off between reducing matrix effects and obtaining detectable quantities of estrogens. We used several techniques to assess potential matrix effects. No matrix effects were detectable with brain tissue for E1, 17β-E2, 17α-E2, and E3. In contrast, matrix effects were present with brain tissue for 4OH-E2, 2Me-E2, and 4Me-E2, suggesting ion enhancement (Wudy et al., 2018). Similar results were observed in blood and plasma (Tables 3, 4).
Catechol and methoxy estrogens are challenging to measure (MacLusky et al., 1981; Mesaros et al., 2014), and we faced some difficulties in their quantification. We could not distinguish between 2OH-E2 and 4OH-E2 because of co-elution. A study using the derivatization reagent MPPZ had the same issue, which was partially overcome by altering the liquid chromatography (Denver et al., 2019c). Derivatization reagents can interact with either (or both) hydroxyl groups in the aromatic ring, which can hinder measurements (Denver et al., 2019b). However, DMIS produced only double derivatives, which avoided this problem. The labile nature of catechol and methoxy estrogens is shown by our stability data (Table 5). Lastly, these analytes suffered from matrix effects (Table 3). Future studies can include additional IS for catechol and methoxy estrogens to correct for matrix effects. Overall, the current assay is sufficient to determine the presence or absence of 4OH-E2, 2Me-E2, and 4Me-E2 in brain, blood, and plasma samples but not sufficient for quantification of these analytes.
Estrogen levels in song sparrow circulation
We examined estrogens in the circulation of wild male song sparrows. No estrogens were detectable in the circulation of nonbreeding males. In the breeding season, we observed very low concentrations of blood E1 and 17β-E2 (detectable in 50% and 60% of samples, respectively) and of plasma E1 and 17β-E2 (detectable in 70% of samples). This is consistent with previous studies in song sparrows that showed a small increase in plasma 17β-E2 only at the beginning of the breeding season (Soma and Wingfield, 1999). Plasma 17β-E2 levels were lower than those in our previous study using radioimmunoassay (Heimovics et al., 2016), probably because of the higher specificity of LC-MS/MS. 17α-E2 and E3 were not detected in blood or plasma samples from breeding males. Our data also suggest that 4OH-E2, 2Me-E2, and 4Me-E2 are very low in the circulation of male song sparrows. In addition, our data suggest that circulating levels of 2OH-E2 are very low, because we could not distinguish it from 4OH-E2.

[Figure 7 caption: Levels of (A) estrone and (B) 17β-estradiol in the brain and blood of male breeding song sparrows. Bar graphs represent the concentration of estrogens (ng/g for brain tissue and ng/ml for blood).]
Estrogen levels in song sparrow brain regions

Estrogens are locally synthesized within the songbird brain. They can be produced either de novo from cholesterol or from conversion of circulating precursors (Schmidt et al., 2008; Jalabert et al., 2021). Key steroidogenic enzymes, such as 3β-hydroxysteroid dehydrogenase (3β-HSD; London et al., 2006), cytochrome P450 17α-hydroxylase/17,20-lyase (CYP17; London et al., 2003), and aromatase (Saldanha et al., 2000, 2013), are expressed in the songbird brain. In the song sparrow brain, activities of 3β-HSD and aromatase are region-specific and show seasonal changes (Soma et al., 2003; Pradhan et al., 2008, 2010). Thus, estrogen levels can differ greatly across specific brain regions. Many studies use whole brain or macro-dissection to collect large regions (e.g., forebrain or cerebral cortex), which lack spatial specificity (Li and Gibbs, 2019). In contrast, we used microdissected brain regions (1-2 mg), which allows for a simple extraction method (Grebe and Singh, 2011) and provides much greater spatial resolution.
We detected E1 and 17β-E2 in nearly all brain regions in breeding males. Overall, the present E1 and 17β-E2 brain levels match our previous data using an LC-MS/MS assay without derivatization. Importantly, the higher sensitivity of the current assay allowed us to detect estrogens in brain regions where we could not before, such as 17β-E2 in the BnST, CG, and Cb, and both 17β-E2 and E1 in the LS and VTA. In the NAc, none of the estrogens on our panel were detectable, probably because of its small size (only one punch for NAc) and low aromatase expression (Soma et al., 2003).
Estrogen measurement in the brain is challenging, as suggested by previous studies using immunoassays. Studies from the same lab have not been able to replicate results using an immunoassay for measurement of 17β-E2 in zebra finch brain microdialysate, which might be because of changes in the commercial immunoassay (C. de Bournonville et al., 2020). Further, when 17β-E2 was measured in the same sample by both LC-MS/MS and immunoassay in quail brain microdialysate, LC-MS/MS detected far lower 17β-E2 concentrations than the immunoassay (M.P. de Bournonville et al., 2021), suggesting antibody cross-reactivity. When a panel of 14 estrogens was analyzed in quail brain microdialysate by LC-MS/MS, 17β-E2 represented <20% of total estrogens and 2OH-E1 levels were high (M.P. de Bournonville et al., 2021).
Estrogens in the SBN regulate a variety of social behaviors. The POA had high levels of estrogens (Fig. 7). Similarly, in male quail, estrogens are higher in the POA than in the circulation (Liere et al., 2019). Further, in the male quail POA, sexual interactions rapidly modulate aromatase activity (Cornil et al., 2005) and estrogen levels (M.P. de Bournonville et al., 2021). In wild male song sparrows, aromatase is expressed in the POA, and aromatase activity in the POA-diencephalon is higher in the breeding season than in the molt or nonbreeding season (Soma et al., 2003). Local estrogen production in the POA likely promotes sexual behavior of male sparrows. The NCM is an auditory area that contains high levels of aromatase (Saldanha et al., 2000; Soma et al., 2003) and here showed the highest levels of E1 and 17β-E2 in breeding males. In zebra finches, 17β-E2 levels in the NCM rapidly increase in response to the presence of females or exposure to the song of another male (Remage-Healey et al., 2008), suggesting a role for locally produced estrogens in social interactions.
There is dramatic seasonal variation in brain estrogen levels. In nonbreeding males, we only detected 17β-E2 in the NCM. Improved sensitivity allowed the measurement of 17β-E2 in the NCM of nonbreeding males, which was not possible without DMIS derivatization. Aromatase expression in the song sparrow brain is generally highest in the breeding season. Consistent with this, the current data show that E1 and 17β-E2 are more abundant and widespread in the brain during the breeding season. Nevertheless, neuroestrogens promote nonbreeding aggression in male song sparrows (Soma et al., 2000b; Heimovics et al., 2015). Thus, neuroestrogen levels might be very low at baseline but increase rapidly in aggressive interactions, an idea that will be examined in a future study.
Here, we did not detect 17α-E2, E3, 4OH-E2, 2Me-E2, or 4Me-E2 in any brain or blood samples from wild male song sparrows. The lack of 17α-E2 in brain and circulation is consistent with the very low concentrations of 17α-testosterone (epitestosterone) in song sparrow plasma, as 17α-E2 can be synthesized from 17α-testosterone (Finkelstein et al., 1981). 17α-E2 is also absent in the brain and circulation of male quail (Liere et al., 2019). In quail, activity of estrogen-2-hydroxylase (CYP1A1; which synthesizes 2OH-E2 and 2OH-E1) is elevated within the SBN (Balthazart et al., 1994). Catechol estrogens are then methylated by catechol O-methyltransferase (COMT) to produce methoxy estrogens. Here, nondetectable 4OH-E2 suggests that 2OH-E2 levels are also very low. The lack of 2Me-E2 is consistent with data from quail brain. In this study, males had not been challenged (no simulated territorial intrusion), which could explain why we did not detect catechol and methoxy estrogens. Future work will examine the effects of a conspecific aggressive interaction.
In conclusion, in the present study, we developed a method to measure E1, 17β-E2, 17α-E2, and E3 in brain, blood, and plasma. The derivatization improved sensitivity, making this assay among the most sensitive reported in the literature. Further, the assay showed high specificity, accuracy, and precision. Its application to the song sparrow model provides insights into the neural synthesis of estrogens in songbirds. DMIS derivatization will have wide-ranging applications for measuring estrogens in songbirds and other animal models as well as in humans.
Evidence on the Relationship Between Emotional Intelligence and Risk Behavior: A Systematic and Meta-Analytic Review
The aim of the present study was to carry out a qualitative and quantitative synthesis of the existing literature studying the relationship between emotional intelligence and risk behavior. We conducted a systematic review and meta-analysis of the scientific evidence available relating both constructs. Particular attention was paid to identifying possible differences in this relationship as a function of the different conceptualizations of EI and the risk domain. The study was conducted following the Cochrane and PRISMA guidelines. Our results revealed a significant negative relationship between EI and health-related risk behaviors. However, this relationship was not observed in other risk domains such as finance and gambling. The relationship between EI and risk behavior differed according to the risk domain studied, which supports the notion that risk is a domain-specific construct. The results associated with the health-related risk behaviors are consistent with existing literature about the positive impact of emotional abilities on the health domain. A more complete understanding of the emotional mechanisms that underlie risk behavior could help to establish action guidelines and improve programmes to prevent and reduce the negative effects of risk behavior on our society.
INTRODUCTION
Emotions are fundamental in our lives, as they form part of the basis of our behavior and help us to make decisions, guiding our attention, memory, motivation, and learning (Dolan, 2002; Pessoa, 2008). In this regard, Emotional Intelligence (EI) combines two concepts that for years seemed to represent an oxymoron: cognition and emotion. EI refers to the ability to identify, understand, use, and regulate one's own emotional states and those of others (Mayer et al., 2016). Higher EI abilities have been positively related to various aspects of life such as physical and psychological health (Martins et al., 2010; Domínguez-García and Fernández-Berrocal, 2018; Megías et al., 2018c), optimal coping abilities (Salovey et al., 1999), appropriate social interactions (Lopes et al., 2011), lower levels of aggressive behavior (Megías et al., 2018b; Gómez-Leal et al., 2020), and greater wellbeing and vital satisfaction (Johnson et al., 2009; Andrei and Petrides, 2013; Laborde et al., 2014). Emotion also plays a central role in risk behavior (Loewenstein et al., 2001; Reyna, 2004; Slovic, 2010). It is well-known that people adapt their behavior in risk situations not only through a rational process but also by following their emotions (Slovic et al., 2004; De Martino et al., 2006; Rivers et al., 2008). However, whilst there is an extensive body of literature on the influence of emotion on risk behavior, the relationship between EI and risk behavior has received relatively little attention.
Risk behavior is defined as any behavior that generates a probability of objective or subjective loss, this loss being significant for the individual (Yates and Stone, 1992). Engaging in this kind of behavior often poses a threat to fundamental needs such as our health, safety, or wellbeing (Pellmar et al., 2002; WHO, 2009, 2018). Some examples include unsafe sexual activities, substance abuse, risky driving, and gambling with large amounts of money. All theoretical models of risk behavior include emotion as a fundamental factor in these behavioral choices (Damasio, 1994; Loewenstein et al., 2001; Reyna, 2004; Slovic et al., 2004). For example, Slovic et al. (2004, 2007) present risk as a feeling rather than as a statistical representation, and they coined the term affect heuristic to explain how stimulus-affect associations determine our behavior in many risk situations. In addition, another important factor to take into account is that the contexts where risk situations take place are usually characterized by a strong emotional charge, which influences our behavior (Ditto et al., 2006; Gutnik et al., 2006; Rivers et al., 2008; Megías et al., 2011). An emotional state of positive valence and high arousal, whether present prior to the contextual situation or generated by the situation itself, has been shown to encourage both unsafe sexual intercourse and increased gambling behavior (Sánchez et al., 2001; Ariely and Loewenstein, 2006; Cyders and Smith, 2008; Haase and Silbereisen, 2011). Evidence of the integration between emotional and cognitive processes in risk behavior has also been revealed at a neural level (Vorhold, 2008; Mohr et al., 2010; Megías et al., 2015). Research has shown that neural representations of risk activate brain areas involved in emotional processing such as the anterior insula, the amygdala, and the ventromedial prefrontal cortex, among others (Vorhold, 2008; Mohr et al., 2010; Megías et al., 2015, 2018a).
Given the key role that emotion plays in risk behavior, it is expected that our ability to perceive, use, understand, and manage our emotions influences our tendency to engage in risk-taking. These abilities should act as a protective factor against risk behavior; that is, individuals with better abilities should show a tendency to engage in fewer risk behaviors. As already described, the concept of EI encompasses all these emotional abilities (Mayer et al., 2016). Some research studies (albeit scarce) have aimed to explore the relationship between EI and risk behavior (Rivers et al., 2013; Fernández-Abascal and Martín-Díaz, 2015; Lando-King et al., 2015; Hayley et al., 2017); however, the literature does not present conclusive results and no systematic review has yet been conducted to synthesize the results of these investigations.
One challenge inherent to the study of risk behavior is that risk is a domain-specific construct (Weber et al., 2002). Risk-taking does not constitute a rigid pattern of behavior; rather, it is expressed in different ways across various areas of our lives (e.g., social, finance, health, security, or recreational). An individual can have a risky attitude in some areas and not in others. For instance, one might engage in unsafe sex and drunk driving but be conservative when dealing with financial investments. Thus, when studying attitudes toward risk, we should always take into account the context in which the decision is made. Accordingly, previous research has revealed how, depending on the contextual situation, different personality traits influence the tendency to take risks (Blais and Weber, 2006; Lozano et al., 2017). For example, impulsivity-related traits such as high levels of positive urgency predict increases in risky sexual practices and risky driving behaviors (Zapolski et al., 2009; Baltruschat et al., 2020), whilst high levels of negative urgency appear to be more strongly associated with problematic alcohol use, self-harming behaviors, or eating disorders (Dir et al., 2013; Mallorquí-Bagué et al., 2020). Likewise, the sensation-seeking trait has been related to recreational risks rather than financial risks (Lozano et al., 2017). As is the case with these personality traits, the protective role of EI in risk-taking behavior could depend on the risk domain being studied.
It is also important to note that the concept of EI in the literature has been investigated from three different approaches, depending on the construct-method pairing: the self-report mixed model, the self-report ability model, and the performance-based ability model (Joseph and Newman, 2010). The self-report mixed model understands EI as a broad construct composed of various measures of personality and affect, which are assessed using subjective self-report measures. The self-report ability model considers EI as a form of mental ability based on emotional aptitudes and employs subjective self-report measures through which people assess the perception of their own EI abilities. Finally, the performance-based ability model also treats EI as a form of mental ability but assesses EI in a more objective manner through instruments where individuals must solve questions with correct and incorrect responses. Although the three models are popular in the EI literature, research has shown that the performance-based ability model is less sensitive to subjective and social desirability bias (Brackett et al., 2006; Webb et al., 2013) and is more consistent in predicting general behavior (Mayer et al., 2016; Gutiérrez-Cobo et al., 2017). These differences in the definition of the construct and assessment method could result in discrepant findings in the study of the relationship between EI and risk behavior.
The purpose of the present study was to conduct a systematic review and meta-analysis that allows for a qualitative and quantitative synthesis of the scientific evidence available on the relationship between EI and risk behavior. Although it is well-known that EI promotes numerous benefits in a wide variety of psychological and behavioral variables, to date, research studying the role of EI as a protective factor against risk-taking behavior is limited, and there is no systematic review that summarizes the existing literature and provides a complete overview of this phenomenon. We propose the existence of a negative relationship between EI and risk behavior; however, given some of the mixed findings reported in the literature, we pay particular attention to determining whether these differences among studies arise as a function of the risk domain where the behavior is performed and the conceptual model of EI employed. A more in-depth understanding of this relationship could help to improve actions aimed at preventing and reducing the effects of risk behavior on our society.
METHODS
The systematic review and meta-analysis were conducted according to Cochrane guidelines (Higgins and Green, 2011).
Information Sources and Search Terms
In order to identify all eligible studies that associate EI with risk behavior, a comprehensive systematic literature search was conducted using the PsycINFO, PubMed, and Scopus databases. The literature search was performed during April 2020. The searches included articles published between 1990 (inception of the concept of EI) and April 2020 containing in the title, abstract, or keywords the term "emotional intelligence" together with one of the following terms: "risky behavior," "risk behavior," "risky behaviour," "risk behaviour," "risk taking," and "risk perception." The search was restricted to only these terms in order to ensure that the selected articles assessed the constructs of EI and risk behavior with instruments designed specifically for this aim. In addition, hand searches were conducted on the reference lists of the selected articles to check that no studies were overlooked (no new articles were obtained from reference lists).
Eligibility Criteria
The aim of the search strategy was to locate and select for inclusion all those studies investigating the relationship between EI and risk behavior that have been published in peer-reviewed scientific journals before April 2020. For inclusion, the studies were required to assess EI through instruments based on one of the three theoretical models of EI (Joseph and Newman, 2010), and work with instruments specifically designed to assess risk behavior, understanding it as a decision-making process in which the individuals face the likelihood of incurring an objective or subjective loss, which must be of significance to said individuals (Yates and Stone, 1992). The exclusion criteria were: (a) studies not published in scientific journals such as theses, books, or reports; (b) theoretical, qualitative, or review articles; (c) articles written in a language other than English or Spanish; (d) studies that did not examine behavior that meets the definition of risk behavior; (e) studies assessing EI through instruments that are not considered measures of EI; (f) studies that used an EI questionnaire, not to evaluate EI, but a single aspect or ability associated with EI, for example emotion regulation; (g) studies that examined EI and risk behavior, but not the relationship between them.
Selection of Studies
Two review authors (M.T.S.L. and A.M.R.), working independently, carried out the search and examined the selected studies according to the inclusion and exclusion criteria. Discrepancies were resolved through discussion with two other authors (P.F.B. and R.G.L.). The results of the literature search and study selection are shown (following PRISMA guidelines) in the flow chart presented in Figure 1 (Moher et al., 2009).
A total of 117 articles were identified by entering the search terms in the databases. After removing duplicates, 90 articles remained for abstract screening. Of these, 58 articles were selected for a full-text review based on the exclusion criteria. Finally, 15 studies relating EI to risk behavior and meeting the inclusion and exclusion criteria were included in the systematic review.
Of the 90 total articles examined, 75 were removed based on the following exclusion criteria: 17 articles not published in scientific journals, 8 theoretical or review articles, 7 articles written in a language other than English or Spanish, 4 articles understanding risk behavior as a behavior external to the individual and not as a decision-making process that culminates in risk behavior (e.g., perceived risk of a terrorist attack or risk of revictimization), 14 articles that did not use a specific EI measurement instrument, 18 articles that investigated certain aspects related to emotional abilities but not EI per se (e.g., facial recognition of emotional expressions or emotional regulation strategies), and 7 articles that evaluated EI and risk behavior but did not explore the link between the two concepts.
Data Extraction
For each of the selected articles, we extracted a set of data related to authors, year of publication, sample size, mean age, gender, country of origin of the study, risk behavior and EI measurement instruments, risk behavior domain, EI model, primary outcomes, and effect size (see Table 1). Pearson's r correlation coefficient was used to determine effect size. When articles presented more than one measurement instrument for EI or risk behavior, the results for these instruments were described individually.
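Pearson's r is not always reported directly. When only inferential statistics are available, r can often be recovered; a standard conversion from a t statistic and its degrees of freedom is shown below purely as an illustration (the paper itself requested missing values from the study authors rather than converting):

```python
import math

def r_from_t(t: float, df: int) -> float:
    """Pearson's r recovered from a t statistic and its degrees of
    freedom: |r| = sqrt(t^2 / (t^2 + df)); the sign follows t."""
    r = math.sqrt(t * t / (t * t + df))
    return math.copysign(r, t)

# r_from_t(2.0, 96) -> 0.2
```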
Data Synthesis and Statistical Analysis
The articles that fulfilled the inclusion and exclusion criteria were synthesized using a qualitative narrative approach and a quantitative meta-analysis. We decided to undertake a qualitative synthesis along with the meta-analysis to better address the heterogeneity of the selected articles. Many of the studies varied in their assessment methods, characteristics of the variables, and use of covariates, while some also included additional designs to those aimed at analyzing the primary relationship of interest. Thus, although a qualitative synthesis provides less objective results than a meta-analysis, it allows us to carry out a more in-depth individual discussion of each study.
The qualitative synthesis was based on the description of the data and results collected from the systematic review. For those studies in which the measurement instruments of EI and risk behavior did not provide a global score, but assessed the construct through several dimensions, the results for each of the dimensions were considered individually (see Table 1). For the risk measurement instruments, those dimensions that did not explicitly assess risk behavior (e.g., feelings of anxiety) were excluded.
To conduct the meta-analysis, effect sizes were extracted from those articles containing such information. As already mentioned, we used Pearson's r correlation coefficient as a measure of effect size. When Pearson's r was not available in the article, we tried to compute this coefficient from descriptive or inferential statistics. However, these articles did not include the necessary information, and we contacted the corresponding authors via email in order to request these values (Pearson's r). For those studies assessing several risk behaviors (e.g., traffic risk taking and substance risk taking) or using more than one EI measuring instrument, the individual effect size of each of these outcomes was included in the meta-analysis. In order to handle dependency among effect sizes within these studies, a three-level meta-analytic model was conducted (Van Den Noortgate et al., 2013). The three-level approach includes an additional level of analysis in which within-study effect sizes are nested prior to the between-study estimation. Moreover, there were articles that did not provide a global score of EI, but individual scores of the dimensions that comprise the EI construct. In these cases, we averaged the effect sizes of the EI dimensions within each study in order to obtain an approximation of global EI. With respect to the meta-analytic model used, given the differences across studies in characteristics of the sample and methods, a random-effects approach (a three-level random-effects model) was conducted to pool the effect sizes (Hedges and Vevea, 1998; Viechtbauer, 2010). The model was estimated using restricted maximum likelihood (REML), since this procedure provides a good balance between unbiasedness and efficiency, particularly for small sample sizes (Viechtbauer, 2005).
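The three-level REML model is best fit with dedicated software such as metafor, but the core of random-effects pooling of correlations can be sketched in a few lines. The following is a simplified two-level DerSimonian-Laird estimator on Fisher z-transformed correlations, shown for illustration only: it ignores the within-study nesting level and uses a method-of-moments rather than a REML estimate of between-study variance.

```python
import numpy as np

def pool_correlations_dl(rs, ns):
    """Two-level random-effects pooling of Pearson correlations:
    Fisher z-transform, DerSimonian-Laird tau^2, inverse-variance
    weights. Returns (pooled r, tau^2, Cochran's Q)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                    # Fisher z-transform
    v = 1.0 / (ns - 3.0)                  # sampling variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)   # fixed-effect estimate
    q = np.sum(w * (z - z_fixed) ** 2)    # Cochran's Q
    df = len(rs) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)         # DL between-study variance
    w_star = 1.0 / (v + tau2)             # random-effects weights
    z_re = np.sum(w_star * z) / np.sum(w_star)
    return np.tanh(z_re), tau2, q
```

The Fisher z-transform stabilizes the sampling variance of r (approximately 1/(n − 3)), which is why pooling is done on the z scale and back-transformed at the end.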
Heterogeneity among studies was evaluated using Cochran's Q statistic, and potential publication bias was evaluated by Egger's test and Rosenthal's Fail-Safe N test (Egger et al., 1997; Viechtbauer, 2010). The statistical analyses were conducted using the metafor package implemented in R software version 3.6 (The R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org).
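The core of Egger's regression test mentioned above can be sketched in a few lines: regress each study's standardized effect (effect/SE) on its precision (1/SE) and examine the intercept, a non-zero intercept suggesting small-study asymmetry. A complete test would also compute the intercept's standard error and p-value; the data here are illustrative:

```python
def egger_test(effects, ses):
    """Intercept of Egger's regression for funnel-plot asymmetry (sketch).

    Regresses the standardized effect (effect / SE) on precision (1 / SE)
    by ordinary least squares and returns the intercept only.
    """
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx                      # intercept
```

With a constant true effect and no small-study bias, the points fall on a line through the origin and the intercept is (numerically) zero.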
Data Availability Statement
The raw data file included in the meta-analysis is available from the corresponding author on request. Furthermore, effect sizes for each study can be found in Table 1. For those studies that did not show a global score of EI, primary outcomes were reported separately for each EI dimension and effect sizes were averaged across EI dimensions within each study in order to provide an approximate effect size for the global EI (see Results of the meta-analysis section).
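One way to implement the within-study averaging described above is to average the dimension-level correlations on the Fisher z scale and back-transform; simple arithmetic averaging of the raw r values is another common choice, and the text does not specify which was used. A minimal sketch with illustrative values:

```python
import math

def average_dimension_rs(rs):
    """Average several dimension-level Pearson r values into one
    approximate global-EI effect size.

    Averages on the Fisher z scale and back-transforms; one reasonable
    implementation of the within-study averaging described in the text.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    return math.tanh(sum(zs) / len(zs))
```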
Search Results and Characteristics of the Included Studies
Fifteen articles meeting the inclusion and exclusion criteria were included in the systematic review. Table 1 provides an overview of the main characteristics of the studies. The total number of participants across the 15 articles was n = 5,461 (mean percentage of men across studies = 43.18%; mean age across studies = 22.97 years). The distribution of nationalities was: USA (three studies), Italy and Spain (two studies each), Australia, Iran, Ireland, Lithuania, Mexico, Pakistan, Turkey, and UK (one study each). The selected articles measured EI from the three approaches proposed by Joseph and Newman (2010). Six articles used an EI measurement instrument based on mixed models: the Trait Emotional Intelligence Questionnaire, including its reduced version and adaptation for adolescents (TEIQue-SF and TEIQue-ASF; Petrides and Furnham, 2001; Petrides, 2009), the Bar-On Emotional Quotient Inventory: Youth Version (BarOn EQ-i: YV; Bar-On and Parker, 2000), the Shrink Emotional Intelligence Questionnaire (Yadegar Tirandaz et al., 2020), and the Scale of Emotional Intelligence (SEI; Batool and Khalid, 2009). Six articles used an EI measurement instrument based on the self-reported ability model: the Schutte Self-Report Inventory (SSRI; Schutte et al., 1998) and the Swinburne University Emotional Intelligence Test (SUEIT; Palmer and Stough, 2001). With regard to the performance-based ability model, two articles employed an EI measurement instrument based on this approach; specifically, these studies used the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer et al., 2002). In addition, one article employed measures of both the mixed model and the self-reported ability model (TEIQue and TMMS; Salovey et al., 1995; Petrides, 2009).
The instruments employed to measure risk behavior varied considerably between studies. We found 16 different measurement instruments (see Table 1), both self-report and behavioral measures, which provided results from the following risk domains: risk behaviors associated with health (e.g., sexual risk behavior, illicit substance and alcohol abuse, and risky driving behavior) and risk behaviors associated with finance and gambling. Some articles assessed several types of risk behavior in the same study and one article assessed risk perception in general (including different domains in a single risk score).
Qualitative Synthesis of the Systematic Review
Of the 15 articles included in the systematic review, 13 showed some statistically significant relationship between EI and risk behavior (see Table 1). Two articles did not find any significant relationship (Yip and Côté, 2013; Hayley et al., 2017). Focusing on those articles that reported significant results in exclusively one direction, we can observe that five revealed a negative relationship (Zavala and López, 2012; Rivers et al., 2013; Micklewright et al., 2015; Anwar et al., 2016; Vaughan et al., 2019) and four revealed a positive relationship (Alipour and Mijani, 2013; Panno et al., 2015; Panno, 2016; Dinç Aydemir and Aren, 2017). It should be noted that two of these studies did not show any relationship between EI and risk behavior through correlation analysis, but the relationship became significant and positive when it was integrated in more complex models involving confounding and mediating variables (Panno et al., 2015; Dinç Aydemir and Aren, 2017). The remaining four articles showed distinct patterns of results as a function of the EI dimension or type of risk studied (Fernández-Abascal and Martín-Díaz, 2015; Lana et al., 2015; Lando-King et al., 2015; Malinauskas et al., 2018). In this regard, the articles of Fernández-Abascal and Martín-Díaz (2015) and Lando-King et al. (2015) reported different results depending on the EI dimension evaluated (they did not compute a global EI score) and, although they emphasized the existence of a negative relationship between EI and risk behavior, they also found null relationships for some EI dimensions. Likewise, Lana et al. (2015) explored several types of risk behaviors and observed that participants with lower levels of EI had a higher probability of engaging in excessive alcohol consumption and unsafe sex, but no significant effects were found for illicit drug use. Finally, Malinauskas et al. (2018) revealed a positive relationship between EI and traffic risk taking and a negative relationship between EI and substance risk taking. Taken together, these results suggest a tendency toward a negative relationship between EI and risk behavior, but the complete review of this literature indicates mixed results. This lack of consistency could be a consequence of the different EI models used and the diversity of risk domains assessed in this field of research. For a better understanding of these findings, we decided to examine the studies by classifying them according to EI model and risk domain.
As shown in Table 1, the studies included in the systematic review have made use of the three different approaches to EI proposed by Joseph and Newman (2010). Focusing on those articles that employed the self-report mixed model, we found that three articles showed a negative relationship between EI and risk behavior, two showed a positive relationship, and two showed mainly negative relationships but also null relationships. With regard to the self-report ability model, one article showed a negative relationship, two showed a positive relationship, one showed a null relationship, and another three showed mixed results (one of these articles also included a mixed-model measure). Finally, two articles used the performance-based ability model; one of them showed a negative relationship and the other a positive relationship. Therefore, according to these findings, the relationship between EI and risk behavior does not appear to depend on the EI model employed.
With respect to the risk measures, it is known that risk behavior is a construct that is dependent on the study domain, and it can be classified into domains such as health, social, financial, ethical, or recreational (Weber et al., 2002). By examining the risk domains assessed in each of the articles included in the systematic review and following the Weber et al. (2002) categorization, we can observe that these articles can be grouped into two main blocks: health-related risk behaviors and financial or gambling-related risk behaviors (see Table 1; we excluded an article that studied risk perception in general). Eight of the articles focused on the study of health-related risk behaviors such as substance abuse, excessive alcohol consumption, sexual risk behavior, risky driving behavior, or general health risk behavior. Of these eight articles, three reported exclusively a negative relationship and another three reported mainly negative relationships but also some null relationships. The only cases where EI did not seem to be negatively related to health-related risk behavior were in the field of driving. Two articles worked with risky driving behavior, revealing a positive relationship with EI in one case (Malinauskas et al., 2018) and an absence of relationship in the other (Hayley et al., 2017). It should also be noted that the results in the health risk domain did not depend on the EI model (see Table 1). In summary, these results appear to support the existence of a negative relationship between EI and behaviors linked to the health risk domain (with the exception of risky driving). Conversely, the group of six articles employing risk measures related to finances and gambling tasks (two and four studies, respectively) did not reveal a uniform pattern of results. Two articles showed a positive relationship, two showed a negative relationship, and two showed no relationship.
Finally, it is worth noting that none of the articles analyzed the relationship between EI and risk behavior as a function of gender. With respect to age and country of origin of the study, we observed that there does not seem to be a pattern of results associated with these variables (see Table 1).
Results of the Meta-Analysis
Effect sizes from 12 of the 15 articles included in the systematic review were introduced in the meta-analysis (see Table 1). The three remaining articles were excluded because it was not possible to obtain the required effect sizes from the articles or by request from the corresponding authors. The whole sample of participants for the meta-analysis was n = 5,100 (mean percentage of men across studies = 41.98%; mean age across studies = 21.52 years).
The three-level random effects model revealed no significant relationship between EI and risk behavior [estimated effect size = −0.06, SE = 0.05, 95% CI [−0.17, 0.04], p > 0.05]. The test for heterogeneity suggested the presence of heterogeneity in the sample [Q(18) = 248.42, p < 0.001]. Since, following the findings of the qualitative synthesis, we observed that the relationship between these constructs appeared to depend on the risk domain studied, we decided to go one step further and include risk domain as a moderator in the meta-analytic model. The two levels of the moderator were health-related risk behaviors and financial/gambling-related risk behaviors. The results for this three-level random/mixed-effects model revealed a significant relationship between EI and health-related risk behaviors [estimated effect size = −0.13, SE = 0.06, 95% CI [−0.25, −0.01], p = 0.03], but not between EI and the financial/gambling domain [estimated effect size = 0.05, SE = 0.08, 95% CI [−0.11, 0.21], p > 0.05]. The moderating effect of the risk domain factor was marginally significant [Q M(1) = 3.29, p = 0.06; heterogeneity: Q E(17) = 232.10, p < 0.001]. In addition, Egger's test did not reveal evidence of possible publication bias (p > 0.05), and Rosenberg's Fail-Safe N indicated that 294 additional studies with an effect size of zero would be required to reduce the p-value to a non-significant level in the health domain.
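The reported intervals follow directly from each estimate and its standard error. For instance, a Wald-type 95% CI for the health-domain effect (−0.13, SE = 0.06) reproduces the reported [−0.25, −0.01]. A quick sketch, assuming the normal critical value z = 1.96 (the fitted model may in fact have used a t-based critical value):

```python
def wald_ci(estimate, se, crit=1.96):
    """Approximate 95% Wald confidence interval: estimate +/- crit * SE."""
    return estimate - crit * se, estimate + crit * se

# Health-domain moderator result reported in the text.
lo, hi = wald_ci(-0.13, 0.06)   # approximately (-0.25, -0.01)
```

Because the interval excludes zero, the effect is significant at the 5% level, consistent with the reported p = 0.03.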
A forest plot showing the individual and pooled effect sizes (with 95% confidence interval) from the studies relating EI and healthrelated risk behaviors (i.e., from the significant risk domain) is presented in Figure 2.
DISCUSSION
The aim of this study was to synthesize existing findings on the relationship between EI and risk behavior in order to advance our understanding of the decision-making process in risk contexts. Importantly, this relationship was studied in terms of the various conceptualizations of EI and risk domains. To this end, we conducted a qualitative and quantitative systematic review of the existing literature.
Fifteen articles studying the relationship between EI and risk behavior were selected for the qualitative analysis after carrying out a systematic search of the literature (April 2020) and applying the inclusion and exclusion criteria described in the Method section. These articles provided a total sample of n = 5,461 participants. With respect to the quantitative analysis, 12 out of the 15 articles selected through the systematic review were appropriate and provided the information needed to be included in the meta-analysis (n = 5,100). The qualitative analysis revealed that five articles reported a significant negative relationship between global EI and risk behavior, four reported a significant positive relationship, and two reported no relationship. In addition, there were four articles that investigated the relationship between EI and risk through different dimensions of EI (did not report a global EI score) or in more than one type of risk behavior, reporting different results depending on the studied variable. In general, these four articles showed a greater support for the existence of a negative relationship, but null and positive results were also found as a function of the EI dimension and the type of risk. With respect to the results of the quantitative analysis, a three-level random effects meta-analytic model revealed no significant relationship between EI and risk behavior (estimated effect size = −0.06, p > 0.05). Preliminary analysis of these findings suggests a rather unclear pattern of results; however, as we describe below, a more in-depth analysis of these studies revealed that these differences depended on certain moderating factors.
When observing the results in more detail, we can appreciate that the articles included in the systematic review used the three EI models proposed by Joseph and Newman (2010). Negative, positive, and null relationships were found for the three EI models, and the trends did not vary as a function of the model used. Thus, the relationship between EI and risk behavior seems to be independent of the type of EI model employed, at least in these studies. On the other hand, a key factor that does seem to shed light on the discrepancies found in the results is the risk domain. The selected articles primarily focused on two risk domains: risk behaviors associated with health (e.g., alcohol and substance abuse, sexual behavior, and risky driving behavior) and risk behaviors in matters related to finance and gambling. When differentiating between these two domains, we observed a clearer pattern of results for the health-related risk domain. Three of the eight articles studying the health-related risk domain showed a significant negative relationship with EI, and another four articles also showed mainly significant negative relationships, although coupled with some positive and null relationships depending on the EI dimension and risk domain studied. The results of the meta-analysis further clarify these findings, revealing that, when risk domain was included as a moderating variable, there was a negative relationship between EI and health-related risk behaviors (estimated effect size = −0.13, p = 0.03). The higher the EI levels, the lower the incidence of health risk behaviors. In this regard, EI could act as a protective factor against risk-taking.

FIGURE 2 | Forest plot displaying the individual and pooled effect sizes (and 95% confidence intervals) of the studies relating EI and health-related risk behaviors included in the meta-analysis. Box sizes represent the weight of each study in the meta-analysis.
However, no clear pattern of results was found for the finance/gambling domain, with studies reporting positive, negative, and null relationships (meta-analysis results: estimated effect size = 0.05, p > 0.05).
Among the results found for health-related risk behavior it is worth noting the particular case of risky driving behavior. Unlike other risk behaviors associated with health, this type of behavior did not reveal any negative relationship with EI [one article found a positive relationship (Malinauskas et al., 2018) and another a null relationship (Hayley et al., 2017)]. Whilst risky driving behavior is considered a public health risk (WHO, 2018), this behavior has its own particularities that distinguish it from the rest of the risks studied in the health domain. We propose that, although the proneness to taking risks while driving evidently poses a danger to our physical integrity, in this case, the consequences of the behavior may depend more on our skills when compared with other health-related risk behaviors (Megías et al., 2018a,d).
In summary, with the exception of risky driving behavior, our findings support the existence of a negative relationship between EI and risk behavior in the health domain, regardless of the EI model used. Interestingly, in our systematic literature search, prior to applying the exclusion criteria, we found three additional articles that supported these findings. These articles were excluded because they did not use a measurement instrument to specifically evaluate risk behavior. Two of the articles aimed to assess the level of EI in clinical population groups characterized by problems associated with health risk behaviors, such as illicit drug users and alcohol abusers (Kornreich et al., 2011; Romero-Ayuso et al., 2016). Both studies revealed that the clinical groups had lower levels of EI than the non-clinical groups. In the third article, Goudarzian et al. (2017) showed that EI training can help to reduce the potential use of illicit drugs.
From a theoretical perspective, the relationship between EI and health risk behavior could be understood through the critical role played by emotions in decision making, particularly in risk contexts (Ditto et al., 2006; Gutnik et al., 2006; Rivers et al., 2008; Megías et al., 2011). Many of the risk behaviors associated with health are usually characterized by positive short-term consequences, such as satisfying impulses. Some examples include having unprotected sex for pleasure, drinking five or more drinks at a party for fun, riding a motorcycle without wearing a helmet due to considerations of comfort, driving at high speed for adrenaline, or walking through an unsafe area of town in order to take a short cut to our destination. In this type of context, the emotion elicited by the short-term rewards can guide our behavior (Cyders and Smith, 2008). This effect is particularly evident if the individual is already in a strong positive or negative emotional state, which increases the influence of the short-term rewards (Cyders and Smith, 2007; Deckman and DeWall, 2011; Smith and Cyders, 2016). Higher emotional abilities, such as a better perception and understanding of our emotions and a greater ability to control them, could act as protective factors against the tendency to be guided by short-term rewards and risk taking in health-related contexts. People with higher levels of EI would be better able to understand and weigh up the health risks in situations with a high emotional burden (Mayer et al., 2001).
The results of our review have also shown that there is no clear evidence supporting the existence of a relationship between EI and risk behavior in the domain of finance and gambling. While we know that people adapt their behavior in risk situations (De Martino et al., 2006; Slovic et al., 2007; Rivers et al., 2008), we also know that the way we adapt our behavior is specific to the risk domain (Weber et al., 2002). Thus, an individual can show a tendency to behave in a risky way in one domain but not in others. There are a wide variety of cognitive and emotional factors that can affect risky decision making, and the relative weight of these factors will depend on the contextual situation (Loewenstein et al., 2001; Reyna, 2004; Slovic et al., 2007; Megías et al., 2015). Focusing on the case of financial risk-taking, this type of behavior involves markedly different contextual characteristics in comparison with the previously studied health-related risk behaviors. In the financial context, taking certain risks is unavoidable in the pursuit of economic gains, that is, it is an integral part of the business. In fact, risk taking is considered to be one of the most important aspects of entrepreneurship (Wiklund and Shepherd, 2005). A similar situation could also be occurring in those studies included in the systematic review in which risk behavior was assessed through gambling tasks such as the Iowa gambling task, Columbia card task, and Cambridge gambling task (see Table 1). In these gambling tasks, risk taking, when adopted appropriately, can be necessary for improving performance. Taken together, these assumptions suggest that the decision to take risks has different consequences in health and financial/gambling contexts, and, therefore, different factors could be involved in the decision-making process.
In this regard, the behavioral differences observed in the current review as a function of the context where the risk is performed are in accord with the domain specificity of risk behavior (Weber et al., 2002).
The results of the present study are not exempt from some limitations. The articles included in the systematic review only focused on the risk domains of health and finance/gambling, and in the latter case only six articles were found. With the objective of gaining a more complete understanding of the influence of EI on risk decision making, further research should focus on other risk scenarios such as those in social, recreational, and ethical contexts (Blais and Weber, 2006). In order to increase the generalizability of the findings, it will also be necessary to address possible gender and age differences. Moreover, future studies should employ experimental designs to examine causality and, thus, establish the possible protective role of EI in health risk behavior. Finally, we must also consider some intrinsic limitations of the measurement instruments used in the literature reviewed. A number of different EI and risk measures were included, each of them with very different characteristics (e.g., overall scores vs. dimensional scores, self-report vs. performance-based measures, different EI models and risk domains, etc.), which hinders extrapolation of the results. For example, as previously mentioned, risk situations are highly emotionally charged, which could bias self-report measures, since the responses of individuals in hypothetical situations (without exposure to the emotional burden) can be somewhat different to the responses elicited in contexts closer to real situations. Further, it is recommended that future research studies focus on performance-based ability measures of EI, such as the MSCEIT (Mayer et al., 2002). Most of the studies included in this review (13 of the 15) used self-report EI measures.
Although these instruments present a greater ease and speed of administration, previous research has shown that the performance-based ability model, in comparison with self-report ability and mixed models, has better divergent validity and greater predictive ability for performance in emotionally charged cognitive tasks and general behavior (Gutiérrez-Cobo et al., 2016; Mayer et al., 2016; Megías et al., 2017).
In conclusion, the results of this systematic review and meta-analysis contribute toward achieving an in-depth understanding of the relationship between EI and engagement in risk behavior in various settings. The findings obtained from our search of the literature support the notion that risk is a domain-specific construct (Weber et al., 2002). In particular, the relationship between EI and risk behavior differed according to the risk domain studied; a negative relationship was found when studying the health domain, whilst this relationship was unclear in the financial and gambling domain. The results associated with the health domain are consistent with existing literature about the positive impact of emotional abilities on the optimal health and wellbeing of individuals (Schutte et al., 2007; Laborde et al., 2014; Fernández-Berrocal and Extremera, 2016). In situations where our health can be put at risk, EI abilities could play an important role in protecting against the tendency to engage in risk behaviors. Given the considerable impact of risk-taking on public health, a better understanding of the mechanisms underlying the relationship between EI and risk behavior could help to inform the development of intervention programmes aimed at preventing and reducing the negative effects of these behaviors on our society.
MiR-218 Inhibits CSE-Induced Apoptosis and Inflammation in BEAS-2B by Targeting BRD4
Background Chronic obstructive pulmonary disease (COPD) is an age-related disease, and its incidence rate is increasing every year. MicroRNAs (miRNAs) play critical roles in the COPD process and function as key biomarkers or potential therapeutic targets for patients with COPD. However, the potential roles and functional effects of miR-218 in COPD remain undefined. Methods The expression levels of miR-218 and bromodomain protein 4 (BRD4) were assessed by real-time quantitative polymerase chain reaction (RT-qPCR) or Western blot, respectively. In addition, a COPD cell model was established using cigarette smoke extract (CSE) in the bronchial epithelial cell line BEAS-2B. Enzyme-linked immunosorbent assay (ELISA) kits were applied to measure the concentrations of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and interleukin-8 (IL-8) in cell supernatants of BEAS-2B cells. Moreover, cell apoptosis was examined by flow cytometry assay. The interaction between miR-218 and BRD4 was confirmed by dual-luciferase reporter and RNA immunoprecipitation assays. Results MiR-218 was downregulated in COPD and CSE-induced BEAS-2B cells, and it was positively correlated with forced expiratory volume in 1 second (FEV1) % in COPD patients. Mechanistically, overexpression of miR-218 or knockdown of BRD4 mitigated apoptosis and inflammation in BEAS-2B cells induced by CSE. Additionally, overexpression of BRD4 weakened the miR-218-mediated effects on CSE-induced BEAS-2B cells. Conclusion Overexpression of miR-218 inhibited CSE-induced apoptosis and inflammation in BEAS-2B cells by targeting BRD4 expression.
Introduction
Chronic obstructive pulmonary disease (COPD) is a progressive destructive lung disease with persistent chronic inflammation, characterized by airflow limitation and severe respiratory failure. 1,2 Generally speaking, long-term exposure to cigarette smoke is identified as a major risk factor for COPD, inducing an airway epithelium inflammatory response and apoptosis in COPD. 3,4 As COPD is frequently exacerbated and often associated with severe complications, the clinical prognosis of COPD patients remains poor. 5 Not only that, the incidence of COPD is rapidly increasing in China, where a large aging population and industrial pollutants are major contributors to the high incidence rate of COPD. 6 MicroRNAs (miRNAs) mediate gene expression at the post-transcriptional level by either suppressing translation of target genes or degrading their transcripts directly via base pairing with the 3ʹ untranslated region (3ʹUTR). 7 Recent findings showed that miRNAs play an essential role in the development and progression of COPD. 8,9 For example, overexpression of miR-146-5p clearly reduced the release of interleukin-8 (IL-8); conversely, inhibition of miR-146-5p has a pro-inflammatory effect in COPD. 10 Besides, miRNAs are considered to be therapeutic targets in pulmonary disease. For instance, a previous study reported that miR-155 acted as a novel therapeutic target for asthma. 11 Moreover, Xu et al. reported that miR-218 acted as an anti-inflammatory factor by mediating activation of NF-κB, implying that miR-218 is involved in inflammation in COPD. 12 The above results confirmed that dysregulation of miRNAs is associated with the pathogenesis of COPD. However, the functional effects of miR-218 have not been thoroughly investigated in COPD.
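The base-pairing mechanism described above (a miRNA's seed region, nucleotides 2-8, matching a complementary site in a target's 3ʹUTR) can be sketched as a simple sequence check. The sequences in the example are placeholders, not the actual miR-218/BRD4 binding site:

```python
def has_seed_match(mirna, utr3):
    """Check for a canonical 7mer seed match: the reverse complement of
    miRNA nucleotides 2-8 (RNA alphabet) occurring in a 3'UTR (DNA
    alphabet). Illustrative only; real target prediction also weighs
    site context and conservation.
    """
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}  # RNA -> DNA complement
    seed = mirna[1:8]                                 # positions 2-8 (0-indexed)
    site = "".join(comp[b] for b in reversed(seed))   # reverse complement
    return site in utr3
```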
Bromodomain protein 4 (BRD4), a primary member of the BET family of proteins, is an important epigenetic regulator that binds to acetylated histones. 13 Additionally, BRD4 regulates expression of the inflammatory gene NF-κB by binding acetylated RELA, which in turn increases the transcriptional transactivation activity and stability of NF-κB in the nucleus. 14 Similarly, Huang et al. also revealed that BRD4 could activate NF-κB via specific binding to acetylated RelA. 15 Therefore, we hypothesized that the expression of BRD4 is closely associated with inflammation in COPD.
COPD is largely attributable to cigarette smoke. In the present study, cigarette smoke extract (CSE) exposure was employed to induce inflammation and apoptosis in bronchial epithelial cells, and the association between miR-218 and BRD4 was investigated in CSE-induced bronchial epithelial cells.
Clinical Samples
The lung tissue specimens of patients with COPD and control specimens from smokers/non-smokers were collected from The Second People's Hospital of Lanzhou City. The removed samples were promptly snap-frozen in liquid nitrogen and maintained at −80°C until further analyses. Written informed consent was obtained from patients prior to participation. All recruited subjects were subjected to physical examination, spirometry (a post-bronchodilator FEV1/FVC of less than 0.70 indicated the presence of persistent airflow limitation and COPD), and assessment of exacerbation risk and comorbidities. Patients were excluded if they (1) were diagnosed with other complicated disorders; (2) received preoperative treatments before admission; (3) required home mechanical ventilation. Inclusion criteria were (1) first-time diagnosis; (2) received no therapies before admission; (3) willing to participate in follow-up. All the procedures had approval from the Ethics Committee of Second People's Hospital of Lanzhou City. The use of clinical specimens was conducted in accordance with the Declaration of Helsinki. The clinicopathologic features of these patients are displayed in Table 1.
Cell Culture
Human bronchial epithelial cell line BEAS-2B was purchased from the Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). BEAS-2B cells were cultured in RPMI-1640 medium (GIBCO BRL, Grand Island, NY, USA) in a humidified atmosphere with 5% CO2 at 37°C. Fetal bovine serum (10%, v/v; GIBCO BRL) was added to the medium. Additionally, CSE was prepared from commercial cigarettes (Liqun; Zhengzhou Tobacco Company, Henan, China; 11 mg of tar and 1 mg of nicotine). One cigarette was combusted with a vacuum pump, and the smoke was passed through 5 mL of medium; the collected CSE was regarded as a 100% CSE solution.
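Working dilutions of the 100% CSE stock can be computed with the standard C1·V1 = C2·V2 relation. A small helper; the concentrations and volumes below are illustrative, not the study's exact pipetting scheme:

```python
def cse_dilution(stock_pct, target_pct, final_volume_ml):
    """Volumes of CSE stock and medium needed to reach a target CSE
    concentration, via C1*V1 = C2*V2. Returns (stock_ml, medium_ml).
    """
    stock_ml = target_pct / stock_pct * final_volume_ml
    return stock_ml, final_volume_ml - stock_ml
```

For example, 10 mL of 2.5% CSE from the 100% stock requires 0.25 mL of stock topped up with 9.75 mL of medium.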
Transfection Assay
The mimics of miR-218 and negative control (miR-218 and miR-NC), inhibitor of miR-218 and negative control (anti-miR-218 and anti-miR-NC), specific interfering RNA against BRD4 (si-BRD4) and control (si-NC), and BRD4-overexpressing vector (BRD4) and control (pcDNA) were designed and obtained from Sangon (Shanghai, China). For transfection, BEAS-2B cells were seeded into 6-well plates at a density of 1×10^5 cells/well, followed by incubation overnight. 40 nM of the aforementioned oligonucleotides or 1 μg of plasmids was transfected into BEAS-2B cells with Lipofectamine 2000 reagent (Invitrogen). Additionally, BEAS-2B cells were collected at 48 h post-transfection for subsequent experiments, and transfection efficacy was determined by RT-qPCR assay.
Enzyme-Linked Immunosorbent Assay (ELISA)
The levels of interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor-α (TNF-α) in the supernatants of BEAS-2B cells were determined by ELISA kits (Invitrogen; #BMS213HS, #BMS204-3, and #BMS223HS). In brief, the BEAS-2B cell suspension was added to a 96-well plate at a density of 5×10^3 cells/well and cultured overnight. Afterward, 50 μL of standard or medium supernatant sample was added to another 96-well plate coated with goat anti-mouse IgM and then incubated at 37°C. After washing, each well was supplemented with blocking buffer and substrate. Finally, the absorbance was read at 492 nm using a multi-well scanning spectrophotometer (Bio-Rad, Hercules, CA, USA).
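Converting absorbance readings into cytokine concentrations requires a standard curve. A minimal sketch using linear interpolation between standard points; real kit analyses often fit a 4-parameter logistic model instead, and all numbers here are made up:

```python
def concentration_from_absorbance(absorbance, standards):
    """Estimate a concentration from an ELISA absorbance by linear
    interpolation between standard-curve points.

    `standards` is a list of (absorbance, concentration) pairs sorted
    by absorbance. Illustrative only.
    """
    for (a1, c1), (a2, c2) in zip(standards, standards[1:]):
        if a1 <= absorbance <= a2:
            frac = (absorbance - a1) / (a2 - a1)
            return c1 + frac * (c2 - c1)
    raise ValueError("absorbance outside the standard curve range")
```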
Analysis of Apoptosis
Apoptosis rate of BEAS-2B cells was assessed by Annexin V-FITC Apoptosis Detection Kit (Thermo Fisher Scientific). In brief, BEAS-2B cells were harvested and then incubated in staining buffer containing Annexin V labeled with fluorescein isothiocyanate (FITC) and propidium iodide (PI) for 30 min in dark conditions. Subsequently, flow cytometer (Applied Biosystems) was used to assess apoptotic cells.
RNA Immunoprecipitation (RIP) Assay
The RNA immunoprecipitation assay was conducted to confirm whether BRD4 could interact with miR-218. Transfected BEAS-2B cells were collected and then lysed in RIP buffer (Millipore, Billerica, MA, USA). The cell lysates were incubated with anti-Ago2 antibody (Ambion) and negative control normal IgG (Ambion) for 2 h at 4°C. After centrifugation, RT-qPCR assay was performed to test RNA enrichment in precipitate complexes.
Western Blot Assay
Radioimmunoprecipitation assay buffer (RIPA; Thermo Fisher Scientific) was utilized to extract protein from cells or tissues. The proteins were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a polyvinylidene fluoride (PVDF) membrane (Invitrogen). The membrane was blocked with 5% bovine serum albumin and then incubated at 4°C with primary antibody at a 1:1000 dilution, against BRD4 (#13440S; Cell Signaling Technology, Danvers, MA, USA) or β-actin (#4970S; Cell Signaling Technology). The horseradish peroxidase-conjugated secondary antibody (#7074S; Cell Signaling Technology) was then added at a 1:5000 dilution and incubated for 2 h at room temperature. Finally, signal intensity on the membranes was visualized with a chemiluminescence system.
Statistical Analysis
All data were expressed as mean ± standard deviation and analyzed with GraphPad Prism 7 (GraphPad Inc, La Jolla, CA, USA). Differences between two groups were analyzed by Student's t-test, and differences among multiple groups were estimated by one-way analysis of variance followed by the LSD post hoc test. Pearson's correlation analysis was used to assess correlations. A P value less than 0.05 was regarded as statistically significant.
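As a rough illustration of the comparisons described above, the two-group t statistic and Pearson's r can be computed with stdlib-only Python. All numbers below are invented for demonstration and are not the study's data:

```python
import statistics as st

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def student_t(x, y):
    """Student's t statistic (pooled variance) for two independent samples."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    return (st.mean(x) - st.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

# Invented example values (arbitrary units), not study data:
control = [1.00, 0.95, 1.10, 1.05, 0.98]   # relative miR-218, controls
copd = [0.55, 0.60, 0.48, 0.52, 0.58]      # relative miR-218, patients
mir218 = [0.4, 0.5, 0.6, 0.7, 0.8]
fev1 = [45, 52, 60, 68, 75]                # FEV1 %, same hypothetical patients

print(student_t(control, copd))   # large positive t -> groups differ
print(pearson_r(mir218, fev1))    # close to +1 -> positive correlation
```

The p-values and the one-way ANOVA with LSD post hoc test reported by the paper would, in practice, come from a statistics package such as SciPy or Prism rather than a hand-rolled computation.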
MiR-218 Was Downregulated in COPD Patients
Forced expiratory volume in one second (FEV1) % is an essential index for assessing lung function in COPD patients. Our data suggested that COPD patients showed lower FEV1% than controls, including both smokers and non-smokers (Figure 1A). The results of RT-qPCR assay suggested that miR-218 was markedly decreased in COPD patients compared with controls (Figure 1B). In addition, a notable positive correlation between FEV1% and miR-218 was observed in COPD patients (Figure 1C). Interestingly, we also found that the miR-218 level was negatively correlated with serum inflammatory factor levels, including TNF-α, IL-6, and IL-8. All data implied that downregulation of miR-218 might be associated with COPD.
Overexpression of miR-218 Suppressed Apoptosis and Inflammation in BEAS-2B Cells Caused by CSE
To assess the effect of miR-218 on COPD progression, BEAS-2B cells were treated with different concentrations of CSE. The results of RT-qPCR assay displayed that the expression level of miR-218 was greatly decreased in CSE-induced BEAS-2B cells in a dose- and time-dependent manner (Figure 2A and B). Subsequently, we found that treatment with CSE led to a marked increase of TNF-α, IL-6, and IL-8 in the cell supernatant, while their levels were significantly downregulated in the CSE+miR-218 group (Figure 2C-F). Furthermore, apoptosis was assessed by flow cytometry in transfected BEAS-2B cells. The data indicated that overexpression of miR-218 prominently suppressed CSE-induced apoptosis in BEAS-2B cells (Figure 2G). These results suggested that miR-218 reduced CSE-induced apoptosis and inflammation in BEAS-2B cells.
BRD4 Was a Potential Target of miR-218
To investigate the target of miR-218 in BEAS-2B cells, a bioinformatics analysis was performed. As shown in Figure 3A, miR-218 had complementary sequences in the 3ʹUTR of BRD4. A dual-luciferase reporter assay exhibited that BEAS-2B cells co-transfected with the luciferase reporter BRD4-WT and miR-218 mimic showed lower luciferase activity than the control (Figure 3B). In addition, similar conclusions were confirmed with the RIP assay: overexpression of miR-218 led to an obvious enrichment of BRD4 in the RIP-Ago2 group compared with the control (Figure 3C). All data implied that BRD4 was a target of miR-218 in BEAS-2B cells.
BRD4 Was Upregulated in COPD Patients
The association between BRD4 and COPD was investigated. As shown in Figure 4A and B, the expression level of BRD4 was significantly increased in COPD tissues, and it was negatively correlated with miR-218 expression. Importantly, CSE induced the upregulation of BRD4 in BEAS-2B cells in a dose- and time-dependent manner, at both the mRNA and protein levels (Figure 4C-F). Moreover, Western blot results revealed that miR-218 overexpression reduced the protein level of BRD4, while the BRD4 protein level was effectively increased in BEAS-2B cells after transfection with anti-miR-218 (Figure 4G). Taken together, miR-218 negatively regulated BRD4 expression.
CSE-Induced Apoptosis and Inflammation in BEAS-2B Cells Could Be Abolished by Silencing of BRD4
To further determine the functional effect of BRD4, BEAS-2B cells were transfected with si-BRD4 to knock down the expression of BRD4 (Figure 5A). Silencing of BRD4 attenuated the CSE-induced secretion of TNF-α, IL-6, and IL-8 (Figure 5B-D). Besides, flow cytometry results confirmed that BRD4 knockdown protected BEAS-2B cells from CSE-induced apoptosis (Figure 5E). Overall, these results implied that inhibition of BRD4 abolished CSE-induced effects on BEAS-2B cells.
Upregulation of BRD4 Abrogated the Effects of miR-218 Overexpression on CSE-Induced BEAS-2B Cells
As presented in Figure 6A, co-transfection with BRD4 and miR-218 apparently rescued BRD4 expression compared with transfection with miR-218 alone in CSE-induced BEAS-2B cells. Besides, gain-of-function experiments showed that overexpression of miR-218 reduced the levels of TNF-α, IL-6, and IL-8 in cell supernatants, which was abolished by overexpression of BRD4 (Figure 6B-D). Similarly, the upregulation of miR-218 resulted in a great inhibition of apoptosis in CSE-induced BEAS-2B cells, which could be restored by overexpression of BRD4, as demonstrated by flow cytometry analysis (Figure 6E). These results revealed that miR-218 inhibited CSE-induced apoptosis and inflammation in BEAS-2B cells by targeting BRD4.
Discussion
Conclusively, miR-218 was downregulated in COPD and in CSE-induced BEAS-2B cells; besides, the levels of miR-218 were closely correlated with inflammatory factor secretion in COPD, revealing the anti-inflammatory properties of miR-218 in the development of COPD.
Cigarette smoke contributes to inflammation in the lung tissues via the release of damage-associated molecular patterns; 16 besides, cigarette smoke-induced damage-associated molecular patterns contribute to the development of COPD. 17 Additionally, Jin et al reported that lung tissues persistently exposed to cigarette smoke could increase serum concentrations of TNF-α. 18 Similarly, Gao et al also disclosed that IL-6 and IL-8 were upregulated in the lungs of COPD patients. 19 Therefore, treatment with CSE was employed to induce inflammation and apoptosis in BEAS-2B cells as a cell model for COPD. Importantly, upregulation of miR-218 suppressed apoptosis and inflammation in BEAS-2B cells caused by CSE.
It has been confirmed that miRNAs regulate mRNA expression by targeting the 3ʹUTRs of mRNAs; 20 besides, miRNAs play crucial roles in the development and pathogenesis of lung diseases. 21 Downregulation of miR-218-5p has been observed in human bronchial epithelium, 22 small airway epithelium, 23 and lung squamous cells. 24 Wang et al reported that miR-218 functioned as a tumor suppressor in lung cancer. 25 Importantly, Schembri et al also implied that miR-218-5p was associated with COPD processes. 22 Similar to previous conclusions, 12 the anti-inflammatory function of miR-218 was confirmed in our results: the overexpression of miR-218 repressed apoptosis and inflammation in CSE-induced BEAS-2B cells.
BRD4 might play a promoting role in COPD because it is implicated in the inflammatory process by increasing pro-inflammatory cytokines. 26,27 In addition, Song et al found that BRD4 is implicated in inflammation in the development of COPD. 28 Meanwhile, miRNAs might participate in COPD processes by regulating inflammatory cytokine expression through targeting BRD4, such as miR-29b. 29 Similarly, our data suggested that miR-218 inhibited CSE-induced apoptosis and inflammation in BEAS-2B cells by targeting BRD4.
Collectively, we found that BRD4 was a target of miR-218, and upregulation of BRD4 could attenuate effects of miR-218 overexpression on CSE-induced BEAS-2B cells, indicating that the miR-218/BRD4 axis might serve as a diagnostic target for COPD.
Conclusion
Collectively, our data showed that CSE could induce apoptosis and inflammation in BEAS-2B cells, which could be effectively weakened by enhancement of miR-218 or inhibition of BRD4. Mechanically, miR-218 regulated apoptosis and inflammation in CSE-induced BEAS-2B cells by targeting BRD4.
Patient Consent for Publication
Not applicable.
International Journal of Chronic Obstructive Pulmonary Disease 2020:15
The New Media Landscape and Its Effects on Skin Cancer Diagnostics, Prognostics, and Prevention: Scoping Review
Background: The wide availability of web-based sources, including social media (SM), has supported rapid, widespread dissemination of health information. This dissemination can be an asset during public health emergencies; however, it can also present challenges when the information is inaccurate or ill-informed. Of interest, many SM sources discuss cancer, specifically cutaneous melanoma and keratinocyte cancers (basal cell and squamous cell carcinoma). Objective: Through a comprehensive and scoping review of the literature, this study aims to gain an actionable perspective of the state of SM information regarding skin cancer diagnostics, prognostics, and prevention. Methods: We performed a scoping literature review to establish the relationship between SM and skin cancer. A literature search was conducted across MEDLINE, Embase, Cochrane Library, Web of Science, and Scopus from January 2000 to June 2023. The included studies discussed SM and its relationship to and effect on skin cancer. Results: Through the search, 1009 abstracts were initially identified, 188 received full-text review, and 112 met inclusion criteria. The included studies were divided into 7 groupings based on a publication’s primary objective: misinformation (n=40, 36%), prevention campaign (n=19, 17%), engagement (n=16, 14%), research (n=12, 11%), education (n=11, 10%), demographics (n=10, 9%), and patient support (n=4, 3%); these were the most commonly identified themes. Conclusions: Through this review, we gained a better understanding of the SM environment addressing skin cancer information, and insight into the best practices by which SM could be used to positively influence the health care information ecosystem.
Introduction
As of April 2023, 4.8 billion people, or 59.9% of the world's population, were identified as social media (SM) users [1]. In the age of omnipresent internet exposure, more people than ever receive and seek medical information from SM. More than 80% of US state health departments have an SM account, and SM has become a safe space for patients with cancer to discuss diagnoses and seek education [2]. Over 80% of patients with cancer reported using SM to connect with peers, and over 77% of patients with cancer cited the internet as the most important source of medical information [3]. When compared to legacy public health forums, SM and the new media landscape carry
Eligibility Criteria
The inclusion and exclusion criteria are listed in Textbox 1. Studies that were eligible for inclusion investigated the connection between skin cancer and SM. The search was conducted between January 1, 2000, and June 9, 2023, to limit the number of papers and to only include records that were relevant to this era of new communication, after the SM boom.
Data Extraction
Two authors (PLH and AJ) independently screened the titles and abstracts of each citation produced by the search strategy using the inclusion and exclusion criteria to decide which papers would progress to full-text review. Each record was reviewed twice, and, if a conflict was found, the lead investigator (KCN) would make the final decision. The full texts of all potentially eligible records were then analyzed independently by the investigators. Disagreements were resolved by reexamination and discussion. A flowchart was developed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines to demonstrate the study selection process (Multimedia Appendix 2) [9]. Author, publication year, study type, geographic location, platform investigated, principal findings, and STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) score were extracted from each included publication. A copy of the STROBE score criteria can be found in Multimedia Appendix 3 [10]. The STROBE scoring system was used to ensure this review included high-quality studies.
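The two-reviewer workflow above (independent screening, with the lead investigator breaking ties) can be modeled as a small decision rule. The function name and boolean encoding below are illustrative, not part of the review's actual tooling:

```python
def screening_decision(reviewer_a: bool, reviewer_b: bool, lead: bool) -> bool:
    """Return whether a record is included.

    Two reviewers screen independently; if they agree, their shared
    decision stands. If they conflict, the lead investigator's
    decision breaks the tie, mirroring the workflow described above.
    """
    if reviewer_a == reviewer_b:
        return reviewer_a
    return lead

# Agreement: the record is included regardless of the lead's view
print(screening_decision(True, True, False))   # True
# Conflict: the lead investigator decides
print(screening_decision(True, False, False))  # False
```

The same rule applies at both the title/abstract stage and the full-text stage, since both used dual review with third-party adjudication.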
The included publications were divided into 7 categories based on the primary evaluated aspect of the study: engagement, campaigns, demographics, research, education, patient support, and misinformation. To be included in the engagement category, a publication must discuss an attribute of interaction, participation, connection, and involvement designed to elicit a result [11]. Engagement can be understood as the likes, comments, and shares posts acquire. Campaigns include publications that describe a new media intervention designed to promote primary or secondary skin cancer prevention and its effect on the population. A publication was included in the demographics category if it discussed demographic differences in skin cancer SM advertising. The research category encompasses papers that demonstrate how SM aids in skin cancer research recruitment. A publication in the education category must discuss a way new media communication can be used for physician-to-physician or physician-to-patient skin cancer education. The patient support category includes records that demonstrate how the new communication environment lends itself to supporting patients with skin cancer. Scientific misinformation is defined as misleading information relative to the best available scientific evidence [12]. Therefore, to be included in the misinformation section, a publication must discuss false information dissemination or poor information quality regarding skin cancer across SM platforms.
Overview
We identified 1009 records through the initial search, with the removal of 556 duplicate records via Covidence (Veritas Health Innovation; Figure 1). Two investigators (PLH and AJ) independently screened the remaining studies' titles and abstracts, with 188 records receiving full-text review. After full-text review, 76 were excluded through dual reviewer evaluation. Records with contradictory decisions were sent to a third-party reviewer (KCN), who provided the deciding vote. The included studies were divided into 7 groupings based on the publication's primary objective: misinformation (n=40, 36%), prevention campaign (n=19, 17%), engagement (n=16, 14%), research (n=12, 11%), education (n=11, 10%), demographics (n=10, 9%), and patient support (n=4, 3%); these were the most commonly identified themes. The data were extracted from each record into a characteristics table (Multimedia Appendix 4 [5,).
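The category breakdown above can be sanity-checked in a few lines; the counts are the review's reported n values. Note that standard rounding gives 4% for patient support (4/112 ≈ 3.6%) where the paper reports 3%, so the paper's percentages appear to mix rounding conventions:

```python
# Reported category counts (n) for the 112 included studies
categories = {
    "misinformation": 40,
    "prevention campaign": 19,
    "engagement": 16,
    "research": 12,
    "education": 11,
    "demographics": 10,
    "patient support": 4,
}

total = sum(categories.values())
shares = {k: round(100 * v / total) for k, v in categories.items()}
print(total)                    # 112
print(shares["misinformation"]) # 36
```

The counts also reconcile with the screening numbers: 188 full-text records minus 76 exclusions leaves the 112 included studies.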
Engagement
X (previously known as Twitter) has enormous potential for public health engagement; of the 112 included papers, 16 were included in the category of engagement [13]. X is more public than Instagram or Facebook and is used more often than other SM platforms to promote scientific papers and increase interactions with scientific literature [14]. On X, the top hashtag for skin cancer is #melanoma, and the key drivers of discussion are patient-focused entities [15]. Posts using shock or humor generate the most likes or comments, and informative posts are most likely to be shared [16]. Engagement with posts about skin cancer correlates not with skin cancer incidence in a given geography, but instead with the SM literacy of the exposed users [17]. To optimize the impact of X as a tool for skin cancer engagement, more information is needed to increase message dissemination and uniformity [18].
TikTok is a rapidly growing new media platform with over 755 million users in 2022 [124]. The most popular skin cancer content on TikTok includes videos with on-screen text and health care attire, such as a white coat or scrubs [19]. Skin cancer is among the top 8 dermatological TikTok topics, with patient testimonies being the most common format, followed by educational videos and clinical demonstrations [20].
Most Instagram content addressing skin cancer originates from influencers and celebrities, not dermatologists [21]. Instagram offers a venue for patients to share their skin cancer journey (often with the #skincancerawareness hashtag [22]) and increase users' exposure to skin cancer information. Instagram posts referencing negative emotions (fear and anger), physical consequences, technical treatment information, or real skin cancer images increase audience interactivity, while positive posts have no effect on engagement [23].
This trend continues with Facebook, where the most-used technique to increase audience engagement is inducing fear [24]. Like X, Facebook posts with a humorous element increase viewer satisfaction and attention [25]. One advertising study compared Facebook user engagement with a parody video, a celebrity video, and a fact-based video regarding skin cancer and found engagement to be highest for the parody video [25]. Facebook also allows individuals to post their personal skin cancer narratives. For example, Tawny Willoughby went viral due to a graphic selfie of her significant facial inflammation during treatment with topical 5% 5-fluorouracil: the post received over 50,000 views and was correlated with a 162% increase in internet search queries about skin cancer [26].
Increased user interactivity correlates with enhanced engagement with the information. This trend is consistent across platforms but is specifically noticed in support groups and on websites. Support groups are particularly effective if they are larger and have active, web-based comment sections [27], whereas the interactivity of skin cancer websites promotes an individual's intention to use sun protection [28].
Prevention Campaigns
The category of prevention campaigns encompassed 19 of the 112 included papers. The YouTube video "Dear 16-year-old Me" is a prime example of a successful SM prevention campaign. This video uses mixed-emotion methods to address the importance of sun protection, which amplifies the impact of the message by evoking compassion to increase positive social behaviors [29,30]. After viewing the video, surveys demonstrated increased viewer intent to pursue a professional skin examination [31]. The video made a compounding impact when presented alongside lighthearted face-aging software [32].
Other YouTube skin cancer awareness campaigns include "It's a beautiful day ... for Cancer" and "Don't be a Lobster." The "It's a beautiful day ... for Cancer" video was an ironic music video that spurred conversation about sun protection behaviors: it received 250,000 views, and 44% of viewers reported changed opinions on sun protection [31]. The "Don't be a Lobster" campaign consisted of an anonymous YouTube video highlighting the replacement of the red dragon of the Welsh flag with a red lobster. This anonymity and clever placement of the red lobster image quickly gained media attention and started the viral campaign. The campaign's effectiveness was quantified by Google Trends, showing a 10% increase in "skin cancer" searches and a 300% increase in "sun cream" searches [33].
X's #dontfryday made a significant impact globally, with over 12 million impressions. The most influential posts were sent out by celebrities. One study found that while noncelebrity individuals contributed the most content for the campaign, celebrities made a monumental impact, with only 18 contributors generating 8,735,549 impressions [34,35].
As seen with #dontfryday, celebrity influence plays a huge role in enhancing the success of a prevention campaign. Actor Hugh Jackman has posted about his skin cancer experience on SM. Each time he posts, searches for "skin cancer" spike on Google [36,37]. Like Jackman, Dayanara Torres, a former Miss Universe, used her platform to discuss her diagnosis of melanoma. One dermatology clinic in New Jersey noted that after Torres' announcement, many Hispanic patients came to their clinic specifically with skin cancer screening concerns rather than their usual motivating factors [38]. Now, Torres partners with the Melanoma Research Foundation as a spokesperson for the #GetNaked awareness campaign, promoting monthly self-screenings and yearly dermatologist skin examinations [125]. In Portugal, athletes distributed skin cancer screening messages, and by the end of the study, more individuals were screened than in previous years [39].
SM can perpetuate the tanned ideology, but with targeted interventions, this risk can be mitigated. Appearance-focused interventions, or interventions that use aging, wrinkles, and sunspots in their educational material, successfully reduced Instagram users' positive associations with SM images featuring people with tanned skin [40]. Increasing SM literacy can also decrease the internalization of the tanned ideology. SM literacy is the ability of a user to evaluate and critically analyze posts, which aims to promote greater skepticism of appearance-related media [41,42]. The self-persuasion theory is another method that can predict healthy behaviors and enhance skin protection intentions: individuals who share skin protection information predictably use those same practices [43-45].
A Danish antisunbed campaign focused on decreasing tanning bed use among adolescents, generating intense public debate and increasing legislative support [46]. With the new legislation, a parent must sign off on indoor tanning if a child is younger than 18 years. Targeting educational messages to mothers is a promising approach, as mothers who are more educated about the dangers of indoor tanning and equipped to discuss those dangers are less likely to allow their children to use tanning beds [47].
Demographics
In total, 10 of the 112 papers were categorized in the demographics group. The new communication environment offers an opportunity for skin cancer prevention but primarily targets younger demographics: the success of SM skin cancer prevention campaigns decreases as participant age increases [48-50]. However, many young adults consider SM prevention messages to be uninfluential, because they are lost in the influx of other information [51,52].
One underrepresented demographic is individuals with darker-pigmented skin, as many skin cancer educational and prevention messages do not engage these populations. For example, 97% of skin cancer pins on Pinterest were of white-skinned individuals [53]. Similarly, a review demonstrated that 100% of skin cancers depicted in SM advertisements had a background of Fitzpatrick type I or II skin [54]. SM representation is critical, as a study that interviewed 27 African American individuals found SM to be a primary means by which people with darker pigmentation are exposed to public health messages related to skin cancer [55]. Participants also stated it would be important for skin cancer awareness messages on SM to feature Black communities to feel that the information is relevant to them [55].
Sexual orientation and gender identification also have a role in engagement and prevention advertising [56]. Indoor tanning motivations in sexual minority men have not been investigated; thus, targeted prevention campaigns are lacking. Compounding this, sexual minority men are specifically targeted by tanning salons through SM marketing, further encouraging deleterious tanning behaviors in this population [57].
Research Recruitment
In total, 12 of the included 112 papers were designated as research recruitment, collecting a total of 2912 patient responses [5,58-63]. By distributing surveys through SM platforms, scientists can recruit patients with rare skin cancers (such as dermatofibrosarcoma protuberans [58]) and distribute research recruitment efforts globally. Additionally, SM can be used in studies to assess patients' health-related quality of life. This concept was validated in one such study, which showed the alignment of current electronic health record data with SM data mining of symptoms that are common for patients receiving skin cancer treatment [64,65]. SM can also support data crowdsourcing to help physicians understand the patient experience and identify high-risk individuals for prevention [66,67]. New communication technology offers a unique opportunity for physicians to directly communicate with and understand their patients on a deeper level [68].
Education
Education through new media resources allows dermatologists to have a more substantial global reach in skin cancer prevention, which was the primary focus of the 11 papers included within this category. In the past, studies have shown that the presence of dermatology-related content from reputable journals on SM is limited [69-72]. Social networking sites provide an effective avenue for health care providers to communicate, share knowledge, and discuss care [73]. For example, Doximity is a platform for health professionals to freely discuss topics such as skin cancer. Dermatologists can use Doximity to share skin cancer awareness messages, prevention strategies, or scientific papers with the broader physician community. Anyone can then share information from Doximity to SM sites to reach the wider patient population [74].
Similarly, physicians share posts during the American Society of Clinical Oncology meeting. From 2011 to 2012, "melanoma" was a trending term at the American Society of Clinical Oncology conference, and attending physicians dispersed the latest scientific research over X [75]. Physicians can also connect with patients and teach proper skin self-examination through SM [76]. One study noted that 79% of patients had increased confidence in performing skin self-examination after watching eHealth YouTube videos, which proved superior to classic methods such as informational brochures [77].
Education strategies using beauty technicians can also serve as an intervention tactic for skin cancer. For example, the Pele Alerta Project built a website to assist beauty professionals in the early detection of skin cancers [78]; in addition, tattoo artists were targeted to provide skin protection information in their aftercare instructions [79]. Each educational opportunity gives patients a greater chance of catching their skin cancer early.
Patient Support
In total, 4 of the 112 included papers discussed social media and its use in patient support. Patients often use SM to share their firsthand experiences, such as skin cancer excision procedures, to help provide realistic expectations for other patients [80]. They also use SM to discuss the effects of skin cancer on their quality of life. Mental health struggles and uncertainty were the 2 most common themes in forums for patients with skin cancer [81], and emotional burden, treatment, and diagnosis were common conversation topics throughout these support groups [82]. Over 52% of melanoma Facebook groups are used to support patients [83].
Misinformation
Finally, the majority of included records discussed misinformation, with 40 of 112 papers belonging to this category. Participants in one study viewed a misinformation video and afterward had less intention to wear sunscreen, demonstrating the detrimental effect of misinformation. Comments posted correcting the misinformation in the video produced no significant improvement in attitudes regarding sunscreen use [84].
Many misinformation studies verify a positive correlation between SM use and indoor tanning behaviors [85][86][87].Not only does SM propagate skin tone dissatisfaction, but it also has provided a place of advertisement for tanning salons.Indoor tanning businesses propagate misleading information to increase their customer base, such as "indoor tanning is a safe way to get vitamin D" [88,89].Companies have used "#paleshaming" to bring adolescents to their salons by damaging their self-esteem and motivating their engagement in tanning behaviors [90].Not only do tanning salons use SM for business promotion, but also tanning, in general, is glorified across new media [91].A review of tanning hashtags was conducted for TikTok, Pinterest, YouTube, and X, where 90%, 85%, 68%, and 68.9% of tanning content was positive, respectively [92][93][94][95].Further research showed that, over a 2-week period, only 2.56% of 154,496 tanning posts on X mentioned skin cancer as a risk [96].In summary, SM propagates indoor tanning behaviors by adding to skin tone dissatisfaction, advertising for tanning salons, and broadcasting a positive attitude toward tanning and sunburn.
While there has been a positive progression in educational content on YouTube from 2014 to 2018 [108,109], misinformation and low-quality information still plague the viewing streams.For instance, YouTube creators grossly overestimate the relationship between COVID-19 and vitamin D, encouraging tanning behaviors during the pandemic [110].
Similarly, multiple studies found blatant misinformation from many YouTube videos regarding alternative therapies, especially concerning "black salve" as a "100% cure for skin cancer" [111,112].The largest issue is there is no correlation between the quality of content and the amount of engagement that content receives [113].Even if dermatologists developed high-quality educational videos, users may still engage with lower-quality, inaccurate videos, as YouTube offers no verification or credentialing functionality.
Like YouTube, many reviewers found a trend of misinformation, high variability, and low readability on websites. The readability scores of sampled skin cancer websites averaged at the high school level, whereas the recommended readability score for medical information is at the seventh-grade level [114,115].
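The grade-level scores cited above typically come from readability formulas such as Flesch-Kincaid. A minimal sketch of that formula follows, using a crude vowel-group syllable heuristic; real readability tools use dictionary-based syllabification, so exact scores will differ:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups; drop a silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith("le"):
        n -= 1
    return max(n, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

simple = "See the doctor. Check your skin. Use sunscreen."
dense = ("Dermatological malignancies necessitate comprehensive multidisciplinary "
         "surveillance incorporating longitudinal epidemiological methodologies.")
# The denser text scores a much higher grade level.
print(round(fk_grade(simple), 1), round(fk_grade(dense), 1))
```

Short sentences with few syllables per word land near the recommended seventh-grade target, while jargon-heavy prose scores far above it.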
Misinformation is found across all SM platforms. A review of skin cancer records across Facebook, X, and Pinterest found that 44.7% of records were imprecise and 20% were confusing [116]. The #Stop5G campaign that went viral on X and Facebook broadcasted inaccurate health information, stating that 5G phones were causing skin cancer [117]. Longitudinal melanonychia also went viral on TikTok in 2022. Of the 100 videos examined, only 30% of TikTok postings regarding longitudinal melanonychia encouraged patients to see their physician, and the information was of poor quality, as reflected in an average DISCERN score of 1.58/5 [118]. Pinterest portrays a low general risk of skin cancer to its users, recommends alternative medicines twice as often as traditional biomedical treatments, and spreads false sunscreen information [119].
Antisunscreen campaigns have become more popular, specifically targeting parents and encouraging homemade sunscreen that is ineffective in protecting the skin [120,121].
Even skin cancer screening examinations, a well-established early detection intervention, are impacted by misinformation: 25% of screening posts on Pinterest were negative, expressing doubts regarding the merit of skin examinations [122]. Facebook support groups may also be poor sources of cancer care information: in one examination of Facebook skin cancer support group comments, 35% of posts had comments that offered medical advice, of which 87% did not align with guideline-concordant care [123].
Principal Findings
This review has addressed SM's positive and negative effects on skin cancer. SM drives most persons' day-to-day communication and can be a powerful tool for health care leaders to communicate important cancer control information. However, communication via SM also introduces the risk of disseminating misinformation. A critical knowledge gap regarding methods to reduce health misinformation within SM has developed. Studies indicate how increasing interactivity and emotions can increase engagement and success of cancer prevention campaigns. Platforms have the potential to disseminate and gather information quickly and to target patients of many demographics. This review identifies the best practices of SM regarding skin cancer and the drawbacks of the ever-changing information environment to help public health figures use SM in the most productive ways and curb the harmful effects of digital media.
Best Practices
Table 1 is a culmination of the most effective and engaging ways for health officials to use SM to discuss skin cancer. New communication strategies have so much potential and, if used properly, could increase awareness of skin cancer. Many of the studies included in this review attempted to understand the most engaging ways for physicians and researchers to use SM for public health purposes. The most effective strategies use interactivity, emotion, and promotion from a public influencer. Through the education of patients, providers, and other technicians, the opportunity for skin cancer to be caught early and in turn treated easily will increase. Physicians can also use SM to educate themselves on the popular complaints of skin cancer treatments and to understand their patients' questions and concerns. SM opens a new line of communication that will revolutionize the patient-physician relationship. The affordable nature of the platforms along with the ease of information spread would allow physicians or researchers to easily educate individuals on the best ways to protect themselves from skin cancer and to protect patients from other misinformation across new communication platforms. If public health officials apply these best practices on SM, they can encourage skin health and publicize prevention methods.
Drawbacks
Limited statistical data regarding user demographics on SM make developing targeted interventions and drawing clear conclusions from SM data mining impractical [126,127].
SM research demographics do not accurately represent the entire patient population with skin cancer. This prevents researchers from applying SM trends to the general population with skin cancer, specifically regarding gender or higher education distribution (Table 2) [66]. The educational value of prevention campaigns remains in question. When health care leaders or influencers abuse campaign power, it can reduce the public health campaign's credibility and effectiveness. While some campaigns have proven effective, there are significant demographic discrepancies in whom they reach. These campaigns display a bias toward White individuals, and they cannot significantly reach older individuals or young adults due to ineffective communication methods or minimally engaging content. Campaigns require modification as SM changes to remain relevant and reach all demographics.
The current landscape of skin cancer SM content is poor, and dermatologists' presence is lacking across platforms. After observing the quality of health care content available to patients, SM cannot be considered a reliable source and should remain unsanctioned by physicians.
Medical misinformation research has demonstrated that the presence of misinformation has increased with new technology. Medical misinformation was extensively studied following the COVID-19 pandemic, and it was found that patients' trust in misinformation increased as their opinion on public health and medical institutions became more negative [128]. This mistrust may come from the growing influence of misinformation, which may lead patients to resist corrections coming from accredited sources [129]. The challenges seen through this scoping review have mirrored other research findings, showing that web-based platforms pose a challenge due to the ease of distribution of medical misinformation. Furthermore, SM provides a platform for users to share information without consequence or peer review and under the protection of freedom of speech. One pilot study discovered that practitioners encountered misinformation regularly across all specialties. Specifically, they found that 92% of the surveyed dermatologists had encountered medical misinformation presented by their patients [130].
While it is accepted that misinformation is generating obstacles for practitioners, the solution is still heavily debated. To combat misinformation, practitioners must have knowledge of what is being spread to provide their patients with high-quality, evidence-based resources. Through our scoping review of the current SM research environment, we may provide clinicians with an actionable understanding of the current state of SM information. In conjunction, SM platforms and new media technology can adapt content algorithms to modify patterns of misinformation exposure. These platforms could additionally develop technologies that allow users to flag problematic content for other SM users [128].
Future Research and Interventions
Future research is needed to understand the quality of skin cancer content and to develop, implement, and evaluate new prevention campaigns on SM platforms such as TikTok. The current lack of research on TikTok is alarming, considering the frequency of its use among younger patients. SM requires effective and efficient physician engagement methods to reduce misinformation and promote accurate skin cancer content. Increasing dermatologist engagement could ensure high-quality information and establish credible sources for users. As seen through the studies discussing research recruitment, SM data mining offers enormous opportunities to understand the skin cancer landscape on SM. Future studies using data mining related to skin cancer are needed to understand the scope of skin cancer information across new media. This review identified specific populations who could benefit from SM interventions, specifically individuals with low SM literacy and populations commonly disregarded by prevention campaigns. Increasing SM literacy is one of the most influential methods to ensure users properly digest information and are protected from misinformation. In the past, campaigns and advertisements regarding sun protection have underemphasized people of darker complexion. SM provides an easy, affordable campaign platform to target all audiences. The Dayanara effect [38] and Admassu's use of Grindr to target sexual minority men [56] demonstrate the viability of targeting specific audiences through SM. Both campaigns raised awareness of skin cancer in communities demographically underrepresented by prevention campaigns. It is essential to diversify our intervention strategies to educate all people who could be diagnosed with skin cancer.
Limitations
As with all literature reviews, ours is reliant on the quality of the previously published data. Other limitations include word choice and database selection, which inadvertently exclude relevant publications. A language bias may be present, as we excluded all papers for which an English full text could not be identified. Interpretation of data, either our own or that of the original author, potentially risks data misinterpretation. The amount of quantitative data available on this topic was limited, and each study's variables differed. In addition, much of the research currently involving SM's effects on skin cancer is contradictory. Some studies conclude that SM has immense potential for prevention, while others argue that it is a source of misinformation. This contradiction was often due to study design or sampling bias by the original authors.
Conclusions
New communication technology represents both an opportunity to improve public health practices and an obstacle for practitioners to overcome. The full potential of SM has yet to be reached, and health care leaders can make these platforms educational and productive regarding skin cancer prevention. Every day, users are at risk of exposure to misinformation, which can decrease their trust in evidence-based medicine and increase their intentions to engage in harmful skin behaviors. This review uncovered the importance of collaboration between the health care and SM industries to develop techniques to decrease the spread of misinformation. As SM becomes ubiquitous in society, developing quality strategies that break through and reach target populations becomes essential. Establishing a symbiotic relationship between public health officials and SM communication enables new communication technologies to be used as an accurate source of skin cancer information and could prevent harmful behaviors.
Textbox 1. Inclusion and exclusion criteria.
• Artificial intelligence technology rather than social media
• Teledermatology rather than social media
• Not dermatologic information
• No skin cancer information
• No social media information
Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram for study inclusion.
Table 1. Best practices demonstrating the best ways to increase audience engagement and the educational benefits of social media.
Table 2. A collection of the studies that used SM to recruit participants, broken down by demographics.
a Not available.
Chlorination of Pu and U Metal Using GaCl3
The oxidative chlorination of the plutonium metal was achieved through a reaction with gallium(III) chloride (GaCl3). In DME (DME = 1,2-dimethoxyethane) as the solvent, substoichiometric (2.8 equiv) amounts of GaCl3 were added, which consumed roughly 60% of the plutonium metal over the course of 10 days. The salt species [PuCl2(dme)3][GaCl4] was isolated as pale-purple crystals, and both solid-state and solution UV–vis–NIR spectroscopies were consistent with the formation of a trivalent plutonium complex. The analogous reaction was performed with uranium metal, generating a dicationic trivalent uranium complex crystallized as the [UCl(dme)3][GaCl4]2 salt. The extraction of [UCl(dme)3][GaCl4]2 in DME at 70 °C followed by crystallization produced [{U(dme)3}2(μ-Cl3)][GaCl4]3, a product arising from the loss of GaCl3. This method of halogenation worked on a small scale for plutonium and uranium, providing a route to cationic Pu3+ and dicationic U3+ complexes using GaCl3 in DME.
Many recent advances in molecular actinide (An) reactivity studies can be attributed to growth in the availability of nonaqueous, organic-soluble halide starting materials. For plutonium, anhydrous PuCl3 species are commonly isolated from acidic stock solutions or via chlorination of the metal using Cl2 gas [1−13]. Plutonium iodide materials are typically generated from I2 addition to the metal, after which supporting solvent ligands can be exchanged [1]. Recent studies demonstrated the isolation of PuI3 through I2 loss arising from the inherent instability of PuI4 [14]. Generating anhydrous Pu starting materials typically requires technically rigorous drying methods or the use of strongly oxidizing halogenating reagents. Furthermore, simple PuCl3 is insoluble and difficult to use with ensuing chemistry, and further development of soluble forms of Pu−Cl bonds is advantageous for subsequent experiments. Alternative methods may facilitate new access routes and encourage further investigations of Pu chemistry.
Here, we investigate gallium trichloride (GaCl3) as a chlorine source for the oxidation of plutonium and uranium metal to generate coordination complexes in the 3+ oxidation state. The conjugate reductant of GaCl3, gallium dichloride (Ga2Cl4), is a stable complex, suggesting that GaCl3 has the potential to serve as a mild oxidant and halide source for the formation of UCl3 and PuCl3 from the respective metal via chlorine transfer, generating Ga2Cl4 as a side product. The proposed thermodynamics are favorable, given the simplified equations:

U + 3 GaCl3 → UCl3 + 3/2 Ga2Cl4

Pu + 3 GaCl3 → PuCl3 + 3/2 Ga2Cl4

These negative heats of formation bolster the hypothesis that chlorine transfer from GaCl3 to plutonium or uranium metal will proceed smoothly [15]. This work explores the products arising from halogen transfer to plutonium and uranium metal using GaCl3 as the oxidizing chlorine transfer agent in DME (DME = 1,2-dimethoxyethane). It is well established that GaCl3 is found as [GaCl2(dme)2][GaCl4] in DME [16]. The reactivity described herein provides an anhydrous synthetic route to a Pu3+ complex under mild reaction conditions, generating a cationic coordination complex. The same reactivity with uranium generated a U3+ dicationic complex.
The reaction with plutonium metal was used to generate a Pu3+ halide species under anhydrous conditions without the use of I2 or Br2. The 0.025 g piece of metal used was cut from a larger piece and was a low-surface-area block, resulting in slow reaction kinetics (Figure S1). DME was chosen as the reaction solvent based on its successful prior use in the synthesis of actinide starting materials [17,18]. The reaction was performed over the course of 10 days at room temperature in the presence of substoichiometric quantities of GaCl3 (2.8 equiv) to prevent over-oxidation. The solution turned from colorless to purple, and a darker-purple/gray precipitate formed. The unreacted plutonium metal was then separated and weighed, confirming the consumption of approximately 60% of the starting metal.−22 [PuCl2(dme)3][GaCl4] displayed poor solubility in DME once crystallized, and only approximate concentrations for solution UV−vis−NIR were obtained.
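Assuming the metal is Pu-239 (the paper does not state the isotope, so the molar mass below is an assumption), a quick stoichiometric sketch shows why the ~60% consumption reflects slow kinetics rather than an insufficient GaCl3 charge:

```python
# Illustrative stoichiometry check for Pu + 3 GaCl3 -> PuCl3 + 3/2 Ga2Cl4,
# assuming Pu-239 (molar mass ~239 g/mol) -- an assumption, as the
# paper does not state the isotope.
M_PU = 239.05          # g/mol, Pu-239 (assumed)
mass_pu = 0.025        # g of metal used
equiv_gacl3 = 2.8      # substoichiometric GaCl3 charged

mol_pu = mass_pu / M_PU
mol_gacl3 = equiv_gacl3 * mol_pu

# 3 Cl are transferred per Pu, so 2.8 equiv caps conversion at 2.8/3 ~ 93%;
# the observed ~60% consumption is therefore limited by the low-surface-area
# metal block (kinetics), not by the GaCl3 charge.
max_conversion = equiv_gacl3 / 3.0
print(f"{mol_pu * 1e6:.1f} umol Pu, {mol_gacl3 * 1e6:.1f} umol GaCl3")
print(f"max conversion allowed by GaCl3: {max_conversion:.0%}")
```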
Single-crystal X-ray diffraction (SC-XRD) of the pale-purple crystals confirmed the structure as the eight-coordinate plutonium complex [PuCl2(dme)3][GaCl4], with three DME molecules arranged equatorially and two axial chloride ligands, best described as a trigonal dodecahedron (Figure 2). The [GaCl4]− counterion is noncoordinating, with no close-contact interactions. The general structural motif of [PuX2]+ (X = Cl or I) has been previously reported [1]. In the case of X = Cl, a mixed-valent salt [PuCl2(thf)5][PuCl5(thf)] (thf = tetrahydrofuran) was isolated, where the cationic plutonium species bears a similar structure with trans-chloride ligands and equatorially coordinated solvent molecules [1]. For X = I, the cationic plutonium species [PuI2(thf)4(py)]+ (py = pyridine) was isolated with an [I3]− counterion [1], with the equatorial positions of the plutonium center occupied by four THF molecules and one pyridine molecule. The most notable geometrical differences between the previously reported structures can be found in a more linear X−Pu−X bond angle (176.36(8)°, X = Cl; 179.79(2)°, X = I) [1] when compared to the 161.15(1)° found in [PuCl2(dme)3][GaCl4] (Table S3). The difference between the X−Pu−X angles in these complexes likely arises from the increased equatorial coordination number.−28 Given the cationic nature of the Pu3+ species [PuCl2(dme)3]+, the uranium reaction was expected to show attenuated oxidation reactivity if its product is also cationic. The analogous reaction with uranium metal was performed over the course of 3 days at room temperature using thin, cleaned turnings of uranium metal. Within an hour, the solution turned from colorless to pale pink, with a darker, bright-pink color eventually forming along with a dark-green precipitate. Measuring the remaining metal was difficult, as the metal was found as an amorphous gray material following the reaction. The isolation of dark-green crystalline blocks was made challenging by the presence and coprecipitation of [GaCl2(dme)2][GaCl4]. Careful layering or vapor diffusion of small quantities of n-hexane onto the pink solution generated large blocks, which were washed and isolated in low yields. In this way, gallium salts can be removed from the product.−31 SC-XRD experiments, combined with elemental analysis, led to the assignment of the green crystalline material as [UCl(dme)3][GaCl4]2 (Figure 3). Initial ambiguity in assignment from SC-XRD experiments arose from the coordinated DME molecules, which were marked by the strong disorder of DME freely rotating about the uranium center. The resulting 3+ oxidation state is atypical for oxidative chlorination reactions with uranium, which often result in U4+ complexes, attributed to the stability imparted by the Cl− ligand on the U4+ center. We hypothesize that the presence of GaCl3 and the subsequent formation of cationic uranium species during the synthesis remove electron density from the uranium center, stabilizing the U3+ product. This dicationic U3+ species represents a unique coordination mode in nonaqueous uranium chemistry.
Attempted recrystallization of [UCl(dme)3][GaCl4]2 for purification purposes led to the formation of a secondary product. Dissolution of [UCl(dme)3][GaCl4]2 in DME was performed with difficulty, with approximately 0.020 g soluble in 5 mL of DME at 70 °C. Extended heating times resulted in the crystal formation of several different uranium complexes. The different crystal morphologies include trace quantities of red blocks comprising a complex with diglyme bound to the uranium (Figure S14), pale-green plates of UCl4(dme)3 (Figure S15), and the predominant crystal morphology of thin, green overlapping plates.
These plates were found to be [{U(dme)3}2(μ-Cl3)][GaCl4]3, as determined by SC-XRD (Figure S17). Reducing the heating time to 10 min allowed for the isolation of crystals whose SC-XRD experiments led to the assignment of the dimeric complex. However, strong disorder and twinning, coupled with weak diffraction data, limited the SC-XRD data. The obtained connectivity structure proved to be valuable for determining the chemical composition of this complex (Figure S17). The asymmetric unit cell of [{U(dme)3}2(μ-Cl3)][GaCl4]3 comprises two full and two half molecules of [{U(dme)3}2(μ-Cl3)] and nine [GaCl4] units. Several examples of related bridged diuranium U2(μ-Cl3) compounds have been reported and are typically found in the 4+ oxidation state [32−35], including those of [AlCl4] salts [36,37]. Fewer U3+ species are reported with three bridging chloride ligands [38].

This work describes a new, mild method of plutonium and uranium metal oxidation using GaCl3. When plutonium metal is employed, the monocationic species [PuCl2(dme)3]+ is formed. The analogous uranium reactions led to the isolation of a dicationic species, [UCl(dme)3]2+. The divergent products isolated from the plutonium and uranium reactions can be attributed to the difference in Lewis acidity between the two metals. U3+ is a softer ion and is tolerant of the abstraction of two of the three chloride ligands. Pu3+ is a stronger Lewis acid and outcompetes the excess GaCl3 for an additional Cl− ligand, so the product is observed as the monocation. [UCl(dme)3][GaCl4]2 exhibited limited stability as a monochloride; simple dissolution of [UCl(dme)3][GaCl4]2 in DME resulted in the generation of [{U(dme)3}2(μ-Cl3)][GaCl4]3, and heating for extended periods led to a mixture of products. Further investigation will focus on methods for gallium removal for the purpose of generating PuCl3/UCl3 starting materials; preliminary results employing pyridine are encouraging for the formation of both plutonium and uranium trihalide complexes. Alternatively, heating [UCl(dme)3][GaCl4]2 demonstrates potential as a method of generating UCl4(dme)3. These results will be communicated in subsequent reports. The reactivity, structural motifs, and potential use as synthetic precursors presented for plutonium and uranium encourage long-term investigations using GaCl3 as an oxidant for f-block metals.
Following decanting of the mother liquor, a DME extraction of the pale-purple material was performed at 40 °C. A pentane diffusion into the mother liquor and extracted DME solutions at −35 °C gave two batches of purple crystals of [PuCl2(dme)3][GaCl4], although the batch from the mother liquor was contaminated with crystals of [GaCl2(dme)2][GaCl4].
[1,2,4] Triazolo [3,4-a]isoquinoline chalcone derivative exhibits anticancer activity via induction of oxidative stress, DNA damage, and apoptosis in Ehrlich solid carcinoma-bearing mice
Despite the advances made in cancer therapeutics, their adverse effects remain a major concern, putting safer therapeutic options in high demand. Since chalcones, a group of flavonoids and isoflavonoids, act as promising anticancer agents, we aimed to evaluate the in vivo anticancer activity of a synthetic isoquinoline chalcone (CHE) in a mice model with Ehrlich solid carcinoma. Our in vivo pilot experiments revealed that the maximum tolerated body weight-adjusted CHE dose was 428 mg/kg. Female BALB/c mice were inoculated with Ehrlich ascites carcinoma cells and randomly assigned to three different CHE doses administered intraperitoneally (IP; 107, 214, and 321 mg/kg) twice a week for two consecutive weeks. A group injected with doxorubicin (DOX; 4 mg/kg IP) was used as a positive control. We found that in CHE-treated groups: (1) tumor weight was significantly decreased; (2) the total antioxidant concentration was substantially depleted in tumor tissues, resulting in elevated oxidative stress and DNA damage evidenced through DNA fragmentation and comet assays; (3) pro-apoptotic genes p53 and Bax, assessed via qPCR, were significantly upregulated. Interestingly, CHE treatment reduced immunohistochemical staining of the proliferative marker ki67, whereas BAX was increased. Notably, histopathological examination indicated that unlike DOX, CHE treatment had minimal toxicity on the liver and kidney. In conclusion, CHE exerts antitumor activity via induction of oxidative stress and DNA damage that lead to apoptosis, making CHE a promising candidate for solid tumor therapy. Supplementary Information The online version contains supplementary material available at 10.1007/s00210-022-02269-5.
Introduction
Cancer is a devastating disease and, globally, the second leading cause of death (Siegel et al. 2019). In 2020, 19.3 million patients were newly diagnosed with cancer, and approximately 10 million deaths were cancer related (Sung et al. 2021). For several decades, chemotherapy has proven to be highly successful in improving the lives of patients with cancer and in eradicating many forms of tumors (Palumbo et al. 2013). Despite the increased effectiveness and endurance of current therapies, multidrug resistance and the adverse effects of the long-term use of anticancer chemotherapy remain major challenges (Schirrmacher 2019; Hussain et al. 2019). Therefore, there is an urgent need for the development of effective anticancer drugs whose toxicity to normal tissues, as well as acute and long-term side effects, is minimized.
The chalcone scaffolds are flavonoid and isoflavonoid precursors, and they are ubiquitous in natural products such as citrus fruits, vegetables, and spices (Sahu et al. 2012; Zakaryan et al. 2017). Chalcones, both natural and synthetic analogs, have anticancer, anti-inflammatory, and antimutagenic activities. They have the potential to target molecules that are implicated in the beginning and progression of cancer (Jandial et al. 2014). Synthetic chalcone analogs display various biological activities influenced by the functional groups of the chalcone derivative. As previously mentioned, methoxy alterations, depending on their position on the aryl rings (A and B), seem to affect the anticancer activity of chalcones. Chalcones are found as two isomers: trans (E) and cis (Z). The E isomer is the more stable, and thus the more prevalent, structure among the chalcones (Evranos Aksöz and Ertan 2011). Trimethoxy chalcone exerts anticancer activity in different human cancer cell lines, namely ACHN, Panc 1, Calu 1, H460, and HCT116. Another study by Srinivasan et al. has reported that (E) trimethoxy phenyl chalcone suppresses NF-κB activation in A549 lung cancer cells (Srinivasan et al. 2009). Hence, methoxylated chalcone provides an attractive scaffold for studying anticancer effects. The quinazolinone chalcone derivative (QC) demonstrated antitumor activity both in vitro and in vivo. It stopped cancer cell lines, including PC-3, Panc-1, Mia-Paca-2, A549, MCF-7, and HCT-116, from proliferating. QC caused apoptosis in HCT-116 cells, as shown by the production of apoptotic bodies, an increased G0 cell fraction, loss of mitochondrial membrane potential (Δψm), a decreased Bcl-2/BAX ratio, and the activation of caspase-9, caspase-3, and PARP-1 (poly(ADP-ribose) polymerase) cleavage. Additionally, QC inhibited both Ehrlich ascites carcinoma (EAC) and Ehrlich solid carcinoma (ESC). QC was determined to be nontoxic, since no animals died due to the effect of QC therapy (Wani et al. 2016).
Lophirones B and C, dimeric chalcones extracted from the stem bark of Lophira alata, have anticancer, antimutagenic, and antioxidant properties. Particularly, Lophirone C has the best anticancer, antimutagenic, and antioxidant properties against EAC cells (Ajiboye et al. 2014).
ESC, an aggressive and fast-growing carcinoma, is one of the in vivo experimental models used to investigate prospective anticancer therapies. EAC first appeared spontaneously in a female mouse as a breast adenocarcinoma. Ehrlich cancer cells can develop in both ascites and solid forms, whether implanted intraperitoneally (IP) or subcutaneously (Vendramini-Costa et al. 2010). Several earlier studies have employed Ehrlich solid carcinoma as a model for anticancer drugs (El-Shorbagy et al. 2019; Elbialy and Mohamed 2020; Monem et al. 2020; Sharawi 2020; Barhoi et al. 2021).
CHE showed promising anticancer effects against cancer cell lines with different metastatic potentials, including MCF7, A549, HEPG2, and HCT116. Importantly, CHE showed no cytotoxic effects on the normal melanocyte HFB4 cell line. Gene expression analysis showed that CHE upregulated the BAX, p53, and caspase-3 genes and downregulated BCL2, MMP1, and CDK4. Also, flow cytometry analysis demonstrated that CHE induced cell growth arrest at the G1 phase, inhibiting cell cycle progression at the G1/S transition. However, because these promising anticancer effects of CHE were all in vitro, we designed this study to explore the in vivo anticancer activity of CHE. Using different doses of CHE on ESC-bearing mice, we explore the molecular mechanism(s) underlying the effects of CHE and compare them with those of the widely used chemotherapeutic doxorubicin (DOX).
Animals
A total of 54 female BALB/c adult mice (6-8 weeks of age, 25 ± 3 g) were obtained from the animal facility of the National Cancer Institute at Cairo University (Giza, Egypt). Upon arrival, the mice were placed in plastic cages with sawdust bedding (five randomly selected mice per cage). Mice were given 1 week to acclimate in a normal laboratory environment (temperature in the 22-25 °C range, humidity, and a 12 h light/dark cycle) and had unrestricted access to a standard laboratory diet and water. All procedures used in the experiments were fully compliant with international standards for the care and management of laboratory animals. The experimental animal protocol (CU/I/F/54/19) has been approved by the Cairo University Institutional Animal Care and Use Committee (CU-IACUC).
Determination of the maximum tolerated dose of CHE
Twenty-four healthy female mice, weighing 25 ± 3 g, were used to determine the maximum tolerated dose (MTD) and median lethal dose (LD50) according to guideline no. 425 for the testing of chemicals of the Organisation for Economic Co-operation and Development (OECD 2008). Four groups of six mice were housed and allowed to acclimatize for 7 days before being injected with a single IP dose of CHE. The body weight-adjusted doses were 2000 mg/kg (group 1), 1000 mg/kg (group 2), 550 mg/kg (group 3), and 450 mg/kg (group 4). Mice were monitored for mortality, weight loss, activity, and changes in urine and stool rates for 48 h. A dose-response curve was generated by plotting % mortality versus dosage, and the correlation between the two was assessed via regression analysis.
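The regression analysis described above can be sketched as an ordinary least-squares fit of % mortality on dose, solved for the dose at 50% mortality. The mortality values below are hypothetical placeholders for illustration, not the study's data:

```python
# Minimal LD50 estimation sketch: least-squares line through (dose, %mortality)
# points, then solve for the dose producing 50% mortality.
doses = [450.0, 550.0, 1000.0, 2000.0]   # mg/kg groups from the text
mortality = [0.0, 33.3, 66.7, 100.0]     # % deaths per group (hypothetical)

n = len(doses)
mean_x = sum(doses) / n
mean_y = sum(mortality) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, mortality))
         / sum((x - mean_x) ** 2 for x in doses))
intercept = mean_y - slope * mean_x

ld50 = (50.0 - intercept) / slope        # dose giving 50% mortality
print(f"fitted: mortality = {slope:.4f} * dose + {intercept:.1f}")
print(f"estimated LD50 ~ {ld50:.0f} mg/kg")
```

In practice, toxicologists usually fit mortality against log-dose (probit or logit analysis) rather than raw dose, but the solve-for-50% step is the same.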
Induction of solid tumor in female mice and experimental design
EAC-bearing mice were sourced by the animal facility of the National Cancer Institute at Cairo University (Giza, Egypt). Mice were randomly divided into six groups of five mice. Each animal was intramuscularly implanted (in the thigh of the left hind limb) with a 200 µL tumor cell suspension in PBS containing approximately 2 × 10⁶ cells. After 8 days of inoculation, when the tumor was palpable, mice were treated IP with CHE twice per week for 2 weeks. CHE was dissolved in a mixture of 5% DMSO, 5% Tween 80, and 90% H2O. DOX (4 mg/kg), used as a reference drug, was IP injected twice a week for 2 weeks (Quwaydir et al. 2019).
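As a quick arithmetic check, the 200 µL inoculum of 2 × 10⁶ cells implies the concentration at which the PBS suspension must be prepared:

```python
# Suspension concentration implied by the inoculum described above.
cells_per_mouse = 2e6
volume_ml = 200 / 1000          # 200 uL expressed in mL

cells_per_ml = cells_per_mouse / volume_ml
print(f"{cells_per_ml:.0e} cells/mL")  # 1e+07
```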
The groups were assigned as follows: group I, the negative control mice IP injected with a vehicle; group II, Ehrlich solid tumor-bearing mice subjected to IP injection of the vehicle; group III, Ehrlich solid tumor-bearing mice treated with 4 mg/kg DOX as a positive control group; group IV, Ehrlich solid tumor-bearing mice treated with 107 mg/kg CHE; group V, Ehrlich solid tumor-bearing mice treated with 214 mg/kg CHE; group VI, Ehrlich solid tumor-bearing mice treated with 321 mg/kg CHE. At the end of the experiment, we measured tumor weight (g) and relative tumor volume (RTV), defined by the formula RTV = Vf/Vi, where Vf denotes final tumor volume and Vi denotes initial tumor volume. We also measured tumor growth inhibition according to the formula TGI = 100 − (T/C × 100), where T and C represent the RTV of the treated and control groups, respectively.
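The RTV and TGI formulas above can be expressed directly as functions; the tumor volumes in the example are hypothetical, for illustration only:

```python
def relative_tumor_volume(v_final: float, v_initial: float) -> float:
    """RTV = Vf / Vi, as defined in the text."""
    return v_final / v_initial

def tumor_growth_inhibition(rtv_treated: float, rtv_control: float) -> float:
    """TGI = 100 - (T/C * 100), where T and C are treated/control RTVs."""
    return 100.0 - (rtv_treated / rtv_control) * 100.0

# Hypothetical example: control tumors grew 8-fold, treated tumors 3-fold.
rtv_c = relative_tumor_volume(4.0, 0.5)   # 8.0
rtv_t = relative_tumor_volume(1.5, 0.5)   # 3.0
print(tumor_growth_inhibition(rtv_t, rtv_c))  # 62.5 (% inhibition)
```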
Sample collection
After 2 weeks of treatment, mice were euthanized by cervical dislocation under anesthesia. For histopathological examination, a part of tumor tissues, liver, and kidney was fixed in 10% neutral-buffered formalin. For molecular studies, another part of tumor tissues was preserved in RNAlater and stored at − 80 °C. The remaining tissues were snap-frozen in liquid nitrogen and preserved at − 80 °C.
Histopathological examination
The formalin-fixed part of the tissue was dehydrated by passing it through an ascending series of ethyl alcohol. After that, the alcohol was removed from the tissue using xylene, and the tissue was embedded in paraffin. Serial tissue sections of 5 μm thickness were stained with hematoxylin and eosin and examined under the microscope by an expert pathologist in a blind protocol.
Assessment of antioxidant capacity in tumor tissue
The total antioxidant capacity (TAC) of tumor tissue was determined using the Biodiagnostic test kit. Tumor tissue was homogenized in a cold potassium phosphate buffer (pH 7.4) composed of 5 mM potassium phosphate, 0.9% sodium chloride, and 0.1% glucose. The tissue lysate was centrifuged for 15 min at 4000 rpm, and the supernatant was mixed with the substrate (H2O2) and incubated at 37 °C for 10 min. Afterward, the chromogen and enzyme buffer were added, and the mixture was incubated at 37 °C for 5 min. The relative absorbance of the samples thus prepared and of blanks was measured at 505 nm against distilled H2O using a microplate reader (Infinite®200 PRO NanoQuant, Tecan; Männedorf, Zürich, Switzerland). Finally, TAC was calculated, in units of concentration, from A_b and A_sa, the absorbances of the blank and the sample against distilled H2O, respectively.
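The kit's equation itself is not reproduced in the text. Colorimetric TAC assays of this type derive capacity from the blank-minus-sample absorbance difference scaled by a kit-specific factor, since antioxidants in the sample consume the H2O2 substrate. A sketch under that assumption; the factor 3.33 is illustrative, not taken from the study:

```python
def total_antioxidant_capacity(a_blank, a_sample, kit_factor=3.33):
    """TAC (mM/L) ~ (A_b - A_sa) * kit-specific factor.
    A lower sample absorbance relative to the blank means more H2O2 was
    consumed by antioxidants, i.e. higher capacity. kit_factor is an
    assumption here; in practice the value comes from the kit insert.
    """
    return (a_blank - a_sample) * kit_factor

tac = total_antioxidant_capacity(a_blank=0.90, a_sample=0.60)  # ~1.0 mM/L
```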
DNA fragmentation assay
Genomic DNA was extracted using the GeneJET Genomic DNA Purification Kit following the manufacturer's instructions. The extracted DNA samples were pooled across animals within each group and electrophoresed on a 2% agarose gel at reduced voltage to avoid overheating, which could introduce heat-induced artifacts in the fragmentation pattern. Finally, the fragmented DNA was visualized and photographed using the BioSpectrum 815 Imaging System (UVP, CA, USA).
Comet assay (single-cell gel electrophoresis)
We used an alkaline comet assay to evaluate and detect alkali-labile sites and DNA double- and single-strand breaks, as previously described (Tice et al. 2000; Dhawan et al. 2003). Briefly, tumor tissue (approximately 50 mg) was minced in Hank's Balanced Salt Solution supplemented with 20 mM EDTA and 10% DMSO. Then, 10 µL of the clear layer was mixed gently by pipetting with 75 µL of 1% low-melting-point agarose and incubated at 37.5 °C for 5 min. The mixture was gently distributed on slides precoated with 1% agarose and incubated in freshly prepared lysing solution (2.5 M NaCl, 100 mM EDTA, 10 mM Trizma base [pH 10], 1% Triton X-100, and 10% DMSO) for 24 h. Subsequently, the slides were incubated in electrophoresis buffer (pH > 13) for 30 min before the samples were electrophoresed at 0.74 V/cm and 300 mA for 30 min. Slides were gently removed from the buffer, drained on a tray, and then neutralized twice with neutralization buffer (pH = 7.5) for 5 min each. Slides were dehydrated in absolute ethanol, stained with ethidium bromide, and visualized using a fluorescence microscope (Leica, Germany). Microscopic images were analyzed using commercial software (CometScore V2.0). Tail length, tail DNA %, and olive tail moment scores were used to quantify DNA damage.
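The tail moment scores reported by CometScore are derived quantities. As a rough sketch (an assumption about the exact definition, which varies by software): the extent tail moment is tail length times the fraction of DNA in the tail, while the Olive moment uses the head-tail intensity centroid distance instead of tail length. Values below are illustrative:

```python
def extent_tail_moment(tail_length_px, tail_dna_percent):
    """Extent tail moment = tail length x (% DNA in tail / 100).
    Real scores come from per-comet image analysis."""
    return tail_length_px * tail_dna_percent / 100.0

tm = extent_tail_moment(tail_length_px=40.0, tail_dna_percent=25.0)  # 10.0
```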
Gene expression using reverse transcription-quantitative real-time PCR (RT-qPCR)
Total RNA was isolated using the GeneJET RNA Purification Kit following the manufacturer's instructions. RNA concentration and purity were assessed from absorbances measured at 260 and 280 nm using the Infinite®200 PRO NanoQuant (Tecan). cDNA was synthesized using the RevertAid First Strand cDNA Synthesis Kit in the Veriti™ 96-Well Thermal Cycler following the manufacturer's instructions. To quantify gene expression levels, Maxima SYBR Green qPCR Master Mix was used to amplify sequences specific to the gene of interest on the StepOnePlus™ Real-Time PCR System (Applied Biosystems). The primer sequences used in this study were designed using Primer3 software and synthesized by Vivantis Technologies (Selangor, Malaysia); they are listed in Table 1. Relative gene expression was calculated using the fold-change formula 2^−ΔΔCT. Two housekeeping genes, B2m and β-actin, were evaluated for their stability using a web-based tool, and the most stable internal control was chosen according to the BestKeeper ranking as previously described (Pfaffl et al. 2004).
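The 2^−ΔΔCT (Livak) calculation can be made explicit; a minimal sketch with hypothetical Ct values, using the reference-gene normalization described above:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative quantification by the 2^-ddCt method.
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: after normalization, the target crosses the
# threshold 2 cycles earlier in treated tissue, i.e. 4-fold upregulation.
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)  # 4.0
```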
Immunohistochemical (IHC) staining for Ki67 and Bax
Formalin-fixed paraffin-embedded tissue sections were deparaffinized in xylene two times, 5 min each, followed by hydration through a descending series of alcohol, 5 min per step. Heat-induced antigen retrieval was performed in a steamer chamber using citrate buffer (pH = 6.1). Slides were then allowed to cool and washed with distilled H2O three times for 5 min each, before being immersed in a hydrogen peroxide bath for 10 min to block endogenous peroxidase activity. Then, the slides were rinsed twice with distilled H2O and once with tris-buffered saline containing 0.1% Tween (TBST, pH = 7.6) for 5 min. To block nonspecific binding sites, the slides were treated with 1% bovine serum albumin (BSA)/TBST at room temperature for 1 h. Subsequently, slides were incubated with the primary antibodies against BAX diluted 1:100 and Ki67 diluted 1:300 in 1% BSA/TBST at 4 °C overnight. Slides were washed three times in TBST, 5 min each. Slides were then incubated for 3 min with a chromogen solution, prepared by adding one drop of 3,3′-diaminobenzidine (DAB+) to substrate buffer, and immersed in distilled water to stop the color reaction, followed by counterstaining with hematoxylin for 1 min. Finally, slides were cleared in xylene, dehydrated through an ascending series of alcohol, and mounted with DPX. Slides were photographed using a light microscope (Olympus, Tokyo, Japan), and the fractional (%) stained area was calculated using Fiji ImageJ software version 1.53h.
Statistical analysis
Data analyses were conducted using the Statistical Package for the Social Sciences (version 25). The sample size required for statistical power was calculated using G*Power 3.1 software. Tests for outlier values were conducted using Minitab (version 17). Normality was assessed using skewness and kurtosis, and normally distributed data were analyzed with parametric tests. The statistical significance of differences between two groups was determined using Student's t-test. For comparisons involving more than two groups, we used one-way analysis of variance (ANOVA) followed by the Tukey post hoc test. The correlation between two variables was evaluated with Pearson's correlation. Data are expressed as mean ± SEM; a p-value of ≤0.05 was considered significant for all tests. Graphs were generated using GraphPad Prism 8.
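The one-way ANOVA computed in SPSS tests the F statistic sketched below; a pure-Python illustration with hypothetical group values, shown only to make the quantity explicit (it mirrors, not replaces, the SPSS analysis):

```python
def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical measurements for three groups
f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7])  # F = 13.0
```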
MTD and LD50 determination
First, we determined MTD and LD50 of CHE, by monitoring the animals for 48 h after a single IP injection of CHE. The dose-response relationship curve (Fig. 2) shows a linear dependence of % lethality on the dose administered, with a significant correlation coefficient (r = 0.9705, P < 0.05). MTD = 428 mg/kg and LD50 = 1142 mg/kg were determined by extrapolating the regression to 0 and 50% lethality, respectively. Based on this analysis, we determined the doses of 107, 214, and 321 mg/kg (at 25%, 50%, and 75% of MTD, respectively) to be administered in further experiments.
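The extrapolation procedure described above can be reproduced with an ordinary least-squares fit of % mortality on dose, then inverting the line at 0% and 50% lethality. The dose-mortality pairs below are hypothetical, not the study's raw data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def dose_at_lethality(slope, intercept, pct):
    """Invert the regression to find the dose giving a target % lethality."""
    return (pct - intercept) / slope

# Hypothetical dose-response points (mg/kg, % mortality)
doses = [450.0, 550.0, 1000.0, 2000.0]
mortality = [0.0, 10.0, 35.0, 100.0]
a, b = fit_line(doses, mortality)
mtd = dose_at_lethality(a, b, 0.0)    # dose extrapolated to 0 % lethality
ld50 = dose_at_lethality(a, b, 50.0)  # dose extrapolated to 50 % lethality
```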
CHE treatment retards ESC growth and reduces RTV
At the end of the experiment, CHE and/or DOX treatment resulted in a significant reduction in tumor weight in all treatment regimens (Fig. 3A). Particularly, relative to tumors in the negative control group, treatment with ESC + DOX reduced tumor growth by 33.2% (P < 0.001), ESC + CHE 107 mg/kg by 46.2% (P < 0.001), ESC + CHE 214 mg/kg by 63.9% (P < 0.01), and ESC + CHE 321 mg/kg by 51.9% (P < 0.001) (Fig. 3B). RTV was also significantly decreased in all treatment groups (P < 0.05 for ESC + DOX, P < 0.01 for ESC + CHE 107 mg/kg and ESC + CHE 321 mg/kg, and P < 0.001 for ESC + CHE 214 mg/kg) in comparison with the ESC + vehicle group (Fig. 3C). Notably, body weight increased normally in each group except for the DOX-treated group, which showed a decreased total body weight by the end of the 2-week treatment (Fig. 3D).
Histopathology
Evaluation of tumor tissues in the ESC + vehicle group revealed several pleomorphic cells with hyperchromatic nuclei penetrating between muscles and fat, a few scattered apoptotic cells, distributed mitotic figures, scattered giant cells, and small regions of perinodular necrosis. Relative to the ESC + DOX group, treatment with CHE led to marked regression of tumor growth, associated with small, partially necrotic nodules of tumor cells composed mainly of ghost cells, marking apoptosis, and large areas of perinodular and intranodular necrosis with few karyorrhectic fragments (Fig. 4A).
The liver of the negative control group was found to have a normal structure, with normal portal veins, normal bile ducts, normal hepatocytes in the periportal region, and normal central veins surrounded by hepatocytes. The effect of CHE on the liver showed a dilation in the central vein with moderately dilated congested portal veins and normal hepatocytes and mild intralobular inflammatory infiltrate with few scattered apoptoses in the high CHE dose (ESC + CHE 321 mg/kg) group. In the ESC + DOX group, mild portal inflammatory infiltrates, moderately dilated congested portal veins, dilated bile ducts, markedly dilated central veins, mild intralobular inflammatory infiltrates, scattered apoptosis in the perivenular area, and microvesicular steatosis of hepatocytes more marked in the perivenular area were observed. Interestingly, the liver tissues from the ESC + CHE 107 mg/kg animals showed a minimal effect of CHE (Fig. 4B).
In the treated groups, histopathological examination revealed normal kidney structure with normal renal capsule, normal glomeruli, normal Bowman's capsules, normal proximal tubules with preserved brush border, and normal collecting tubules, except for the ESC + DOX and ESC + CHE 107 mg/kg groups, in which glomeruli were mildly congested. This observation reflects the minimal harmful effect of CHE on kidney tissues (Fig. 4C).
CHE induces oxidative stress and DNA damage in tumor tissues
Since multiple chalcone derivatives have an influence on the oxidative status in different experimental models of cancer (Huang et al. 2020; Guan et al. 2021; Khusnutdinova et al. 2021), we assessed the oxidative stress level in tumor tissue by measuring TAC upon CHE treatment. TAC in the ESC + Vehicle group was significantly increased (P < 0.05) in comparison with the negative control group. Furthermore, the ESC + CHE 321 mg/kg group showed a significant depletion in TAC compared with the ESC + vehicle group (P < 0.001), ESC + DOX group (P < 0.01), ESC + CHE 107 mg/kg group (P < 0.01), and ESC + CHE 214 mg/kg group (P < 0.05) (Fig. 5A).
We next examined whether CHE treatment had the potential to induce DNA damage. For this purpose, an alkaline comet assay was conducted. DNA damage was assessed based on three parameters, namely, tail length, tail DNA %, and tail olive moment. Comet assay results revealed that CHE treatment significantly increased DNA damage in tumor tissue in a dose-dependent manner relative to the ESC + vehicle group (Fig. 5B and C). Tail length was significantly increased (P < 0.001) in all treated groups compared with the ESC + Vehicle group. Notably, the ESC + CHE 321 mg/kg group exhibited a significant increase (P < 0.001) in tail length compared with the ESC + DOX group. Tail DNA % was significantly increased over the levels in the negative control in the ESC + DOX (P < 0.001), ESC + CHE 107 mg/kg (P < 0.01), ESC + CHE 214 mg/kg (P < 0.001), and ESC + CHE 321 mg/kg (P < 0.001) groups. Interestingly, the ESC + CHE 321 mg/kg group showed a significant increase (P < 0.01) in tail DNA % compared with the ESC + DOX group. Similarly, the tail olive moment was significantly increased in each treatment group relative to the negative control (ESC + DOX, P < 0.001; ESC + CHE 107 mg/kg, P < 0.05; ESC + CHE 214 mg/kg, P < 0.001; ESC + CHE 321 mg/kg, P < 0.001). Moreover, the tail olive moment was significantly increased in the ESC + CHE 321 mg/kg group compared with the ESC + DOX group (P < 0.01).
For further confirmation of the DNA damage-inducing activity of CHE, DNA was extracted from tumor tissues from all groups and subjected to 2% agarose gel electrophoresis. Our results showed remarkable DNA fragmentation in all treated groups compared with the untreated control group (Fig. 5D).
CHE affects the expression of apoptosis-related genes in tumor tissues
Since CHE induced DNA damage, which in turn may affect apoptosis, qPCR was performed to quantify mRNA expression levels for four apoptosis-related genes (p53, Bax, Casp3, and Bcl2). The B2m gene was chosen as the internal housekeeping gene according to BestKeeper analysis compared with β-actin, as shown in Fig. A.1. The results of qPCR revealed that p53 mRNA expression levels were significantly (by 2.8-fold) upregulated in tumor tissues of the highest-dose CHE treatment (ESC + CHE 321 mg/kg) group relative to the vehicle-treated group (P < 0.05) (Fig. 6A). Bax mRNA expression levels were significantly upregulated in the ESC + DOX group (by 1.5-fold; P < 0.05) and the ESC + CHE 107 mg/kg group (by 4.2-fold; P < 0.05) relative to the vehicle-treated group (Fig. 6B). Casp3 mRNA expression levels were upregulated in the ESC + DOX, ESC + CHE 107 mg/kg, and ESC + CHE 321 mg/kg groups relative to the vehicle-treated group but did not reach significance (Fig. 6C). Bcl-2 mRNA expression levels were upregulated by 1.8-fold (P < 0.01) in the ESC + DOX group and 4.2-fold (P < 0.001) in the ESC + CHE 321 mg/kg group relative to the vehicle-treated group (Fig. 6D).

Fig. 5 The effect of chalcone derivative on total antioxidant capacity (TAC) and DNA damage in ESC tumor tissues. A CHE treatment reduces TAC in tumor tissues. B Representative images of damaged DNA induced by CHE or DOX treatments of ESC tumor tissue compared with intact DNA of the negative control group. C Quantification of DNA damage parameters (tail length, % tail DNA, and tail moment) in the treatment groups. In each sample, 50 or more cells were analyzed. CometScore software (V2.0) was used to assess DNA damage parameters. Bars represent means ± SEM, n = 3. *P < 0.05, **P < 0.01, and ***P < 0.001, as determined via one-way ANOVA followed by Tukey's multiple comparison test. D Representative photograph showing DNA fragmentation of DNA extracted from the tumor or healthy tissues of the different experimental treatment groups. Lanes M: DNA ladder, I: negative control group, II: ESC + vehicle group, III: ESC + DOX group, IV: ESC + CHE 107 mg/kg group, V: ESC + CHE 214 mg/kg group, VI: ESC + CHE 321 mg/kg group.
Discussion
Our study was conducted to investigate the anticancer properties of CHE, a newly synthesized chalcone derivative with a documented potent anticancer effect in vitro against different cancer cell lines (MCF7, A549, HCT116, and HepG2) (Mohamed et al. 2018). This study was designed to further evaluate the anticancer activity of CHE in vivo using the Ehrlich solid tumor model. Our data demonstrate that CHE regressed tumor weight and induced oxidative stress and DNA damage, which may ultimately result in apoptosis.
One of the features of cancer cells is increased aerobic glycolysis, which is combined with high levels of oxidative stress (Cairns et al. 2011) caused by reactive oxygen species (ROS) that build up because of an imbalance between ROS production and removal. Changes in various signaling pathways that impact cellular metabolism lead to elevated ROS levels in cancer cells (Diehn et al. 2009; Sznarkowska et al. 2017). Cancer cells are aided in the reduction of ROS levels by enhanced antioxidant defense mechanisms that help them acclimatize to the redox imbalances caused by fast growth (Jones and Thompson 2009). Thus, oxidative stress, through increased amounts of ROS driving cell damage, may inhibit tumor growth. The increased ROS causes cancer cells to activate their robust antioxidant systems to overcome such stress. TAC is a parameter to assess the ability of cancer cells to counteract oxidative stress (Trachootham et al. 2009). This feature provides an intriguing window of opportunity for therapeutic intervention, since cancer cells may be more susceptible than normal cells to drugs that induce increased ROS generation (Gorrini et al. 2013). Indeed, CHE treatment caused a significant decrease in TAC in cancer cells, indicating an elevated ROS level associated with the induction of oxidative stress. This finding is in line with other reports that chalcones induced oxidative stress in chronic myelogenous leukemia (K562) cancer cells (Li et al. 2019), in the human glioma cell line U87-MG and in a xenograft model in vivo (Loch-Neckel et al. 2015), and in human colorectal HCT116 cells, where it led to DNA damage and apoptosis (Takac et al. 2020).

Fig. 6 Effect of CHE treatment on mRNA expression levels of the apoptosis-related genes p53, Bax, Casp3, and Bcl2 assessed by qPCR in tumor tissues of ESC-bearing mice. Bars represent means ± SEM, n = 3. *P < 0.05, **P < 0.01, and ***P < 0.001, as determined via Student's t-test.
Additionally, we proved the potency of CHE as an inducer of DNA damage. CHE treatment showed a remarkably significant increase in all DNA damage parameters and in DNA fragmentation patterns, as evidenced via DNA fragmentation and comet assays. Other studies using different chalcone compounds unveiled their DNA-damaging potency in vitro and in vivo. As reported previously, chalcone derivatives induced apoptosis and DNA damage by raising ROS levels in melanoma cells (Li et al. 2020). Similar results were observed for trimethoxy chalcone in A549 human lung cancer cells (Gil et al. 2019). In addition to their effect on oxidative stress-induced DNA damage, chalcones can cause DNA damage by binding DNA strands through van der Waals forces and aromatic ring stacking interactions. The unsaturated carbonyl system in chalcone compounds supports stronger electrostatic interactions between the hydrogen and DNA bases, as evidenced via molecular docking experiments showing that chalcones bind a DNA dodecamer with many hydrogen bonds (El-Wakil et al. 2020). This may further explain DNA damage induction by chalcone derivatives.

Fig. 7 Immunohistochemical evaluation of the proliferation marker ki67 and the proapoptotic marker BAX in tumor tissues obtained from animals in each treatment group. A Representative microscopic images (× 200 on the left and × 400 blow-up of marked areas on the right) of immunostaining (brown color) of ki67 in tumor tissues. B Quantitative analysis of the fraction of area containing positive ki67 immunostaining. C Same as A but immunostaining for BAX. D Quantitative analysis of the fraction of area containing positive BAX immunostaining. Area fraction was calculated using ImageJ software (Fiji). Bars represent means ± SEM, n = 3. *P < 0.05, **P < 0.01, and ***P < 0.001, as determined via Student's t-test.
It is known that p53 becomes active in response to DNA damage, with its capacity to bind DNA and induce transcriptional activation increasing as its expression levels rise quickly (Lakin and Jackson 1999). p53 stimulates target genes, resulting in DNA damage repair, cell growth inhibition, and apoptosis. Particularly, when DNA damage is severe, p53 triggers the activation of proapoptotic genes such as Bax, resulting in programmed cell death (Crowe and Sinha 2006). Consistent with those findings, CHE in our study significantly upregulated p53 and Bax mRNA levels and BAX protein levels in response to DNA damage, eventually leading to the apoptosis that we observed via histopathological examination of tumor tissue. Apoptosis induction by chalcones via upregulation of p53 and Bax expression was reported in previous studies using different experimental models (Hsu et al. 2005, 2006; Singh et al. 2014; Loch-Neckel et al. 2015; Bagul et al. 2017; Cabral et al. 2017; Fong et al. 2017; Kim et al. 2017). Surprisingly, both DOX and CHE increased Bcl-2 mRNA expression in tumor tissues of treated mice groups. This may be attributed to ROS promoting phosphorylation and ubiquitination of proteins in the Bcl-2 family, resulting in elevated proapoptotic protein levels and reduced antiapoptotic protein levels (Li et al. 2004). Also, docking studies revealed that chalcone compounds could inhibit the BH3 domain in BCL2 protein, thus inhibiting the antiapoptotic activity of BCL2 (Dey et al. 2020). Together, these findings suggest that CHE may inhibit Bcl-2 activity, although its mRNA expression levels were increased.
Interestingly, CHE treatment significantly decreased the proliferative marker Ki67. The Ki67 protein has been extensively studied at the molecular level, and it has long been used as a prognostic and predictive marker in cancer diagnosis and treatment (Li et al. 2015). Decreased levels of Ki67 upon treatment with chalcones have been reported in several studies. For example, Maioral et al. reported a chalcone-induced decrease in Ki67 in K562 and Jurkat cells. Another investigation using human non-small-cell lung cancer found that cardamonin treatment resulted in a reduction in Ki67 expression, and another study found that Ki67 was decreased in murine B16 melanoma cells in C57/BL6 mice upon treatment with the chalcone xanthohumol. Chalcone also decreased Ki67 in an in vivo model of triple-negative breast cancer cells (Luo et al. 2021).
Likely acting via some or all of the above molecular action pathways, CHE eventually caused a significant decrease in tumor weight and relative tumor volume and a significant tumor growth inhibition in CHE-treated mice compared with mice in the vehicle-treated group, and was more effective than the reference drug DOX. One limitation that could be associated with this study is that CHE induces DNA damage, which may have an influence on normal organs. However, our previous results showed that CHE had no cytotoxic effect on normal melanocyte (HFB4) cells. In addition, docking simulation studies showed that CHE has a high binding affinity for EGFR and DHFR, which are overexpressed in cancer cells. Consistently, our histopathological examination revealed a minimal cytotoxic effect of CHE on liver and kidney tissues. This minimal effect may potentially be fully eliminated in future studies using synergistic treatment with strong antioxidant compounds to reduce the elevated ROS levels in the liver and kidney. One example is N-acetyl cysteine, which was shown to reduce the ROS effect of Zn oxide nanoparticles in the liver and kidney tissues without affecting its antitumor effect (El-Shorbagy et al. 2019).
Conclusion
Based on the results of our study, we can infer that CHE treatment is effective against ESC in mice. CHE exerts a promising anticancer activity against ESC via the depletion of TAC with subsequent DNA damage, triggering the upregulation of the pro-apoptotic genes such as p53 and Bax. Moreover, CHE decreased the proliferative marker Ki67 and increased BAX protein in tumor tissues. Overall, CHE may emerge as a potential therapy for solid tumors with minimal toxicity to vital organs. Further studies are needed to improve the selective delivery of CHE to cancer cells using nanoparticle-based delivery systems that would enable lowering the dose and increasing the therapeutic potency of CHE.
Author contribution
The authors declare that all data were generated in-house and that no paper mill was used. AAW, HME, HMH, IAA, SS, and SAI conceived and designed research. AAW, HME, HMH, IAA, SS, and SAI conducted experiments. AAW, HME, HMH, IAA, SS, and SAI contributed new reagents or analytical tools. AAW, HME, HMH, IAA, SS, and SAI analyzed data. AAW, HME, HMH, IAA, SS, and SAI wrote the manuscript. All authors read and approved the manuscript.
Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Data availability Data available on request.
Declarations
Ethics approval All the techniques used in the experiments were fully compliant with international standards for the care and management of laboratory animals. The experimental animal protocol (CU/I/F/54/19) has been approved by the Cairo University Institutional Animal Care and Use Committee (CU-IACUC), Faculty of Science, Cairo University, Egypt.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Factors influencing hesitancy towards adult and child COVID-19 vaccines in rural and urban West Africa: a cross-sectional study
Objectives This study aims: (1) to identify and describe similarities and differences in both adult and child COVID-19 vaccine hesitancy, and (2) to examine sociodemographic, perception-related and behavioural factors influencing vaccine hesitancy across five West African countries. Design Cross-sectional survey carried out between 5 May and 5 June 2021. Participants and setting 4198 individuals from urban and rural settings in Burkina Faso, Guinea, Mali, Senegal and Sierra Leone participated in the survey. Study registration The general protocol is registered on clinicaltrials.gov. Results Findings show that at the time of the survey only 53% of all study participants in West Africa reported being aware of COVID-19 vaccines, with television (60%; n=1345), radio (56%; n=1258), social media (34%; n=764) and family/friends/neighbours (28%; n=634) being the most important sources of information about COVID-19 vaccines. Adult COVID-19 vaccine acceptance ranges from 60% in Guinea and 50% in Sierra Leone to 11% in Senegal. This is largely congruent with acceptance levels of COVID-19 vaccinations for children. Multivariable regression analysis shows that perceived effectiveness and safety of COVID-19 vaccines increased the willingness to get vaccinated. However, sociodemographic factors, such as sex, rural/urban residence, educational attainment and household composition (living with children and/or elderly), and the other perception parameters were not associated with the willingness to get vaccinated in the multivariable regression model. Conclusions Primary sources of information about COVID-19 vaccines include television, radio and social media. Communication strategies addressed at the adult population using mass and social media, which emphasise COVID-19 vaccine effectiveness and safety, could encourage greater acceptance also of COVID-19 child vaccinations in sub-Saharan countries. Trial registration number NCT04912284.
INTRODUCTION
Sufficient immunisation coverage against COVID-19, in particular also in low-income and middle-income countries (LMICs), is crucial in addressing the current pandemic. 1 In Africa, as elsewhere, reaching the necessary herd immunity threshold is jeopardised by factors such as the emergence of new SARS-CoV-2 variants, inequitable access to COVID-19 vaccines and vaccine hesitancy. 2 Vaccine hesitancy can be defined as a 'delay in acceptance or refusal of vaccination despite availability of vaccination services' and can vary 'across time, setting, and vaccines'. 3 4 In Africa, a recent survey conducted among 15 countries indicates that acceptance of adult COVID-19 vaccines varies from 94% and 93%, respectively, in Ethiopia and Niger to 65% and 59%, respectively, in Senegal and the Democratic Republic of Congo. 2 5-7 However, little is known about acceptance of child COVID-19 vaccines. Furthermore, there are concerns that, without appropriate interventions, even in settings with relatively high reported levels of willingness to get vaccinated compared with countries such as the USA and Russia, 8 those who are still hesitant may shift to completely refusing or maintain passive avoidance in seeking out immunisation. 9

Strengths and limitations of this study
► The rural areas included in the study were located in the surroundings of the capital cities and may not be representative of more remote settings.
► Data are drawn from a cross-sectional survey, meaning that conclusions cannot be made regarding the causality of relationships.
► The study relied on self-reported data, which can be susceptible to social desirability bias; however, the influence of this bias is likely to have had a minimal impact on this study's main findings.
► In Senegal, there was a limited number of observations due to particular ethical requirements in the country.

High levels of COVID-19 vaccine hesitancy coupled with inequitable access to COVID-19 vaccines in LMICs represent a major problem in the global efforts to control the current COVID-19 pandemic. 10 Furthermore, vaccine hesitancy might also revert the tremendous successes LMICs have made in increasing overall immunisation against other (childhood) infectious diseases, 11 if hesitancy towards COVID-19 vaccines translates into a more generalised hesitancy towards other vaccines, such as routine childhood vaccinations. Therefore, to build confidence and trust in COVID-19 vaccines, it is important to understand and address the reasons for vaccine hesitancy and the motivations behind the decision making of whether to get vaccinated or not. However, context-specific studies, which investigate factors influencing vaccine hesitancy towards adult and child COVID-19 vaccines in sub-Saharan Africa, are still few and far between. 8 In this study, a community-based survey was carried out in five West African countries (Burkina Faso, Guinea, Mali, Senegal and Sierra Leone) in order to: (1) identify and describe similarities and differences in both adult and child COVID-19 vaccine hesitancy and (2) examine sociodemographic, perception-related and behavioural factors influencing vaccine hesitancy across a subregion of Africa, which shares major cultural and geopolitical characteristics. 12 13

MATERIALS AND METHODS
Study area
The survey was conducted in the five West African countries Burkina Faso, Guinea, Mali, Senegal and Sierra Leone. In all study countries, study sites were selected in consultation with the local principal investigators from among urban and rural communities in and around the capital cities of the countries, namely Ouagadougou, Conakry, Bamako, Dakar and Freetown, respectively.
Sample size
The study size was calculated to estimate the proportion of the population willing to be vaccinated against COVID-19. Assuming a proportion of 0.5 (conservative estimate leading to the largest sampling size), 385 individuals had to be interviewed per study country to receive an estimate with 5% precision. These sample size considerations were met in all countries apart from Senegal, where a considerable proportion of respondents had to be excluded from the analysis as they had reported to have never heard of any COVID-19 vaccines.
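The per-country target of 385 interviews can be reproduced with the usual normal-approximation sample size formula for estimating a proportion; a minimal sketch (the function name is ours, not from the study):

```python
import math

def sample_size_proportion(p=0.5, precision=0.05, z=1.96):
    """Minimum n to estimate a proportion p with the given absolute
    precision at ~95% confidence; p=0.5 is the conservative choice
    that maximises p*(1-p) and hence the required sample size."""
    return math.ceil((z ** 2) * p * (1 - p) / precision ** 2)

print(sample_size_proportion())  # 385, the per-country target
```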
Sampling strategy
Participants were selected from among the general population within predefined rural and urban study areas. Similar proportions of interviewees were selected from rural and urban areas. The number of interviews to be conducted was based on the overall sample size and was proportionally allocated according to the population size within the sampling clusters. A random sample was drawn using an adjusted random walk procedure, a procedure used in previous immunisation coverage studies. 14 Within each cluster, between 8 and 12 random walks were conducted, and an equal number of interviews were conducted per random walk. Each random walk started from a randomly assigned location mark. For this purpose, geographical maps of the selected clusters were drawn, on which random coordinates were marked using ascending numbers. Valid sampling points (eg, coordinates pointing to a house or in the proximity of a house) on each map were identified by the field teams. Coordinates were selected in consecutive order from these valid location marks in order to start the random walks. The random walk procedure was applied to select study participants as described in Lemeshow and Robinson. 15 Once the sample was saturated at a given starting point, a new one was used until the defined sample size was reached. Inclusion criteria for the study were: being at least 18 years old, living in the study area and willingness to provide written informed consent. All those who did not meet the inclusion criteria were excluded from the study. In Senegal, the ethical commission asked to exclude from the study those who had already been vaccinated; for this reason, an additional exclusion criterion was added: having already been offered the COVID-19 vaccination.
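The proportional allocation of interviews to sampling clusters described above can be sketched as follows. Cluster names and population sizes are invented, and the largest-remainder rounding is one reasonable choice, not necessarily the procedure used in the study:

```python
def allocate_interviews(total_interviews, cluster_populations):
    """Allocate a fixed number of interviews across sampling clusters
    proportionally to cluster population size, rounding with the
    largest-remainder method so the allocation sums exactly."""
    total_pop = sum(cluster_populations.values())
    raw = {c: total_interviews * pop / total_pop
           for c, pop in cluster_populations.items()}
    alloc = {c: int(r) for c, r in raw.items()}  # floor each share
    # hand out the remaining interviews by largest fractional remainder
    leftover = total_interviews - sum(alloc.values())
    for c in sorted(raw, key=lambda c: raw[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

# hypothetical cluster populations
print(allocate_interviews(385, {"urban A": 12000, "urban B": 8000, "rural A": 5000}))
```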
Data collection
Survey data were collected between 5 May and 5 June 2021. Respondents were invited to take part in face-to-face survey interviews using a 45-item questionnaire. The questionnaire uses measures as employed in other COVID-19 survey-based studies (eg, COSMO, COVID-19 Snapshot Monitoring, https://projekte.uni-erfurt.de/cosmo2020/web/) and was guided by the survey design recommendations by the WHO SAGE Working Group on Vaccine Hesitancy. 16 Questions were discussed with all local principal investigators and adapted as appropriate to the countries' context. The questionnaires were completed by trained local fieldworkers on tablets with KoBoToolbox software (V.2.0) installed. Questionnaires were programmed to minimise data entry errors, for example, by applying predefined ranges for some variables. The questionnaire asked about respondents' sociodemographic background characteristics and their perceptions, experience, confidence and decision making in relation to COVID-19 and COVID-19 vaccines, as well as past acceptance and perceptions of other vaccines. Depending on the preference of the respondents, interviews were conducted in French, English or one of the local languages. At the time of data collection, the COVID-19 vaccination roll-out was starting in the study countries, and part of our study population had already been offered a vaccine. In Senegal, this part of the population, on specific request of the country's ethical commission, was excluded from the study analysis.
Analysis
The current study is a multicountry cross-sectional study. Descriptive statistics were used to apply plausibility checks, and no inconsistencies were found in the study data. Graphical and statistical methods were used to describe study data. Continuous variables were described using the median and the IQR, and categorical data were described using frequencies and percentages. Due to the exploratory nature of the study, no significance testing was applied. Some interviewees did not respond to all questions, and these missing data were excluded from the respective analyses; thus, the denominator in some calculations may differ. Poisson regression models with robust SEs were calculated to analyse associations with vaccine hesitancy. For the model, vaccine hesitancy was dichotomised into no (definitely or probably do not want to be vaccinated) or yes (definitely or probably want to be vaccinated). Prevalence ratios (PRs) and the 95% CIs were calculated. Categorical variables were dummy-coded to estimate PRs. This coding includes the categories yes, no and don't know (dk). Bivariable models (outcome and one predictor variable) and multivariable regression models (outcome with all predictor variables, without variable selection) were calculated. Multivariable regressions were calculated for each country. Multilevel models to calculate pooled effect estimates were not applied because of the small number of countries. All analyses were done in R (V.4.1.0) using the sandwich package (V.3.0-1) to calculate robust SEs.
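For a single binary predictor, the prevalence ratio that a Poisson model with robust SEs estimates reduces to a ratio of proportions. A minimal, unadjusted sketch of that arithmetic (the study's actual models were multivariable Poisson regressions fitted in R; the counts below are invented):

```python
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Unadjusted prevalence ratio comparing exposed (a events out of n1)
    with unexposed (c events out of n0), with an approximate 95% CI
    computed on the log scale."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)  # SE of log(PR)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# invented counts: 80/100 willing among those perceiving the vaccine as
# safe vs 40/100 among those who do not
print(prevalence_ratio(80, 100, 40, 100))
```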
Institutional review board and ethical considerations
Alongside a general study protocol, which defined the general rules for the sampling strategy, sample size, selection of the recruitment areas and the ethical principles on which the survey is based, country-specific protocols were developed. Data were collected according to standard GCP (Good Clinical Practice) procedures. The general protocol is registered on clinicaltrials.gov.
Patient and public involvement
The patients and public were not involved in the design of the study and the research instrument, mainly due to time constraints, since the first survey wave was meant to be conducted in the early phases of vaccine roll-out in the partner countries. However, the public has been engaged in the dissemination of the results. Two webinars (one in French and one in English) were organised on 30 June 2021 to make the findings available to local stakeholders and to inform vaccination strategies in a timely manner. Additionally, individual reports have been submitted to the ethical commissions of those countries which have requested them so far (ie, Guinea and Mali).
RESULTS
Study population characteristics
Among the 4198 study participants, 2242 (53%) were aware of COVID-19 vaccines, and data of these individuals were used for subsequent analyses. Figure 1A shows vaccine awareness across the study countries. In Senegal, only 19% (n=149) of the interviewees had heard about vaccines against COVID-19; however, in the other countries, awareness ranged between 50% (n=428) in Sierra Leone and 70% (n=598) in Mali. Respondents' background characteristics stratified by country are described in table 1. In total, 1240 (55%) interviews were conducted in urban areas. The median age of the interviewees was 36 years with an IQR of 28-49 years, and 42% (n=951) were female. The majority of study participants (n=1832; 85%) lived together with children and 39% (n=840) lived together with people aged ≥65 years. In total, 22% (n=496) had not completed any formal education, 19% (n=417) had attended primary/middle school and 59% (n=1329) secondary school or higher. At the time of the survey (May 2021), COVID-19 vaccination had already been offered to 480 (21%) of the interviewees, the majority of whom were in the Guinean study group (n=312; 56%). Half of the respondents who had already been offered a COVID-19 vaccination (n=240; 50%) had subsequently been vaccinated, again with the largest number in the Guinean study group (n=181; 58%) (figure 1B). Study participants were asked about their main sources of information about COVID-19 vaccines (figure 2). Among all participants, the most important sources mentioned were television (60%; n=1345), radio (56%; n=1258), social media (34%; n=764) and family/friends/neighbours (28%; n=634). Governmental sources were only mentioned by 12% (n=262); however, 40% (n=172) of interviewees from Sierra Leone ranked this as an important information source.
Perceptions of COVID-19 and COVID-19 vaccines
Respondents' perceptions of COVID-19 and COVID-19 vaccines are summarised in table 1. While more than half of all participants reported to be worried about the risk of getting infected with SARS-CoV-2 (n=1303, 59%), there were variations between countries, ranging from 71% (n=421) of respondents who reported to be concerned about getting infected in Mali to only 35% (n=177) and 36% (n=53) in Burkina Faso and Senegal, respectively. Almost half of the interviewees felt currently at risk of getting infected (n=1051; 47%), with Sierra Leone having the highest number of respondents who reported to feel currently at risk of getting infected (n=260; 61%). While 69% (n=1525) of the study participants believe that the vaccine protects against COVID-19, half of the interviewed individuals reported to be unsure whether the vaccine is safe. In fact, in Senegal, 41% of the respondents (n=61) said that they believe COVID-19 vaccines to be unsafe. A considerable proportion of all respondents (n=1429; 65%) voiced concern about vaccine side effects, with the highest levels of concern reported in Senegal (n=120; 81%) and Burkina Faso (n=395; 79%). About half of the participants (n=1017; 46%) think COVID-19 vaccines carry more risk than routine vaccines. This perception varies from 62% in Burkina Faso (n=307) who believe this to be the case to 28% in Guinea (n=156).

Figure 1A depicts the proportion of respondents who have ever heard of COVID-19 vaccines stratified by country, and figure 1B shows the proportion of those study participants who actually accepted the COVID-19 vaccination when offered. In alignment with the requirements of the Ethical Committee in Senegal, those participants in Senegal who had already been offered a COVID-19 vaccination had to be excluded from this study.
Vaccine acceptance, hesitancy and refusal in five West African countries
Overall, 39% (n=865) of the study population said they would definitely and 23% (n=514) would probably accept to get vaccinated against COVID-19, while 21% (n=465) of all participants would definitely and 13% (n=287) would probably refuse vaccination. COVID-19 vaccine acceptance ranged from 60% (n=330) in Guinea to 11% (n=16) in Senegal, whereas vaccine hesitancy ranged from 41% (n=58) in Senegal to 10% (n=58) in Guinea (figure 3A). Similarly, when asked about their willingness to have their own children vaccinated against COVID-19 in case a vaccine would be licensed for that age group, 36% (n=765) responded that they would accept, 25% (n=532) that they would refuse, whereas the remainder reported either that they would probably vaccinate their children against COVID-19 (21%; n=448) or that they would probably not have their children vaccinated (11%; n=235). Again, COVID-19 vaccine acceptance for children was highest in Guinea (n=283; 53%) and Sierra Leone (n=179; 47%) and lowest in Senegal (n=9; 7%) (figure 3B). Figure 3C shows the congruence of those who would accept, hesitate or refuse vaccination against COVID-19 for themselves with those who would do so when it comes to their own children. Eighty per cent (n=1690) of the respondents show the same level of willingness in both cases.
Factors influencing acceptance, hesitancy and refusal
Of all respondents 1926 (86%) who were included in the Poisson regression models (figure 4), 22% came from Burkina Faso (n=433), 25% from Guinea (n=484), 27% from Mali (n=524), 7% (n=132) from Senegal and 18% (n=353) from Sierra Leone. Study participants with missing values in the independent variables had to be excluded from the regression analysis. Results from the bivariable (figure 4A) and multivariable (figure 4B) regression are summarised in figure 4. The multivariable regression (figure 4B) showed that the perceived effectiveness of a vaccine to protect from COVID-19 and safety of COVID-19 vaccines increased the willingness to get vaccinated. Strongest associations with the perception of vaccine protection were observed for Burkina Faso (PR=6.1; 95% CI 2.6 to 14.4), Sierra Leone (PR=4.3; 95% CI 1.5 to 12.2) and Senegal (PR=4.2; 95% CI 1.0 to 18.0). Strongest association with vaccine safety was shown for Senegal (PR=6.5; 95% CI 2.4 to 17.9), while for the other countries PRs of about two or lower were observed. However, sociodemographic factors, such as sex, rural/urban residence, educational attainment and household composition (living with children and/or elderly), and the other perception parameters were not associated with the willingness to get vaccinated in the multivariable regression model. In the bivariable regression analysis (figure 4A), the belief that the vaccine has side effects or that the vaccine carries more risks compared with routine vaccines lowers the willingness to get vaccinated.

Figure 3A shows respondents' COVID-19 vaccine acceptance, refusal and hesitancy for themselves (A) and for their children (B), respectively. Figure 3C shows a cross-tabulation of those who would accept, hesitate or refuse to get themselves vaccinated against COVID-19, with those who would accept, hesitate or refuse to have their children vaccinated against COVID-19.
However, this effect was no longer present in the multivariable regression, which could indicate that associations were confounded. Overall, the findings were fairly consistent across countries.
DISCUSSION
This study presents findings from a multicountry survey on a thus far under-researched topic: factors influencing COVID-19 adult and child vaccine hesitancy in sub-Saharan Africa. Main findings from the survey, which was conducted in five West African countries (Burkina Faso, Guinea, Mali, Senegal and Sierra Leone) include, first, that at the time of data collection overall levels of COVID-19 vaccine awareness were strikingly low. Out of the 53% of respondents (n=2242) who reported to be aware of COVID-19 vaccines, levels of COVID-19 vaccine acceptance varied and ranged from 60% (n=330) in Guinea to 11% (n=16) in Senegal, conversely vaccine hesitancy ranged from 41% (n=58) in Senegal to 10% (n=58) in Guinea (figure 3A). One explanation for the lower levels of vaccine hesitancy in Guinea and Sierra Leone could be that these two countries have built on experiences from past epidemics, such as the devastating Ebola epidemic in 2014-2016 17 and greater exposure to Ebola vaccinations and vaccination campaigns. 18 It is possible that the major investments in community-based interventions 19 to increase the acceptability of a newly released vaccine might have a role in the greater acceptance of vaccines against COVID-19.
Second, to our knowledge, this study is the first to look into the relationship between acceptance of both adult and child COVID-19 vaccinations in sub-Saharan Africa. Our findings show that the adults' willingness to get vaccinated was largely congruent with the intention to have their own children vaccinated against COVID-19 should an appropriate vaccine become available/accessible (figure 3C). This stands in contrast to previous research, for instance in England, which shows that study participants were more likely to accept a COVID-19 vaccine for themselves than their child/children. 20 However, other studies in high-income countries have shown that adult vaccine hesitancy may further reduce parental intent to have their children vaccinated, through mechanisms such as distrust, and concerns around vaccine safety and efficacy. 21 This may suggest that as COVID-19 vaccination strategies are moving towards child immunisation 22 in our study region, communication and awareness-raising approaches targeting adults may also have a positive impact on COVID-19 vaccine coverage of children.

Figure 4 Bivariable (A) and multivariable prevalence ratios (PRs) (B) for willingness to get vaccinated against COVID-19 (n=1926), 2021. Dots represent the estimated PRs, and the whiskers represent the 95% CI. Vac., vaccine; y, yes; n, no; dk, don't know.
Third, consistent with other studies, vaccine hesitancy among the study countries is primarily explained by concerns over the safety and effectiveness of COVID-19 vaccines, 23-25 rather than age or educational attainment. 8 However, in contrast to other studies on vaccine hesitancy in LMIC, gender and rural versus urban setting did not explain the difference. 26 Furthermore, it is noteworthy that the most popular source of COVID-19 related information among the study population are television, radio and social media, rather than, for example, governmental sources and healthcare workers (figure 2), which is in line with recent literature. 27 Previous research has shown that individuals who inform themselves mostly relying on social media as primary source of information are more likely to be hesitant than those drawing more on professional sources of information. 28 Thus, as shown by research concerned with other health topics, such as reproductive health, HIV and other sexually transmitted infections, social media needs to be used more effectively as a tool to communicate correct and appropriate information about COVID-19 vaccinations. 29 30 Overall, only 39% of all participants included in the study reported that they would accept a vaccination against COVID-19, 21% in the group said they would refuse and 36% said they were still hesitant. Strikingly, 55% of those who had previously been offered vaccination against COVID-19 declined it when the opportunity arose (figure 1B). Considerable levels of COVID-19 vaccine hesitancy and refusal coupled with inequitable access to vaccines and suboptimal vaccination coverage represent a complex challenge in these countries. Going forward, the possibility of a detrimental knock-on effect of lack of confidence in COVID-19 vaccines on the uptake of, for instance, childhood routine vaccinations, should be considered. 
There is evidence to suggest that this could revert the tremendous successes African countries have had in terms of increasing access to immunisation and reducing child deaths. 31 Finally, while this study managed to conduct a baseline survey in a timely manner to capture the moment in time when COVID-19 vaccination campaigns (for both adults and children) had not yet or had only just started to roll out in a region of Africa that has a number of common historical, cultural and geopolitical characteristics, it is not without limitations. First, the study relied on self-reported perceptions and behaviour, and responses are therefore susceptible to social desirability bias. However, trained local fieldworkers experienced in administering survey questionnaires and fluent in local languages and dialects helped to minimise this risk. Furthermore, the survey included both urban and rural areas; however, the rural areas surrounding the capital cities may not be representative of more remote settings. The estimated target sample sizes were met in four out of the five study countries. However, in Senegal, there were particular ethical requirements that needed to be adhered to, and there was a particularly high number of respondents who reported not being aware of COVID-19 vaccines, which led to a limited number of observations and decreased the power of the data collected for this country. Finally, data are drawn from a cross-sectional survey, meaning that conclusions cannot be made regarding causality of relationships. Going forward, longitudinal research is needed to monitor vaccine hesitancy and its determinants in this region over time.
CONCLUSION
High vaccination coverage represents one of the most effective measures to mitigate the impact of the COVID-19 pandemic 32 but is jeopardised by vaccine hesitancy. Addressing vaccine hesitancy is particularly relevant in countries, where access to vaccines is limited. Communication strategies addressed at the adult population using mass and social media and emphasising vaccine efficacy and safety could encourage greater acceptance also towards COVID-19 child vaccinations in the countries included in the study.
Contributors SF, RK, DIP and DF contributed to the conceptualisation and drafting of the manuscript; SF, RK, SD, MT, RS, HGO, TS, AMB, AKM, COD, JM, DIP and DF contributed to the conceptualisation of the study; SF, SD, MT, HGO, TLS, AMB, AM, COD, SD, KC, MH, PD, DIP and DF contributed to the set-up and implementation of data collection; RK performed data analysis; DIP designed the investigation tool; DF, RS and JM contributed to the financial aspects of the study; DF coordinated data collection, data analysis and drafting of the manuscript. DF is the guarantor of this publication. SF and RK equally contributed to the manuscript. DF and DIP equally contributed to the manuscript.
Competing interests None declared.
Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available on reasonable request.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
|
Cost of hospitalised patients due to complicated urinary tract infections: a retrospective observational study in countries with high prevalence of multidrug-resistant Gram-negative bacteria: the COMBACTE-MAGNET, RESCUING study
Objective Complicated urinary tract infections (cUTIs) impose a high burden on healthcare systems and are a frequent cause of hospitalisation. The aims of this paper are to estimate the cost per episode of patients hospitalised due to cUTI and to explore the factors associated with cUTI-related healthcare costs in eight countries with high prevalence of multidrug resistance (MDR). Design This is a multinational observational, retrospective study. The mean cost per episode was computed by multiplying the volume of healthcare use for each patient by the unit cost of each item of care and summing across all components. Costs were measured from the hospital perspective. Patient-level regression analyses were used to identify the factors explaining variation in cUTI-related costs. Setting The study was conducted in 20 hospitals in eight countries with high prevalence of multidrug resistant Gram-negative bacteria (Bulgaria, Greece, Hungary, Israel, Italy, Romania, Spain and Turkey). Participants Data were obtained from 644 episodes of patients hospitalised due to cUTI. Results The mean cost per case was €5700, with considerable variation between countries (largest value €7740 in Turkey; lowest value €4028 in Israel), mainly due to differences in length of hospital stay. Factors associated with higher costs per patient were: type of admission, infection source, infection severity, the Charlson comorbidity index and presence of MDR. Conclusions The mean cost per hospitalised case of cUTI was substantial and varied significantly between countries. A better knowledge of the reasons for variations in length of stays could facilitate a better standardised quality of care for patients with cUTI and allow a more efficient allocation of healthcare resources. Urgent admissions, infections due to an indwelling urinary catheterisation, resulting in septic shock or severe sepsis, in patients with comorbidities and presenting MDR were related to a higher cost.
Strengths and limitations of this study
► This is the first study to examine costs of hospitalised patients due to complicated urinary tract infection (cUTI) from a multinational point of view.
► It is focused on countries with a high prevalence of multidrug resistant bacteria, where cUTI imposes a significant burden.
► The study estimates the mean cost per case from a bottom-up perspective, which provided a high level of granularity and the basis for the assessment of sources of variation and drivers of healthcare costs.
► The design of the study did not include a control group to assess the extra length of stay and excess costs of patients who are admitted to hospital due to a different condition and develop urinary tract infection during their hospitalisation.
► Country-specific unit cost data were not appropriate for most countries, and therefore, we applied the same set of unit costs, as estimated in one country, Spain, to the rest of the countries.

Open Access

INTRODUCTION
Urinary tract infections (UTIs) are highly prevalent worldwide. UTIs that occur in a normal genitourinary tract with no prior instrumentation are considered uncomplicated, whereas complicated UTIs (cUTIs) are associated with structural or functional abnormalities of the genitourinary tract or an underlying disease that interferes with host defence. 1 cUTIs are a frequent cause of hospitalisation as well as a common complication during hospitalisation and have shown a higher prevalence of antimicrobial resistance compared with uncomplicated UTI. 2 Due to the rapid emergence and dissemination of resistance to antimicrobial agents, leading in some cases to multidrug resistance (MDR), some patients with cUTI are left with few therapeutic options and may progress to more serious stages of the disease. 3 Currently, information about the burden of cUTI is scarce. Reports from the USA show that in the year 2000 cUTI accounted for more than 100 000 hospital admissions, often as a result of pyelonephritis. 4 Data from Europe are very limited, although the last point prevalence survey of European acute care hospitals estimated the prevalence of healthcare-associated infections to be 6%; of these, UTI was the third most common infection (19%). 5 Based on these point prevalence data, the annual health burden of hospitalised patients with UTI was estimated to be 81.2 disability-adjusted life years per 100 000 individuals in the general population. 6 Despite this high burden to healthcare systems and the increased pressure for cost containment in healthcare, few studies have examined the costs of cUTIs. Some papers have measured the cost of community-acquired UTIs 7-10 and nosocomial UTIs, 11 12 or both. 13 Most of these studies were conducted in the USA, 7 8 11-13 while studies undertaken in European countries have mainly focused on women visiting primary care settings with suspected UTIs. 9 10 Some papers have estimated the impact of extended-spectrum beta-lactamase (ESBL)-producing Escherichia coli on the cost of UTI episodes requiring hospitalisation. 14 15 Estimating the magnitude of the financial impact of this prevalent and potentially avoidable condition is particularly useful for measuring the potential cost savings from averting a case, thereby emphasising the importance of prevention and the sizeable economic consequences of MDR. In addition, cost estimates might inform cost-effectiveness analyses that require data on episode costs in order to compare alternative courses of treatment related to this condition. Therefore, there is a need for data on the economic burden imposed on healthcare systems due to hospitalised cUTI patients, especially in countries with high prevalence of MDR.
In this paper, we present an analysis of the economic burden of cUTI in seven European countries plus Israel, all of which have a high prevalence of MDR. The aims of this study are to estimate the cost per case of hospitalised patients due to cUTI and to investigate the factors associated with cUTI-related healthcare costs.
The analyses reported in this paper are part of a larger project, 'REtrospective observational Study to assess the clinical management and outcomes of hospitalised patients with Complicated Urinary tract INfection in countries with high prevalence of multidrug resistant Gram-negative bacteria (RESCUING study)', with an overall aim of providing information about the epidemiology, clinical management, outcomes and healthcare costs of patients hospitalised with cUTI.
MATERIALS AND METHODS
Setting
This is a multinational observational, retrospective study conducted in 20 hospitals in eight countries (Bulgaria, Greece, Hungary, Israel, Italy, Romania, Spain and Turkey). Data were collected on patients who had a diagnosis of cUTI as the primary cause of hospitalisation and patients hospitalised for another reason but who developed cUTI during their hospitalisation from January 2013 to December 2014, based on International Classification of Diseases, 9th Revision (ICD-9) and ICD, 10th Revision (ICD-10) codes (ICD-9 Clinical Modification (CM) codes: 590.1, 590.10, 590.11, 590.2, 590.8, 590.80, 590.9, 595.0, 595.89, 595.9 and 599.0; ICD-10 CM codes: N10, N12, N13.6, N15.1, N15.9, N30.0, N30.8, N30.9 and N39.0). The study protocol has been published elsewhere. 16 In order to avoid selection bias, all consecutive patients who had these ICD-9 or ICD-10 CM codes were reviewed at each site. All patients who met the inclusion criteria were selected for data collection. Inclusion criteria were patients with UTI and at least one of the following: indwelling urinary catheter, urinary retention, neurogenic bladder, obstructive uropathy, renal impairment caused by intrinsic renal disease, renal transplantation, urinary tract modifications, or pyelonephritis with normal urinary tract anatomy; and at least one of the following signs or symptoms: chills or rigours associated with fever or hypothermia, flank pain (pyelonephritis) or pelvic pain (cUTI), dysuria, urinary frequency or urinary urgency, or costovertebral angle tenderness on physical examination; and either a urine culture with at least 10⁵ CFU/mL of a uropathogen (no more than two species) or at least one blood culture growing possible uropathogens (no more than two species) with no other evident site of infection. These inclusion criteria are in accordance with the definition of cUTI provided in ref 17.
The analysis presented in this paper focuses on patients admitted to hospital because of cUTI only; we do not include patients admitted for other reasons who developed cUTI during hospitalisation. The reason is that in the case of the latter it is not possible to isolate the incremental cost of cUTI without a matched control group, that is, comparing similar patients with and without cUTI during their hospital stay (see, eg, ref 18). Our data indicate that the proportion of cUTI that are the cause of hospital admission is 65% versus 35% that develop cUTI during hospitalisation.

Study data collection
Data were collected retrospectively for all cUTI episodes at participating hospitals during the study period. For all patients, a standardised set of information was recorded.
Open Access
This consisted of demographics, comorbidities including those required to calculate a modified Charlson score, 19 place of acquisition of infection, infection source and severity, microbiological data, imaging test data, infection management, antibiotic therapy, outcomes, details of discharge and readmissions. The follow-up period was 2 months after discharge from the admitting hospital.
The perspective of the cost analysis was that of the hospital provider, as we focus on hospitalised patients with cUTI, and this is where the majority of the cost burden falls. 20 21 Study size was defined based on the primary outcome measure of the main study, that is, the treatment failure rate between MDR bacteria and other pathogens. 16

Estimating the cost per case of cUTI
We collected information on healthcare resource utilisation attributed to cUTI for each episode in the dataset. The healthcare components collected were: (1) length of hospital stay (LOS) (general ward and intensive care unit (ICU)), (2) diagnostic and follow-up tests, (3) urological interventions and haemodialysis, (4) antibiotic treatment before, during and after hospitalisation and (5) hospital readmissions and outpatient visits within 60 days of discharge. For each component, a comprehensive list of specific items was compiled and reviewed by a clinical expert so that it included only healthcare resources that could be attributed to cUTI.
For unit costs, we planned to use the tool developed by WHO-CHOICE health service delivery costs, 22 which provides information on the unit costs of bed-days and outpatient visits across 191 countries. Unfortunately, unit costs from this tool are only available for inpatient and outpatient visits, and for 2007-2008, and therefore they could not be used in our study. Instead, unit cost data for each cost item were collected for each country by means of a questionnaire sent to the principal investigators of all participating sites. The questionnaire was provided as an online and paper version and included the list of all healthcare services identified for the management of cUTI (see supplementary material 1). The response rate for the questionnaire was 90% (18 out of 20). We received at least one response from each country. However, despite efforts to facilitate the complete fulfilment and harmonisation of the questionnaires, responses from some of the sites had missing values for key healthcare costs items, such as the cost of a day in hospital and for the most frequent diagnostic tests and treatment procedures. Furthermore, some sites provided the data in terms of user charges instead of the cost incurred by the hospital in the provision of the services. As a result, we observed a large degree of variation in unit costs across sites that was not attributable only to differences in actual costs between regions. Therefore, we generated a single set of unit costs based on the mean values across three sites within the same country, Spain, which provided consistently estimated values reflecting hospital costs for all the items included in the questionnaire. Using a common set of unit costs across all patients means that any observed variation in costs is due to differences in healthcare resource use. We discuss the limitations of this approach in the discussion section.
For antibiotic therapy, we estimated the cost per mg for each drug for which unit cost data were available and applied the mean cost per mg to the remaining therapies. We estimated the cost per day with antibiotic therapy based on the dosage and frequency recorded for each drug, which was then combined with the duration of the treatment to estimate total antibiotic therapy costs. Patients might receive more than one antibiotic drug at the same time; in that case, they count as separate antibiotic therapy days. Patients with total hospital LOS >200 days were excluded (three observations) as these were deemed to be due to coding errors.
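As a concrete illustration of this costing rule, the sketch below prices a hypothetical two-drug course. The drug names, prices and data layout are illustrative assumptions only, not data from the study.

```python
# Sketch of the antibiotic-cost calculation described above (hypothetical
# drug names and prices; the study's actual dataset schema is not shown here).

def antibiotic_cost(dose_mg, doses_per_day, duration_days, cost_per_mg):
    """Cost of one antibiotic course: dose x frequency x duration x unit cost."""
    return dose_mg * doses_per_day * duration_days * cost_per_mg

def impute_cost_per_mg(known_costs):
    """Mean cost per mg across drugs with known prices, applied to the rest."""
    return sum(known_costs.values()) / len(known_costs)

known = {"ciprofloxacin": 0.002, "meropenem": 0.015}   # EUR/mg, illustrative
fallback = impute_cost_per_mg(known)                   # 0.0085 EUR/mg

# Concurrent drugs count as separate antibiotic-therapy days.
course = [
    ("ciprofloxacin", 500, 2, 7),     # drug, mg per dose, doses/day, days
    ("unpriced_drug", 1000, 3, 5),    # no price on file -> mean cost per mg
]
total = sum(
    antibiotic_cost(mg, freq, days, known.get(drug, fallback))
    for drug, mg, freq, days in course
)
# total == 141.5 (14.0 for ciprofloxacin + 127.5 for the unpriced drug)
```

The imputation step mirrors the paper's approach of applying the mean cost per mg to therapies without unit-cost data.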
We computed means and SD as well as medians and IQRs for the cost per case, and we quantified the contribution of each cost item and overall healthcare component to the total cost per case. We also present variations in the overall cost per case by country and for different cost components. All costs were reported in 2016 euros.
Costs were calculated for each case of cUTI requiring a hospital admission. If a patient required a second hospital admission within 60 days of discharge from the first admission, it was counted as a readmission and included in the cost of the first admission. If another admission occurred more than 60 days post-discharge (either of the index admission or of a readmission), then this was counted as a separate case (observation) in the data.
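A minimal sketch of this case-counting rule, assuming a simple list of (admission, discharge) date pairs per patient; the study's actual data structures are not published here.

```python
# Sketch of the episode/readmission rule described above: an admission within
# 60 days of the previous discharge (of the index stay or a readmission) is
# folded into the same case; later admissions open a new case.
from datetime import date, timedelta

def group_cases(admissions, window_days=60):
    """admissions: list of (admit_date, discharge_date), sorted by admit_date.
    Returns a list of cases; each case is a list of hospital stays."""
    cases = []
    for admit, discharge in admissions:
        last_discharge = cases[-1][-1][1] if cases else None
        if last_discharge and admit <= last_discharge + timedelta(days=window_days):
            cases[-1].append((admit, discharge))   # readmission: same case
        else:
            cases.append([(admit, discharge)])     # new case (observation)
    return cases

stays = [
    (date(2014, 1, 1), date(2014, 1, 10)),
    (date(2014, 2, 20), date(2014, 3, 1)),   # 41 days post-discharge: readmission
    (date(2014, 6, 1), date(2014, 6, 8)),    # > 60 days: new case
]
cases = group_cases(stays)   # -> 2 cases; the first contains 2 stays
```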
Factors associated with cUTI-related healthcare costs
The analysis of the factors associated with cUTI-related healthcare costs was undertaken using multivariate regression analysis on patient-level cost data. The dependent variable was the total cost per patient, estimated as described above.
The explanatory variables were demographic factors (age and gender), comorbidities measured by the Charlson morbidity index, 18 admission characteristics (urgent vs elective; and admitted from home vs from another facility), infection severity (defined as septic shock or severe sepsis), MDR profile (defined as non-susceptibility to at least one agent in three or more antimicrobial categories 23 ), episode number and 30-day mortality. We categorised the source of infection using the following definitions: (1) UTI related to indwelling urinary catheterisation, including long-term, short-term or intermittent catheterisation; (2) pyelonephritis, consisting of inflammation of the kidney tissue caused by bacterial infection in patients who have no other urinary tract modification; and (3) other sources, which includes UTI related to anatomical urinary tract modification, UTI related to obstructive uropathy and UTI related to other events that do not fulfil any other category. We ran three sets of models: (1) univariate regression models for each variable separately, (2) a multivariate model including all the covariates and (3) a reduced multivariate model including only significant variables (where, in the case of categorical variables, at least one indicator was non-significant).
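The univariate-versus-multivariate setup can be sketched as follows. The study itself used Stata, so this ordinary-least-squares example on synthetic data, with made-up covariate names and effect sizes, only illustrates the model structure.

```python
# Minimal sketch of the cost-regression setup described above, using OLS on
# synthetic data (illustrative stand-ins for the covariates in the text).
import numpy as np

rng = np.random.default_rng(0)
n = 500
age      = rng.uniform(40, 90, n)
severe   = rng.integers(0, 2, n)          # septic shock / severe sepsis (0/1)
charlson = rng.integers(0, 8, n)          # comorbidity index
# Synthetic "true" model: severity and comorbidity drive cost.
cost = 3000 + 40 * age + 2500 * severe + 300 * charlson + rng.normal(0, 500, n)

def ols(y, *covariates):
    """Return OLS coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones_like(y)] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_uni   = ols(cost, severe)                 # (1) univariate model
b_multi = ols(cost, age, severe, charlson)  # (2) full multivariate model
# Step (3), the reduced model, would refit after dropping non-significant terms.
```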
Analyses were undertaken using Stata V.12. More details about the statistical methods used in the analyses are reported in online supplementary material 2.
Results
Study population characteristics
Data were collected on 653 cUTI episodes in 637 patients (mean number of episodes per patient: 1.04). There were missing data on LOS for nine episodes, so mean costs per case were computed for 644 cases. The most common causative pathogens in this sample were E. coli (58%), Klebsiella sp. (14%), Proteus mirabilis (7%), Pseudomonas aeruginosa (6%) and Enterococcus sp. (5%). This is consistent with previous studies that have found E. coli to be the most commonly isolated organism, especially in community-acquired cUTI, 24 which formed the majority of our sample (69%, vs 31% associated with healthcare facilities).
Fifty-seven per cent of the cohort were females, and the mean age was 65.7 years (table 1). The mean Charlson comorbidity score was 2.4. Ninety-one per cent of admissions were urgent (as opposed to elective), and 85% of the patients were admitted from home (as opposed to admission from another facility). The infection source was indwelling urinary catheterisation in 20% of cases, pyelonephritis in 27% of cases and other sources (including anatomical urinary tract modification and obstructive uropathy) in the remaining 53%. Twenty-six per cent of the episodes were caused by MDR bacteria. The severity of the infection was categorised as severe sepsis or septic shock in 16% of cases. Five per cent of the sample died within 30 days of discharge. The proportion of cases collected by each country ranged from 5% in Bulgaria to 26% in Israel.
Estimating the cost per case of cUTI
Table 2 presents unit costs, resource use and total costs separately for each healthcare item as well as for each set of overall cost components. The mean (median) length of stay in hospital was 9 (7) days, and a small proportion of the total stay was in the ICU. Most patients had urine cultures, urinary sediment analyses and blood cultures undertaken, while imaging tests were rarely performed. The urological intervention most often performed was the insertion of an indwelling bladder catheter. The mean numbers of antibiotic therapy days before, during and after hospitalisation were 2, 12 and 6 days, respectively. Nearly 10% of patients were readmitted to hospital due to a cUTI recurrence, with a mean readmission stay across the full sample of 1 day (11 days among the subsample of readmitted patients). The mean number of outpatient visits per patient within 60 days of hospital discharge was 0.8. The mean (median) costs per case were: (1) including costs incurred during the first hospital admission: €5064 (€3627); (2) plus antibiotic therapy before and after discharge: €5091 (€3651); and (3) plus outpatient visits and hospital readmissions within 60 days of discharge: €5705 (€3919).
The cost per case was largely driven by the cost due to the length of stay in hospital, which accounted for nearly 80% of the total cost. This was followed by the contribution of the cost of readmissions and outpatient visits after discharge (11%), treatment procedures (4%), antibiotic therapy (4%) and diagnostic tests (3%).
There was variation in the mean cost per cUTI case by country, with the largest mean (median) value, €7740 (€5962), in Turkey and the lowest, €4028 (€3159), in Israel (table 3). Note that variations in total costs shown in this table are only due to variations in the management of patients with cUTI, including LOS, as unit costs of healthcare services are held constant across all countries. Table 3 also shows variations in cost components between countries. This suggests that differences in LOS are the main reason for the observed differences in total costs between countries; the mean stay in a general hospital ward varies from 6 days in Israel to 14 days in Italy.
Factors associated with cUTI-related healthcare costs
The statistically significant drivers of cUTI-related healthcare costs were (table 1): type of admission (with urgent admissions exhibiting a higher cost than elective admissions); source of infection (with catheterisation associated with higher costs compared with other sources); infection severity (septic shock and severe sepsis showing a larger cost); the Charlson comorbidity index (with larger values associated with a higher cost); MDR profile (episodes presenting MDR showing a higher cost; significant only at the 10% level); and country (with most countries exhibiting a significantly lower cost than Turkey).
Discussion
In this study, we have measured the cost per episode of patients hospitalised due to cUTI in eight countries with a high prevalence of MDR and explored the factors that explain variations in cUTI-related healthcare costs. The mean cost per hospitalised cUTI case in our data was estimated at €5700, corresponding to the costs of a hospital stay of 9 days on average and including the costs of specific diagnostic and treatment procedures, as well as antibiotic therapy, readmissions due to cUTI recurrence and outpatient visits after discharge. As expected, the largest cost component was LOS, but it is also worth noting that the cost of antibiotic treatment exceeded that incurred to perform diagnostic tests, and it was also larger than the costs due to any other treatment received by these patients. The cost per case varied across countries, mainly due to differences in LOS among patients with cUTI. These differences in LOS do not appear to be related to the models of healthcare in each participating country: the countries with the longest LOS, Turkey, Italy and Greece, have different healthcare systems, that is, a social insurance system, a national health system and a mixed system, respectively. Several factors might explain these cross-country variations, including financial incentives inherent in hospital payment methods, availability of beds and the expansion of early discharge programmes that allow patients to return to their homes to receive follow-up care. 25 Over and above differences across countries, our analysis also identifies a series of factors associated with higher cUTI-related healthcare costs: urgent admissions, infections due to an indwelling urinary catheter, episodes resulting in septic shock or severe sepsis, a higher comorbidity index and an MDR profile were all related to a higher cost.
The presence of a catheter on admission and the Charlson comorbidity index have also been found in the literature to increase the costs of adult patients hospitalised with UTI, together with time to appropriate therapy. 13 Another study found male sex, chronic renal failure, ESBL production and outpatient parenteral antibiotic therapy to be associated with higher costs in patients with UTI admitted to hospital. 15 Our cost estimates are in line with previous studies that have focused on similar patient groups. Esteve-Palau et al 15 estimated a mean cost per patient hospitalised with symptomatic UTI caused by ESBL-producing E. coli of €4980 in one hospital in Spain, excluding readmissions. The cost was significantly lower, €2612, among patients with UTI due to non-ESBL-producing E. coli. Cardwell et al 13 analysed data on adult patients with a discharge diagnosis code for UTI in one hospital in the USA and found a mean hospitalisation cost of $7586. The costs of nosocomial UTI and of UTI seen in primary care have been shown to be lower. For instance, Saint 12 estimated the incremental cost of nosocomial UTI at $676 per case and of catheter-related bacteraemia at $2836 per case. Tambyah et al 11 reported that the mean incremental hospitalisation cost attributable to nosocomial catheter-associated UTI was $589. However, studies that focused on UTI treated in primary care have reported a mean cost between €70 (ref 9) and €236 (ref 10) per episode.
This is the first study to examine the costs of hospitalised patients with cUTI from a multinational point of view. Moreover, it focuses on countries with a high prevalence of MDR bacteria, where cUTI imposes a significant burden. In addition, the study estimated the mean cost per case from a bottom-up perspective, which provided a high level of granularity and the basis for the assessment of sources of variation and drivers of healthcare costs. However, the study also has a number of limitations. The design of the study did not include a control group to assess the extra length of stay and excess costs of patients who are admitted to hospital due to a different condition and develop UTI during their hospitalisation. Therefore, we focused in this paper on the analysis of patients who were admitted because of a cUTI. This avoids the overestimation that would result among cases admitted for other reasons, for whom we cannot isolate the incremental costs that are due to cUTI only. A second limitation of the analysis is that, as discussed in the Methods section, country-specific unit cost data were not appropriate for most countries, and therefore we applied the same set of unit costs, as estimated in one country, Spain, to the rest of the countries. While this approach allowed us to explore variations in healthcare costs that are due to differences in the management of patients with cUTI across countries, rather than due to differences in the unit costs of services, it limits the validity of the country-specific estimates. To further explore the heterogeneity of country-specific estimates, we had planned to use the tool developed by WHO-CHOICE on health service delivery costs, 22 which provides information on the unit costs of bed-days and outpatient visits across 191 countries. The information from this dataset indicates that variations in cost estimates across countries would be enhanced if country-specific unit costs were used.
The countries with the highest unit costs according to this tool, that is, Spain, Italy and Greece, are among the countries with the highest episode costs based on healthcare utilisation in our analysis, while the country with the lowest unit cost, Bulgaria, has an estimated episode cost among the lowest in this study. Unfortunately, unit cost values from this tool are only available for inpatient and outpatient visits, and only for 2007-2008, and therefore they could not be used to construct country-specific estimates. In addition, we acknowledge that the theoretically correct unit cost for a resource is its opportunity cost (the value of the benefits foregone because the resources are not available for their next best alternative use). Like most previous studies, we take a pragmatic approach of using market prices and accounting costs. However, it is worth noting that, especially for the inpatient day cost, these values might overestimate the opportunity costs, because most hospital costs are fixed and cannot be recouped even if the admission is avoided. 26 We also acknowledge that the number of observations included in the study for some countries is low, ranging from 31 to 170, which might restrict the generalisability of country-specific findings. The explanatory power of our models was also found to be low, which might suggest that there are other factors, not captured by the observed variables included in our models, that explain variation in healthcare costs, such as hospital policy on LOS. Finally, the perspective of the analysis was that of the hospital provider; had a societal perspective been taken, wider costs related to cUTI would have been included, such as patients' costs and productivity losses due to illness, as well as costs incurred in primary care settings; including these would increase the estimated costs of cUTI.
In conclusion, this study showed that the costs of patients hospitalised due to cUTI are substantial, and it identified wide differences between countries, mainly due to differences in length of stay in hospital. These findings suggest that a better understanding of the reasons for longer lengths of stay in some countries could facilitate a more standardised quality of care for patients with cUTI and allow a more efficient allocation of healthcare resources. The factors associated with higher cUTI-related healthcare costs identified by this study also shed light on some implications for policy and planning. Preventive measures to minimise the cost of hospitalisation might aim at increasing the population's knowledge of the symptoms and signs of infection, in order to encourage patients, especially those with comorbidities or indwelling urinary catheters, to attend primary care facilities earlier, thus avoiding the development of severe forms of illness after the onset of symptoms and the need for urgent admissions.
Multifractal conductance fluctuations in high-mobility graphene in the Integer Quantum Hall regime
We present the first experimental evidence for the multifractality of a transport property at a topological phase transition. In particular, we show that conductance fluctuations display multifractality at the integer-quantum-Hall $\nu=1 \longleftrightarrow \nu=2$ plateau-to-plateau transition in a high-mobility mesoscopic graphene device. We establish that to observe this multifractality, it is crucial to work with very high-mobility devices with a well-defined critical point. This multifractality gets rapidly suppressed as the chemical potential moves away from these critical points. Our combination of multifractal analysis with state-of-the-art transport measurements at a topological phase transition provides a novel method for probing such phase transitions in mesoscopic devices.
Since its discovery, the integer quantum Hall (IQH) effect, a continuous quantum phase transition in a two-dimensional electron gas (2DEG) [1], has provided us with a paradigm for topological phase transitions. In the presence of a large magnetic field B, applied perpendicular to the surface, the density of states (DOS) of a non-interacting 2DEG breaks into discrete, quantized Landau levels. Disorder broadens these degenerate Landau levels into bands of extended states that are separated by localized states. When the Fermi level E_F, which we can tune by changing either B or the charge-carrier density n, lies in the part of the spectrum with localized states (cf. Fig. 1(a)), the Hall conductance G_XY is quantized in units of e²/h, and the longitudinal conductance G_XX becomes vanishingly small, with G_XX = 0 at temperature T = 0 [2]. In this regime, transport takes place through chiral edge modes, whose number is dictated by the topological Chern number of the system [3][4][5][6]. If, by contrast, E_F lies in the range of energies at the center of the Landau levels with extended states, transport proceeds through the bulk with G_XX ≠ 0 and a non-quantized G_XY. The localization-delocalization transition occurs in a 2DEG in the IQH regime as the system crosses the mobility edge, which separates localized and extended states [7][8][9][10]. The eigenstates at the mobility edge are critical and different from both localized and extended states [7]. As the Landau-level filling factor ν approaches its critical value ν_C, the localization length ξ diverges algebraically as ξ ∝ |ν − ν_C|^(−γ).
Theoretical studies have shown that observables like the distribution of the local density |ψ(r)|² [7,11] or the equilibrium current density |j(r)|² [12] display multifractality of the density fluctuations, which leads to anomalous diffusion [10] and, consequently, a power-law decay of the density correlations, a slow decay of temporal wave-packet auto-correlations [13] and, most significantly for our purpose, multifractal conductance fluctuations [14][15][16][17][18].
In this Letter, we present the first experimental evidence for the multifractality of a transport property at a topological phase transition [3][4][5][6]. In particular, we show that, in high-mobility graphene at the first IQH plateau-to-plateau transition, the conductance shows multifractal fluctuations as a function of ν.
Multifractality was initially introduced to characterize the statistical properties of fluid turbulence [19,20] and thereafter studied, not only in turbulent flows [21][22][23], but also in a variety of fields like the analysis of DNA sequences [24], atmospheric science [25,26], econophysics [27], heartbeat dynamics [28,29], and cloud structure [30], and many other parts of physics. In condensed-matter science, most investigations of multifractality, which manifests itself at some phase transitions, employ a combination of theoretical and numerical techniques [15][16][17][31][32][33][34][35]. The experimental characterizations of multifractality in such condensed-matter settings require high-precision experiments in good-quality samples, often at low temperatures and at high magnetic fields. Two recent examples of such measurements are the study of multifractal conductance fluctuations at low magnetic fields [14] and the study of multifractal superconductivity in the weak-disorder regime [36,37]. By using high-mobility graphene and tuning ν, we demonstrate that conductance fluctuations display multifractality in the vicinity of the first IQH plateau-to-plateau transition.
Our electrical-transport measurements were carried out on hexagonal-boron-nitride (hBN) encapsulated graphene devices with one-dimensional ohmic contacts (details in Supplemental Materials). The electrical transport measurements were carried out in a dilution refrigerator, with a base temperature of 20 mK, by using low-frequency lock-in-measurement techniques in a multi-probe configuration at a low bias current (≤1 nA) to avoid Joule heating. We focus on our data from a particular bilayer-graphene device, 1DC8. This sample was thermally cycled multiple times; the data we present did not change significantly after this thermal cycling.
In Fig. 1(c), we show plots of the resistance R versus the gate voltage V_G at different values of T. With the charge-neutrality or Dirac point at V_D = −2.91 V, we find that R is as low as 30 Ω at V_G ≈ 30 V. The field-effect mobility, estimated at T = 20 mK, is µ ≈ 128,000 cm² V⁻¹ s⁻¹. In the inset of Fig. 1(c), we show magnified plots of R versus V_G near the Dirac point. Close to the Dirac point (|∆V_G| = |V_G − V_D| ≤ 1 V), we observe that R increases with decreasing T; this suggests an insulating state. However, for |∆V_G| ≥ 1 V, R decreases with decreasing T, indicating metallic behavior. Clearly, the effect of impurity scattering is significantly suppressed in our sample; indeed, this is a precondition for observing the number-density-induced insulator-metal transition in graphene [38]. In Fig. 1(d), we show plots of G_XY versus the filling factor ν, measured at different temperatures; these show well-developed quantum-Hall spectra [39,40]. For any one of the transitions between two adjacent plateaux, the plots of G_XY versus ν, measured at different temperatures, intersect at a point in the (ν, G_XY) plane. We identify each such intersection as a critical point: (ν_Ci, G_XY,Ci), with i a positive integer, is the critical point for the i → (i + 1) plateau-to-plateau quantum phase transition from localized to delocalized states [8].
We now focus on the mesoscopic conductance fluctuations in the vicinity of these critical points. In Figs. 2(a-b), we show plots of G_XY versus ν, for the ν = 1 ←→ ν = 2 quantum-Hall transition, measured at T = 20 mK. We tune ν either by changing n, at B = 16 T (Fig. 2(a)), or by changing B, at n = 4.75 × 10¹¹ cm⁻² (Fig. 2(b)). These plots show that G_XY has significant fluctuations across the plateau-to-plateau transition. In Fig. 2(c), we present two traces of G_XY measured at T = 20 mK; these data sets have been shifted vertically for clarity. The fluctuation profiles are the same in the two traces; this establishes that they are mesoscopic fluctuations with a unique magnetofingerprint. These fluctuations remain reproducible over a particular thermal cycle; however, the detailed profile changes if we thermally cycle the device to T > 10 K and back.
We obtain the conductance fluctuations ∆G(x) from these measurements by subtracting a smooth background from the measured data as follows:

∆G(x) = G(x) − F[G(x)].    (1)

In Eq. 1, x can be B or V_G, G stands for G_XX or G_XY, and the function F[G(x)] is the smooth background in G(x) (see the Supplemental Material). In Fig. 2(d), we show representative plots of ∆G_XY versus B, from our measurements at the plateau-to-plateau transition ν = 1 ←→ ν = 2, for different values of T and at a fixed value of n = 4.75 × 10¹¹ cm⁻². As we increase T, the mean amplitude of the fluctuations in G_XY decreases, but the plots of ∆G_XY retain their principal features because of the magnetofingerprint of mesoscopic fluctuations; these features finally fall below the measurement-noise level for T > 1 K. Although these fluctuations in G_XY disappear for T > 1 K, the plateaux of G_XY, at e²/h and 2e²/h, survive until much higher temperatures [Fig. 1(b)]. Thus, the disappearance of these conductance fluctuations is not a consequence of the disappearance of the quantum-Hall effect because of thermally induced level broadening.
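A minimal sketch of the background subtraction in Eq. 1, assuming a low-order polynomial fit as the smooth background F[G(x)]; the paper defers its actual choice of background to the Supplemental Material, so the fit order and test signal here are illustrative.

```python
# Sketch of Eq. 1: delta-G(x) = G(x) - F[G(x)], with a polynomial background
# standing in for the (unspecified) smoothing used in the paper.
import numpy as np

def conductance_fluctuations(x, G, order=5):
    """Return the fluctuations G(x) minus a smooth polynomial background."""
    coeffs = np.polyfit(x, G, order)
    background = np.polyval(coeffs, x)
    return G - background

B = np.linspace(14.0, 18.0, 400)                      # field sweep (tesla)
G = 1.5 + 0.1 * (B - 16.0) + 0.004 * np.sin(60 * B)   # smooth ramp + fluctuations
dG = conductance_fluctuations(B, G)
# The smooth ramp is removed; only the fine fluctuation pattern remains.
```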
Having established the mesoscopic origin of the fluctuations in the conductance across the plateau-to-plateau transitions, we now analyze the multiscaling behavior and statistics of the fluctuations in the vicinity of the ν = 1 ←→ ν = 2 critical point. Our multifractal analysis of these fluctuations is akin to the analysis in our low-field study of universal conductance fluctuations in single-layer graphene [14] (for details of the analysis, see the Supplementary Material). Briefly, we divide the ∆G_XY data series into several segments, each centered at a different value of the filling factor ν, and we compute the multifractal spectrum as follows. We detrend each such segment and sub-divide it into N_s overlapping segments, indexed by j and containing s data points each, with 1 ≤ j ≤ N_s. We obtain the generalized Hurst exponents h(q) from the power-law scaling of the order-q moment of the fluctuations F_q(s) by using the following relations:

F²(s, j) = (1/s) Σ_{i=1}^{s} [Y((j−1)s + i) − y_j(i)]²,    (2)

F_q(s) = { (1/N_s) Σ_{j=1}^{N_s} [F²(s, j)]^{q/2} }^{1/q} ∝ s^{h(q)},    (3)

where Y is the integrated data series and y_j is the local polynomial trend in segment j. We obtain h(q) for a range of values of q; a q-dependent h(q) indicates multifractality. As an example, in Fig. 3(a), we show representative plots of log[F_q(s)] versus log[s], from one of our data series for ∆G_XY, for q = ±4; the circles represent the data points, and the thick lines are linear fits to the data. The difference in the slopes of the plots suggests that h(−4) > h(4); this is borne out by the plot of h(q) in Fig. 3(b), for −4 ≤ q ≤ 4. Multifractality can be represented by the singularity spectrum, which is a plot of f(α) versus α; this is obtained from the Legendre transformation of h(q) as follows:

α = h(q) + q h′(q),  f(α) = q[α − h(q)] + 1.    (4)

In Fig. 3(c), we show a plot of f(α), obtained at 20 mK near the ν = 1 ←→ ν = 2 critical point. The width of f(α) in Fig. 3(c) is ∆α ≈ 1.1. This indicates significant multifractality of the conductance fluctuations at this plateau-to-plateau transition. The maximum of f(α) is located at α₀ = 2.21 (marked by an arrow in the figure), with f(α₀) = 1.
The maximum of f(α) provides the support dimension of the data series, which is one here. In the Supplementary Material, we show the standard deviations of the small-amplitude fluctuations that we analyze; they are at least ten times larger than the noise level measured at the ν = 1 plateau. Hence, our plot of f(α) is not contaminated significantly by measurement noise.
We note that f(α) is asymmetrical around α₀. To understand the origin of this asymmetry, recall that, in the Legendre transformation [Eq. 4], the regions q > 0 and q < 0 map, respectively, onto the α < α₀ and α > α₀ regions of the spectrum. Hence, because of the summation procedure involved in computing h(q) [Eq. 3], small-amplitude fluctuations in G_XY dominate the α > α₀ part of f(α), whereas large-amplitude fluctuations in G_XY dominate the α < α₀ part. This asymmetry of f(α) therefore suggests a difference between the correlations of small- and large-amplitude fluctuations.
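The multifractal detrended fluctuation analysis (MFDFA) described above can be sketched as follows; the segment sizes, scales and white-noise test signal are illustrative choices, not those of the paper.

```python
# Sketch of MFDFA: generalized Hurst exponents h(q) from the scaling
# F_q(s) ~ s^h(q). A q-dependent h(q) signals multifractality.
import numpy as np

def mfdfa_hurst(signal, q_values, scales, poly_order=1):
    """Return h(q) for each q, from log-log fits of F_q(s) versus s."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated profile Y
    h = []
    for q in q_values:
        log_F = []
        for s in scales:
            n_seg = len(profile) // s
            F2 = []
            for j in range(n_seg):                  # detrended variance F^2(s, j)
                seg = profile[j * s:(j + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, poly_order), t)
                F2.append(np.mean((seg - trend) ** 2))
            F2 = np.array(F2)
            if q == 0:                              # q -> 0 limit: log average
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:                                   # order-q moment
                Fq = (np.mean(F2 ** (q / 2))) ** (1 / q)
            log_F.append(np.log(Fq))
        h.append(np.polyfit(np.log(scales), log_F, 1)[0])   # slope = h(q)
    return np.array(h)

rng = np.random.default_rng(1)
noise = rng.normal(size=8192)                       # monofractal test signal
qs = np.array([-4.0, -2.0, 2.0, 4.0])
hq = mfdfa_hurst(noise, qs, scales=[16, 32, 64, 128, 256])
# White noise is monofractal: h(q) stays close to 0.5 for all q, so the
# singularity-spectrum width delta-alpha obtained from Eq. 4 is small.
```

A strongly q-dependent h(q), as measured at the plateau-to-plateau critical point, would instead give a wide f(α) spectrum after the Legendre transformation.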
We have also obtained the singularity spectra from plots of G_XY versus V_G at B = 16.0 T and T = 20 mK in the vicinity of the ν = 1 ←→ ν = 2 transition (cf. Fig. S1(b)). We obtain the spectral width ∆α and plot it in Fig. 4(a) versus ν − ν_C. At |ν − ν_C| = 0, ∆α ≈ 1.3, which decreases sharply to ∆α ∼ 0.2 at |ν − ν_C| ∼ 0.15. The peak in ∆α at ν ≈ ν_C implies that the multifractality in G_XY increases sharply near the ν = 1 ←→ ν = 2 plateau-to-plateau critical point. The maximum of ∆α lies within the error bars of our determination of ν_C from the crossing points in Fig. 1(d).
The plot of ∆α versus ν has a small but finite width away from ν = ν_C; this indicates that G_XY displays multifractality not only at this IQH critical point but also in a region around ν = ν_C. We note that the critical states are confined to E = E_C only in the thermodynamic limit. For a finite system, all states with localization length ξ larger than the system size appear to be extended, and the distribution of physical observables (including conductance fluctuations) is multifractal [41]. The growth of ξ away from ν = ν_C is suppressed only algebraically and is governed by the exponent γ; so, in a finite-sized system, the critical states can be observed away from ν_C, with algebraically reduced probability. Thus, our observation of a finite amount of multifractality away from the critical point ν_C can be attributed to finite-size effects.
In Fig. 4(b) we plot ∆α versus T . We note that ∆α decreases from 1.4 to less than 0.2 as T increases from 100 mK to 1.0 K. To understand this T dependence of ∆α, note that the effects of quantum-interference-induced localization-delocalization are most prominent at low temperatures. As T increases, decoherence induced by inelastic thermal scattering reduces quantum interference and results in delocalization [42]. Thus, the multifractality we observe at the ν = 1 ←→ ν = 2 transition is expected to disappear with increasing T . As mentioned earlier, this multifractality of the fluctuations in G XY decreases well before the IQH plateaux disappear.
Earlier studies on mesoscopic samples of high-mobility GaAs/AlGaAs heterostructures have shown that the integer quantum Hall transitions are accompanied by large, reproducible fluctuations in both G XX and G XY as functions of B and n [43][44][45]. The amplitudes of these fluctuations grow as the size of the sample is reduced. Despite an expectation that multifractal analysis of these fluctuations is essential for a complete description of the criticality in the IQH regime [7,13], in particular, and for topological phase transitions in general, experimental confirmation of this multifractality has been missing hitherto. We have presented the first experimental evidence for the multifractality of a transport property at a topological phase transition [3][4][5][6]. Our combination of multifractal analysis with state-of-the-art transport measurements at a topological phase transition provides a novel method for probing topological phase transitions in mesoscopic devices. Our study also resolves an outstanding question in nanoscale devices, namely, the multifractality of conductance fluctuations at such transitions in a high-mobility 2DEG. In particular, we have shown that conductance fluctuations display multifractality at the integer-quantum-Hall ν = 1 ←→ ν = 2 plateau-to-plateau transition in a high-mobility mesoscopic graphene device. At this transition, we have demonstrated reproducible mesoscopic fluctuations in G XY (see the Supplemental Material), with clear multifractal spectra. This multifractality gets rapidly suppressed as ν moves away from ν C or as T is increased. We have established that, to observe this multifractality, it is crucial to work with very high-mobility devices, with a well-defined critical point. Our results show that the multiscaling of conductance fluctuations provides a new and clear signature of the IQH ν = 1 ←→ ν = 2 plateau-to-plateau transition.
Although theoretical studies have shown the multifractality of eigenfunctions at this transition (see, e.g., Refs. [7,41,43,[46][47][48]), there has been no study hitherto of the multifractality of transport coefficients here. We conjecture that similar multifractality of conductance fluctuations should also be present in (a) all IQH plateau-to-plateau transitions, (b) fractional-quantum-Hall transitions, and (c) single-layer graphene devices. Our preliminary results support conjectures (a) and (c).
We thank S.S. Ray for fruitful discussions. AB acknowledges funding from DST (DST/SJF/PSA-01/2016-17). KRA thanks CSIR, MHRD, Govt. of India for financial support. RN thanks MHRD, Govt. of India, for financial support. The authors thank NNfC, CeNSE, IISc for the device fabrication facilities and MNCF, CeNSE, IISc for the device characterization facilities. RP acknowledges support from CSIR, SERB, and the National Supercomputing Mission (India).
SUPPLEMENTARY MATERIALS
Appendix A: Characteristics of the device B15D4 In Fig. S1, we present the characteristics of the device B15D4. Fig. S1(a) shows a plot of the resistance measured as a function of the gate voltage at T = 20 mK. The Dirac point is located at V D = −0.22 V. Fig. S1(b) shows the quantum Hall plateau ν = 1 to ν = 2 transition obtained by sweeping the gate voltage at T = 20 mK and B = 16 T.
Appendix B: Analysis of multifractality
We explain in this supplementary material the analysis of the conductance fluctuations used to obtain the multifractal spectrum. Fig. S2(a) shows the plot of the conductance G XY versus ν for the ν = 1 to ν = 2 quantum Hall plateau transition (blue line). The fluctuations observed have been confirmed to be mesoscopic fluctuations, as mentioned in the main paper. The background, marked by the solid orange line, is obtained by averaging over the data. On subtracting the background, we obtain the conductance fluctuations G XY, as shown in Fig. S2(b).
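As an illustration, background removal of this kind can be sketched with a centered moving-average filter. This is a minimal sketch: the averaging procedure actually used for Fig. S2 is not specified beyond "averaging over the data", and the window size here is an arbitrary choice.

```python
import numpy as np

def subtract_background(g, window=51):
    """Estimate the smooth background of a conductance trace by a
    centered moving average and return the residual fluctuations.
    Points within half a window of either end are distorted by edge
    effects and should be discarded before further analysis."""
    kernel = np.ones(window) / window          # uniform averaging kernel
    background = np.convolve(g, kernel, mode="same")
    return g - background
```

On a smooth (e.g., linear) trace the interior residual vanishes, so only genuine fluctuations about the background survive.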
We divide G XY versus ν data into overlapping segments, centered at different ν with a fixed span in ν for further analysis. This center value of ν for each such segment is used to label the multifractal spectrum of that given data set. To give an example, the multifractal spectrum in Fig. 3(b-c) in the main text is the spectrum for such a dataset of G XY versus ν centered at ν = 1.47. We now explain, in brief, the method to obtain multifractal singularity spectrum for each such dataset.
We carry out the data analysis using the multifractal detrended fluctuation analysis (MFDFA) method [49].
1. The dataset is divided into N s overlapping segments (indexed by j) with s data points each: {g i }, i = 1, 2, ..., s.

2. From each of the N s segments, the local trend is removed by fitting a polynomial p j to the data. We have used a polynomial of order 1 to treat our data. We then obtain the variance of each detrended segment:

F^2(j, s) = \frac{1}{s} \sum_{i=1}^{s} \left[ g_i - p_j(i) \right]^2 .

3. The order-q moment of the fluctuations F q (s) is obtained:

F_q(s) = \left\{ \frac{1}{N_s} \sum_{j=1}^{N_s} \left[ F^2(j, s) \right]^{q/2} \right\}^{1/q} .

4. The scaling exponent is obtained from the slope of the log(F q (s)) versus log(s) plot, for each value of q, via F_q(s) \sim s^{h(q)}; thus we obtain h(q).
The spectrum h(q) versus q characterizes the multifractality of a data series. Multifractality is conveniently represented using the singularity spectrum f (α) versus α, defined via the mass exponent

\tau(q) = q\, h(q) - 1 .

The singularity spectrum is related to τ(q) via a Legendre transform:

\alpha = \frac{d\tau}{dq} = h(q) + q\, h'(q) , \qquad f(\alpha) = q\,\alpha - \tau(q) .

The spectral width ∆α = α max − α min quantifies the multifractality of the data.
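To make the MFDFA procedure concrete, here is a minimal Python sketch of the four steps and the Legendre transform. It simplifies the procedure described in the text: windows are non-overlapping, the detrending polynomial is first order, and the q = 0 moment is handled by the usual logarithmic average; function names are illustrative.

```python
import numpy as np

def mfdfa(signal, scales, q_values, poly_order=1):
    """Multifractal detrended fluctuation analysis of a 1-D series.

    Returns h(q), the generalized Hurst exponents, from the slopes of
    log F_q(s) versus log s (non-overlapping windows for simplicity)."""
    profile = np.cumsum(signal - np.mean(signal))    # integrated profile
    Fq = np.zeros((len(q_values), len(scales)))
    for si, s in enumerate(scales):
        n_seg = len(profile) // s
        var = np.empty(n_seg)
        x = np.arange(s)
        for j in range(n_seg):
            seg = profile[j * s:(j + 1) * s]
            trend = np.polyval(np.polyfit(x, seg, poly_order), x)
            var[j] = np.mean((seg - trend) ** 2)     # F^2(j, s)
        for qi, q in enumerate(q_values):
            if q == 0.0:                             # q = 0: logarithmic average
                Fq[qi, si] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                Fq[qi, si] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    # Slope of log F_q(s) versus log s gives h(q) for each q.
    return np.array([np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
                     for qi in range(len(q_values))])

def singularity_spectrum(q_values, h):
    """tau(q) = q h(q) - 1; alpha = dtau/dq; f(alpha) = q alpha - tau(q)."""
    q = np.asarray(q_values, dtype=float)
    tau = q * h - 1.0
    alpha = np.gradient(tau, q)                      # numerical dtau/dq
    f = q * alpha - tau
    return alpha, f
```

A monofractal check: for Gaussian white noise, h(q) should be close to 1/2 for all q and ∆α should be small, whereas a multifractal series gives a broad f(α).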
Shifts in Ectomycorrhizal Fungal Communities and Exploration Types Relate to the Environment and Fine-Root Traits Across Interior Douglas-Fir Forests of Western Canada
Large-scale studies that examine the responses of ectomycorrhizal fungi across biogeographic gradients are necessary to assess their role in mediating current and predicted future alterations in forest ecosystem processes. We assessed the extent of environmental filtering on interior Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco) ectomycorrhizal fungal communities across regional gradients in precipitation, temperature, and soil fertility in interior Douglas-fir dominated forests of western Canada. We also examined relationships between fine-root traits and mycorrhizal fungal exploration types by combining root and fungal trait measurements with next-generation sequencing. Temperature, precipitation, and soil C:N ratio affected fungal community dissimilarity and exploration type abundance but had no effect on α-diversity. Fungi with rhizomorphs (e.g., Piloderma sp.) or proteolytic abilities (e.g., Cortinarius sp.) dominated communities in warmer and less fertile environments. Ascomycetes (e.g., Cenococcum geophilum) or shorter distance explorers, which potentially cost the plant less C, were favored in colder/drier climates where soils were richer in total nitrogen. Environmental filtering of ectomycorrhizal fungal communities is potentially related to co-evolutionary history between Douglas-fir populations and fungal symbionts, suggesting success of interior Douglas-fir as climate changes may be dependent on maintaining strong associations with local communities of mycorrhizal fungi. No evidence for a link between root and fungal resource foraging strategies was found at the regional scale. This lack of evidence further supports the need for a mycorrhizal symbiosis framework that is independent of root trait frameworks, to aid in understanding belowground plant uptake strategies across environments.
INTRODUCTION
Shifts in the taxonomic and functional structure of mycorrhizal communities across plant host distributions underpin changes in biogeochemical processes, such as modification of carbon (C) and nitrogen (N) cycles (Clemmensen et al., 2013;Koide et al., 2014;Cheeke et al., 2017;Wurzburger and Clemmensen, 2018;Jassey et al., 2018). Therefore, identifying the biotic and abiotic factors that shape mycorrhizal fungal communities is a prerequisite for understanding terrestrial ecosystem processes and predicting the impacts of global change on plant communities (Hazard and Johnson, 2018;Hoeksema et al., 2018;van der Linde et al., 2018). As climate changes, mycorrhizal fungi will likely respond to a range of environmental factors, and not necessarily the same factors as their hosts (Pickles et al., 2012), placing a premium on large-scale studies that examine communities across multiple environmental gradients (Lilleskov and Parrent, 2007;van der Linde et al., 2018).
Ectomycorrhizal fungi (EMF) play a dominant role in temperate and boreal forest ecosystems, where they control plant acquisition of soil resources (e.g., inorganic and organic forms of N and phosphorus, P; Read and Perez-Moreno, 2003;Heijden and Horton, 2009) and soil C dynamics (Simard and Austin, 2010). Biogeographic patterns in EMF diversity are now being studied (Tedersoo, 2017;Hu et al., 2019). However, there is a lack of baseline information on patterns of EMF community composition and functional trait distribution, especially at large spatial scales such as the regional (i.e., scale of a country), continental or global scales (Tedersoo, 2017;van der Linde et al., 2018;Hu et al., 2019). Yet, biogeographic data on EMF community structure are necessary to assess their role in mediating current and predicted alterations in the C cycle (Jassey et al., 2018), the hydrologic cycle (Bjorkman et al., 2018) or plant productivity (Coops et al., 2010;Richardson et al., 2018).
At the continental scale, patterns of EMF community composition of Pinus sylvestris, Picea abies, and Fagus sylvatica have been investigated in Europe, where host plant family and N deposition had the predominant filtering effects Põlme et al., 2013;Suz et al., 2014;Rosinger et al., 2018;van der Linde et al., 2018). Within Europe, at the regional scale, N deposition, rainfall and soil moisture were also found to drive shifts in the P. sylvestris EMF community structure (Jarvis et al., 2013), whereas other European studies highlighted the filtering effects of temperature and soil fertility on EMF communities (Sterkenburg et al., 2015;Pena et al., 2017). In Western North America, Pickles et al. (2015a) have also inferred from a common-garden greenhouse study on interior Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco; hereafter Douglas-fir) seedlings that temperature and soil fertility may drive habitat filtering in EMF communities.
Across environments, variation in EMF functional traits may relate better to ecosystem processes than variation in EMF species composition because it informs how groups of species function and the extent that there is functional redundancy in species diversity (Koide et al., 2014;Hazard and Johnson, 2018). For instance, EMF functional traits such as enzymatic activity (Courty et al., 2016), N preference (Leberecht et al., 2015;Haas et al., 2018), mycelial hydrophobicity or the differentiation of extraradical hyphae (i.e., exploration type; Agerer, 2001, 2006;Jarvis et al., 2013;Pickles et al., 2015a;Fernandez et al., 2017;Ostonen et al., 2017;Pena et al., 2017;Köhler et al., 2018;Rosinger et al., 2018) have been shown to impact ecosystem processes (Koide et al., 2014). Exploration type is a functional trait that connects the morphology and differentiation of EMF hyphae to differences in nutrient acquisition strategies. From a functional perspective, exploration type determines the ability of EMF to colonize new roots, form common mycorrhizal networks, or forage, acquire and transport resources (Agerer, 2006). Fungi with contact, short- and medium-distance smooth exploration types, for example, may preferentially use soluble, inorganic N forms (Lilleskov et al., 2002;Hobbie and Agerer, 2010). Alternatively, long-distance explorers may be more effective in capturing patchily distributed organic N (Koide et al., 2014), and are more likely to be resistant to decay due to their hydrophobicity. Hence, EMF of the long-distance exploration type may drive soil C storage and C:N ratio (Suz et al., 2014), although some short-range EMF, including Cenococcum geophilum and Cadophora finlandica, are also resistant to decay (Agerer, 2006;Fernandez et al., 2016).
Shifts in EMF exploration type may compensate for changes in fine-root structure. For example, across 13 temperate tree species, the abundance of larger absorptive fine roots, whose large diameter and associated high construction costs limits efficient resource foraging, was positively correlated with the proportion of longer distance exploration types, thus resulting in functional complementarity between fine roots and EMF with respect to soil resource capture (Chen et al., 2018a,b). This is because plants with coarser roots are less able to forage for and absorb soil resources, thus they should benefit the most from medium-or long-distance explorers that can acquire and transport resources well beyond root depletion zones (Chen et al., 2018a,b). To our knowledge, only three studies have linked root and EMF functional traits (Ostonen et al., 2011;Cheng et al., 2016;Chen et al., 2018a). Yet, studies connecting commonly measured economic fine-root traits (e.g., morphological, chemical and architectural traits) and mycorrhizal functional traits are essential for broadening root trait frameworks .
Root density is also an important factor to consider when linking fine-root traits and exploration types because exploration type assemblage may be well predicted by root spacing (Peay et al., 2011) and conversely, EMF species influence root density (Pickles et al., 2010). This is especially important when working at the scale of meters or in primary succession settings (Peay et al., 2011). In this study however, we focused on economic traits commonly measured at the individual fine-root level.
To assess the extent of abiotic environmental filtering on EMF community taxonomic and functional structure, and to examine the relationship between fine root and EMF exploration type, we investigated patterns of belowground trait variation across five regions that differed in precipitation, temperature and soil fertility (pH, cation exchange capacity (CEC), total N and available P) in an area of c. 25,300 km² (49.6 to 51.7° N) in British Columbia, Canada. We focused on Douglas-fir in interior Douglas-fir dominated forests which are widely distributed from the Rocky Mountains of Canada and the United States to the mountains of central Mexico (Lavender and Hermann, 2014). Defrenne et al. (unpublished) have explored the variation in morphological, chemical and architectural traits among fine roots across the same biogeographic gradient and revealed that Douglas-fir trees from colder/drier climates had fine roots with higher diameter, lower root tissue density (RTD), and lower C:N, compared to trees from milder climates. In this study, we first hypothesized that temperature and soil fertility would be the main drivers of EMF diversity and community composition. Our second hypothesis was that medium-distance fringe or long-distance explorers would be more abundant in colder climates (see the third hypothesis) or in soils with high C:N ratio, because fungi with rhizomorphs that preferentially use insoluble, organic N may be more competitive for plant nutrition under these conditions. Building on the results of Defrenne et al. (unpublished), our third hypothesis was that EMF traits compensate for changes in fine-root structure, and especially root diameter, where colder/drier climates with larger diameter roots are dominated by EMF with medium-distance fringe or long-distance exploration types.
Biogeographic Gradient
Fine roots and EMF root tips were collected from five regions in three naturally regenerated, mature, closed-canopy forest stands (30 × 30 m) per region in summer 2016 (Figure 1).
We selected the five regions to obtain a biogeographic gradient with substantial precipitation and temperature ranges (Table 1). Mean annual temperature (MAT) ranged from 3.4 to 7.3°C and was lowest in Williams Lake, followed by Revelstoke, Kamloops, Salmon Arm and Nelson. The driest region was Kamloops (average mean annual precipitation, MAP, 441 mm) and the wettest was Revelstoke (average MAP 1200 mm). We picked stands that were at least 400 m apart and ecosystems that best reflected the regional climate (namely, zonal site series; Meidinger and Pojar, 1991). Average stand age ranged from 98 years (Revelstoke) to 143 years (Salmon Arm), and the proportion of Douglas-fir by basal area ranged from 49% in the mixed, even-aged forest stands of Salmon Arm to 100% in the pure, uneven-aged forest stands of Kamloops. For the basal area estimates, all the trees with a DBH > 10 cm were measured (this included only mature trees; for further details on site and stand characteristics, see Supplementary Table 1).
The southernmost stands (Nelson, Revelstoke and Salmon Arm) occurred predominantly on Brunisolic soils that were characterized by lower CEC, soil pH and soil N compared to the northernmost stands (in Williams Lake and Kamloops) which occurred on Luvisolic soils (British Columbia Ministry of Forests and Range and British Columbia Ministry of Environment, 2010). Climatic variables for the period 1981-2010 were obtained from ClimateNA (Wang et al., 2016) and soil samples were analyzed for total soil carbon (C) and N concentrations, available phosphorus (PO 4 -P; orthophosphate as phosphorus) and CEC (for further details on soil sample collection, see the next section and for details on soil sample analyses, see Supplementary Text 1A).
FIGURE 1 | Geographical distribution of study regions (rectangles) and forest stands (3 triangles per study region) across the current natural range of interior Douglas-fir (Pseudotsuga menziesii var. glauca; green shading) in British Columbia, Canada.
Fine Root and Ectomycorrhiza Sampling and Processing
In each stand, single soil blocks (20 × 20 × 20 cm) were extracted from five Douglas-fir trees (200 cm from the trunk) in a manner that avoided clumping of sampling location (i.e., trees were at least 5 m apart). The soil blocks encompassed organic layers (L, F and H) and mineral horizons (A and B) to obtain a more complete vertical representation of the EMF community (Rosling et al., 2003). In addition, one organic (L, F, H layers) and one mineral soil sample (upper mineral horizons A and B, from the bottom of the organic layer to 10 cm depth) were collected using a trowel from each target tree (5 trees) to evaluate relationships between EMF communities and soil properties. A total of 75 sample sets were collected (5 regions, 3 stands per region, 5 trees per stand) and stored at 4°C until processing (up to 3 months).
To recover Douglas-fir fine roots and ectomycorrhizas, each soil block was soaked in water overnight before being washed over a 4 mm screen. All fine-root branches (<2 mm diameter) and fragments (>3 cm length) were collected from the sieve and sorted by tree species (based on the morphological key described in Supplementary Table 2). To guarantee random selection of EMF root tips, root fragments from each soil block were laid out on a numbered grid, and grid cells were selected using a random number generator. We examined and cleaned root fragments with a soft brush under the microscope until c. 50 live fine-root tips/soil block were collected. Excised fine-root tips were classified as individual morphotypes (based on the presence of a fungal mantle and according to Goodman et al., 1996) or uncolonized (root hairs present, or no visible mantle, usually unbranched). All tips were frozen at −80°C but only 5-10 tips per morphotype, across all soil blocks for the entire study, were used for later DNA analysis.
To assess the effect of fine-root traits on EMF taxonomic and functional diversity (exploration type), a total of 365 Douglas-fir fine-root branches were divided into individual root orders following the morphometric classification approach of Pregitzer et al. (2002). In this classification, the most distal, unbranched roots are first order and second-order roots begin at the junction of two first order roots, and so on. First-order roots were either colonized by EMF or were unbranched and the root tips uncolonized. We avoided thicker, longer pioneer first-order roots (Zadworny and Eissenstat, 2011). Each first-order group (i.e., all first-order roots of a given branch) was scanned separately and analyzed for morphological features using WinRHIZO (total length, total surface area, average diameter and total volume; 400 dpi, 165-level gray scale, EPSON Perfection V800 Photo, STD 4800; WinRHIZO pro 2016 software, Regent Instruments Inc., Quebec City, Canada). For each first-order group, we determined specific root length, SRL (m g⁻¹), specific root area, SRA (cm² g⁻¹), and RTD (mg cm⁻³). In addition, a subsample of 180 first-order roots were randomly selected and analyzed for C and N concentration (%) (see Supplementary Text 1B). These traits were selected for analysis due to their expected relationships with plant investment into root construction and maintenance and their benefits for soil resource foraging and acquisition.
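The trait calculations reduce to simple ratios of the WinRHIZO scan outputs and the dry mass of each first-order group. A minimal sketch, with units following the text; the example values are invented, not measurements from the study:

```python
def root_traits(length_m, area_cm2, volume_cm3, dry_mass_g):
    """Economic fine-root traits from scan morphology and dry mass."""
    srl = length_m / dry_mass_g                  # specific root length, m g^-1
    sra = area_cm2 / dry_mass_g                  # specific root area, cm^2 g^-1
    rtd = (dry_mass_g * 1000.0) / volume_cm3     # root tissue density, mg cm^-3
    return srl, sra, rtd
```

For instance, a hypothetical first-order group of 2 m total length, 10 cm² surface area and 0.05 cm³ volume weighing 10 mg yields SRL = 200 m g⁻¹, SRA = 1000 cm² g⁻¹ and RTD = 200 mg cm⁻³.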
Molecular Analyses of Ectomycorrhizas
Five to ten frozen EMF root tips for each of the 97 putative morphotypes (across all soil blocks for the entire study) were pooled and ground in liquid nitrogen before extracting fungal DNA using the DNeasy® PowerSoil® kit, according to the manufacturer's instructions (Qiagen, 2017, ON, Canada). Fungal DNA extracts were sent to the Centre for Comparative Genomics and Evolutionary Bioinformatics (CGEB) at Dalhousie University, Halifax, Canada. High-throughput sequencing (Illumina MiSeq v3 chemistry, 600 cycles; Illumina, San Diego, CA, United States) was used to identify the target EMF OTUs. For the library preparation, amplicon fragments were PCR-amplified with fungal-specific primers (ITS86F, GTGAATCATCGAATCTTTGAA; ITS4R, TCCTCCGCTTATTGATATGC) targeting the internal transcribed spacer region 2 (ITS2) (variable length, avg. c. 350 bp) of ribosomal DNA (White et al., 1990;Turenne et al., 1999;Vancov and Keen, 2009). Primers contained Illumina barcodes and overhang adaptors, allowing for a one-step PCR preparation of sequence libraries.
DNA sequencing results were analyzed using the QIIME2 TM bioinformatics platform (Caporaso et al., 2010). The software package DADA2 was used to assemble bidirectional reads while filtering for quality and dereplicating sequences (Callahan et al., 2016). Prior to taxonomic assignment, representative sequences were exported from QIIME2 into fasta format and then ITS2 regions were extracted, chimeras were detected and non-ITS2 sequences were screened out using the software tool ITSx (Bengtsson-Palme et al., 2013). Extracted ITS2 sequence data were imported back into QIIME2 and the corresponding QIIME2 feature table was filtered to remove non-ITS sequence data. Demultiplexed, quality-controlled ITS2 sequence data were further screened for chimeras and then clustered into operational taxonomic units (OTUs) at 99% sequence similarity using a de novo clustering method with VSEARCH (Westcott and Schloss, 2015;Rognes et al., 2016).
For fungal species identification, we used the Basic Local Alignment Search Tool (BLAST) against the National Center for Biotechnology Information (NCBI) GenBank and UNITE public sequence databases (Abarenkov et al., 2010). We used two criteria to assign species or genus names to each morphotype: (i) only EMF OTUs were considered (no consideration of root-associated fungi such as saprotrophs, root endophytes, molds or pathogens), and (ii) the pairwise identity (i.e., the proportion of nucleotides that match exactly between two sequences) corresponding to the indicated EMF species had to be higher than 97%. In addition, morphotype characteristics were compared to reference photos from the Ectomycorrhizae Descriptions Database 1 and the DEEMY database 2 . Using this method, 82% of the morphotypes were identified to the species or genus level. For all but ten morphotypes, the assigned EMF species or genus corresponded to the EMF OTU with the highest number of reads; for the remaining ten morphotypes (morphotype IDs 81, 72c, 50, 55, 49, 34b, 31, 25, 3 and 2, see Supplementary Table 3), the EMF OTU with the highest number of reads had a low pairwise identity (<94%) or was likely a contaminant. In these cases, the EMF OTU with the second or third highest number of reads was chosen, but only if the morphology of the morphotype corresponded to the photos from the databases.
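The two assignment criteria amount to a small filtering routine: keep EMF-guild hits, rank by read count, and walk down until the identity threshold is met. A sketch, with a hypothetical record layout (OTU id, guild, read count, percent identity); the morphological cross-check against the reference databases is left out:

```python
def assign_emf_otu(otu_hits, min_identity=97.0):
    """Assign one EMF OTU to a morphotype: keep only EMF-guild hits,
    rank them by read count, and take the first whose BLAST pairwise
    identity exceeds the threshold; otherwise leave it unidentified."""
    emf_hits = [h for h in otu_hits if h[1] == "EMF"]
    # Highest read count first; fall through to lower-read hits when
    # the top hit's pairwise identity is too low (or it was filtered out).
    for otu_id, _guild, _reads, identity in sorted(emf_hits, key=lambda h: -h[2]):
        if identity > min_identity:
            return otu_id
    return None  # unidentified morphotype
```

With this routine, a saprotroph hit is ignored outright, and a high-read EMF hit with low identity is passed over in favor of the next-ranked EMF OTU, mirroring the fallback described for the ten problematic morphotypes.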
For each species or genus, exploration type (contact, short-distance, medium-distance smooth, medium-distance fringe and long-distance) was assigned after Agerer (2001, 2006) and compared to the published data from Ostonen et al. (2017), Pena et al. (2017), and Fernandez et al. (2017). For the genus Sistotrema, we referred to Ostonen et al. (2017) because it was not included in Agerer (2001, 2006). We assumed that exploration type was conserved within a genus as we did not study EMF genotypic trait variation. This assumption has been made in many other studies although plasticity of EMF mycelium can be substantial and should be taken into account more consistently (Hazard and Johnson, 2018;van der Linde et al., 2018). Hydrophobicity was assigned based on Lilleskov et al. (2011) and Fernandez et al. (2017).
Data Analyses
All statistical analyses were conducted in R version 3.5.1 (R Core Team, 2018) and results were considered statistically significant at P < 0.05. Fungal richness was examined across regions by estimating components of α-diversity: (i) observed species richness in each soil block and (ii) richness estimators: Chao1 (Chao, 1984), first- and second-order Jackknife (Burnham and Overton, 1979). Diversity patterns were examined by calculating the following diversity indices: the Shannon-Wiener diversity index H = −Σ pi ln pi, Shannon's evenness (E), and Simpson's index of diversity 1 − D, where D = Σ pi² and pi is the proportion of species i relative to the total number of species in a sample. We assessed the effect of region on EMF richness, evenness and diversity using a nested ANOVA (linear mixed effect model) with region as a fixed effect and site nested within region as a random effect using the function "lmer" from the lme4 package. We did not rarefy to the lowest sampling depth (i.e., we only collected 12 root tips in a soil block from Revelstoke) because rarefaction curves indicated that our sampling effort was sufficient as it resulted in EMF species saturation (Supplementary Figure 1).
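These indices are straightforward to compute from a vector of species abundances. The study computed them in R, so the following Python sketch is purely illustrative, with evenness taken as E = H / ln S (a standard convention; the text does not spell out its evenness formula):

```python
import numpy as np

def diversity_indices(counts):
    """Shannon-Wiener H = -sum p_i ln p_i, Shannon's evenness
    E = H / ln S, and Simpson's index of diversity 1 - D with
    D = sum p_i^2, for one sample of species abundances."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]                  # drop absent species
    p = counts / counts.sum()                    # relative abundances p_i
    H = -np.sum(p * np.log(p))
    E = H / np.log(len(p)) if len(p) > 1 else 0.0
    D = np.sum(p ** 2)
    return H, E, 1.0 - D
```

As a check, four equally abundant species give H = ln 4 ≈ 1.39, E = 1 and Simpson's index of diversity 1 − D = 0.75.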
To investigate the effects of environment and fine-root traits on the taxonomic and functional structure of EMF communities, we first calculated the variance inflation factor (VIF) for each environmental (MAT, MAP, soil C:N, CEC, pH, soil available phosphorus) and root trait (SRL, SRA, diameter, RTD, root C:N) predictor, to avoid multicollinearity among predictive variables, using the "vif" function from the usdm package (Naimi et al., 2014). Predictors with the highest VIF were sequentially dropped until all VIF values were below three (Zuur et al., 2010). This process removed CEC and soil N from the environmental predictors and SRL from the root trait predictors. Second, unidentified morphotypes (from which DNA was not extracted and for which a sequence was not found) were removed before all analyses on the basis that reconsideration of photographic evidence suggested that they were most likely to have been dead root tips. A Hellinger transformation was applied to species and exploration type data matrices. We used a distance-based redundancy analysis (db-RDA; Legendre and Anderson, 1999) to examine β-diversity based on Bray-Curtis dissimilarities using the "capscale" function in vegan. The best model was chosen utilizing forward model selection with permutation tests (P-value for variable retention = 0.05). The general form of the models was:

EMF Species or Exploration Type ~ Environmental Factors (e.g., MAT, MAP) or Root Traits (e.g., SRA, RTD)

Models were tested for significance using permutational multivariate analysis of variance (PERMANOVA, function "adonis" in vegan, 999 permutations), after assessing the multivariate homogeneity of region dispersions (function "betadisper" in vegan; Oksanen et al., 2018; Supplementary Figure 2). Significant PERMANOVA effects were assessed using post hoc pairwise contrasts (function "multiconstrained" from the package BiodiversityR; Kindt, 2018).
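The vegan-based pipeline is not reproduced here, but the core of the PERMANOVA test on Bray-Curtis dissimilarities can be sketched from first principles. This is a minimal one-way version of Anderson's pseudo-F permutation test; the study itself used "adonis" with forward model selection, strata, and post hoc contrasts, none of which are shown:

```python
import numpy as np
from itertools import combinations

def bray_curtis(X):
    """Pairwise Bray-Curtis dissimilarity matrix for a samples x species
    abundance table X."""
    n = X.shape[0]
    D = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        den = (X[i] + X[j]).sum()
        D[i, j] = D[j, i] = np.abs(X[i] - X[j]).sum() / den if den > 0 else 0.0
    return D

def permanova(D, groups, n_perm=999, seed=0):
    """One-way PERMANOVA: pseudo-F from sums of squared dissimilarities
    and a P-value from random permutations of the group labels."""
    groups = np.asarray(groups)
    n = len(groups)
    levels = np.unique(groups)
    a = len(levels)

    def pseudo_F(g):
        ss_total = (D[np.triu_indices(n, 1)] ** 2).sum() / n
        ss_within = 0.0
        for lev in levels:
            idx = np.where(g == lev)[0]
            sub = D[np.ix_(idx, idx)]
            ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
        ss_among = ss_total - ss_within
        return (ss_among / (a - 1)) / (ss_within / (n - a))

    F_obs = pseudo_F(groups)
    rng = np.random.default_rng(seed)
    exceed = sum(pseudo_F(rng.permutation(groups)) >= F_obs
                 for _ in range(n_perm))
    return F_obs, (exceed + 1) / (n_perm + 1)
```

Two synthetic groups of samples dominated by disjoint species sets should yield a large pseudo-F and a small permutation P-value.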
The db-RDA/PERMANOVA approach was not developed to account for nested data (here, sites are nested within regions). Thus, to complement these analyses, we used a two-way approach that only worked with two nested factors: (i) for the effect of sites within regions, we used PERMANOVA with permutations constrained within sites (strata = site in "adonis"), and (ii) for the effect of regions, we ran a nested analysis of variance with the function "nested.npmanova" from the package BiodiversityR. These two complementary analyses were run on the models:
EMF Species or EMF Exploration Types ~ Regions/Sites
In addition, to account for the presence of mean-variance relationships in multivariate community analyses, we built multivariate generalized linear models using the package "mvabund" (Wang et al., 2012). An offset (log of row sums) was added to the models to standardize the response variables and account for the unequal sample size. Models were run twice: with all the species and without the unresponsive species (|coefficient| < 5). Model significance was tested with a likelihood-ratio test and univariate P-values were adjusted for multiple testing using a step-down resampling procedure.
Identification and Taxonomic Diversity
In this study, 3914 fine-root tips were extracted from 75 soil blocks and sorted into 97 putative EMF morphotypes (on average, 4.0 ± 0.2 morphotypes per soil block), of which 82 EMF morphotypes were successfully sequenced. The sequencing of the 82 morphotypes resulted in 6,322,065 sequences that were clustered into 1,901 OTUs. These OTUs comprised the following guilds: EMF, saprotrophs, root endophytes, molds or pathogens. Considering all the guilds, the average number of OTUs per morphotype was 69.0 ± 3.6 (Supplementary Table 3). Of these 69 fungal OTUs per morphotype, on average, 10 OTUs were EMF (highlighted in red in Supplementary Table 3). After applying the criteria described in the section "Molecular analyses of ectomycorrhizas," we assigned a unique EMF OTU to each of the 82 morphotypes. We then obtained 54 unique EMF OTUs because some morphotypes were assigned the same OTU (Supplementary Table 4). Of these 54 EMF OTUs, 46% and 54% were identified to genus and species, respectively. Of the 54 EMF taxa, 91% were Basidiomycota and 9% were Ascomycota.
In addition, 33% were resupinate fungi, 13% were hypogeous and 54% were mushroom-forming fungi. Tips from the 15 EMF morphotypes for which no sequences were obtained (most likely representing dead root tips) represented 10% of all the mycorrhizal root tips.
Species accumulation curves showed that almost the entirety of species richness was recovered for Kamloops. Alternatively, we recovered c. 80% of the estimated species richness for the remaining regions (Supplementary Figure 1). Richness estimators (Chao1, Jack1, Jack2) were similar to the observed species richness, which confirms that only a small number of species were not accounted for with our sampling scheme (Supplementary Table 5A). Revelstoke had the highest species richness per region with 25 species, compared to 24 species in Salmon Arm and Williams Lake, followed by Nelson with 23 species and Kamloops with 20 species. We did not detect any differences in α-diversity among the five regions, where species richness averaged four EMF species/soil block for each region (Supplementary Table 5B). Similarly, species evenness and diversity estimated with the Shannon and Simpson diversity indices were low, averaging 0.8 and 0.5, respectively, and did not vary by region.
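The richness and diversity measures reported here follow standard formulas, e.g. Chao1 = S_obs + f1²/(2·f2), where f1 and f2 are the numbers of singleton and doubleton species. A minimal Python sketch is given below; it is illustrative only, and software implementations can differ slightly in bias corrections:

```python
import math

def chao1(counts):
    """Chao1 richness estimator: S_obs + f1^2 / (2 * f2), with a
    bias-corrected fallback when no doubletons are present."""
    counts = [c for c in counts if c > 0]
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def simpson(counts):
    """Simpson diversity expressed as 1 - D, with D = sum(p_i^2)."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts if c > 0)
```

For a perfectly even two-species sample, Shannon diversity equals ln 2 and Simpson (1 - D) equals 0.5, which matches the low values (0.8 and 0.5) reported above for communities dominated by a few taxa.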
Taxonomic Composition
Across the environmental gradients, the most abundant OTUs identified at the species level with >2% root tip abundance were C. geophilum (8.3%), Lactarius rubrilacteus (8%), Russula mordax (4%), Lactarius cf. resimus (2.5%), and Cortinarius cedriolens (Supplementary Table 4). The five regions had five species/genera in common: C. geophilum, Rhizopogon sp., Wilcoxina sp., Piloderma sp. and Russula sp. The vast majority of fine-root tips colonized by Russula sp. and Lactarius resimus were found in the wettest region (Revelstoke; Figure 2), whereas the majority of root tips colonized by C. geophilum, Tomentella sp., Russula mordax and Cortinarius cedriolens were found in the driest region (Kamloops). Almost half (43%) of the occurrence of Wilcoxina sp. and half of the occurrence of Suillus sp. were in the coldest region (Williams Lake). Alternatively, 24 and 11% of the occurrence of Wilcoxina sp. were in the warmest regions of Kamloops and Nelson, respectively, while the other half of Suillus sp. was also in warm regions (Salmon Arm and Kamloops). 75% of the occurrence of Lactarius rubrilacteus was in the warmest region (Nelson).

FIGURE 2 | Relative abundance (%) of ectomycorrhizal taxa on interior Douglas-fir among five regions in western Canada. Only the species/genera representing >2% of root tip abundance were included. For a given ectomycorrhizal species, numbers represent the percentage of root tips colonized by this species in each region.
Differences in climatic and edaphic conditions among the regions explained 12.2% (adjusted R² = 0.07) of the variation in the Douglas-fir EMF community composition (Table 2A). The first axis of the db-RDA was mostly explained by differences in MAT (score = -0.89) and soil C:N ratio (score = -0.55) and separated the EMF communities of the warmest region (Nelson, MAT = 7.3°C) from those in colder regions (Williams Lake, MAT = 3.4°C; Kamloops, MAT = c. 5.6°C; Revelstoke, MAT = 5.5°C; Figure 3A). The second axis of the analysis was best explained by the gradients in precipitation (score = 0.53) and soil acidity (score = -0.79) and separated fungal communities in drier (Williams Lake) and weakly calcareous soils (Salmon Arm) from those in wetter, strongly acidic Brunisolic soils (Revelstoke). Genera such as Tomentella, Cenococcum and Sebacina, classified as the short-distance exploration type, were commonly associated with low MAT, soils richer in N and mid to low MAP (Figure 3B) and, more generally, the short-distance exploration type taxa clustered in colder/drier regions. Fungal species such as Rhizopogon sp. and Suillus sp. with the long-distance type were exclusively found in drier climates while medium and medium-fringe explorers such as Hydnum sp., Cortinarius sp. and Piloderma sp. clustered in wet climates. Contact explorers such as Russula sp. and Lactarius sp. had a broader environmental range compared to the other exploration types but tended to be more abundant in wet climates. Uncolonized root tips occurred exclusively in regions with low MAT, low MAP and rich soils (high total N). Sites nested within regions did not have a significant effect on the EMF community composition, as revealed by the PERMANOVA with permutations restricted within sites (Supplementary Table 6A).
However, the nested PERMANOVA confirmed the significant effect of regions (P-value = 0.001; Supplementary Table 6B). Species-specific responses to the environmental gradients were obtained using multivariate generalized linear models (Figure 4; only the most responsive species, with |coefficient| > 5, were included in this model; see Supplementary Figure 3 for results with all the species). In agreement with the PERMANOVA model (except for soil pH), MAT, MAP and soil C:N ratio were all significantly related to shifts in EMF species across regions (Table 2B), yet EMF species were more responsive to MAP and MAT than to soil C:N ratio (larger effect size; Figure 4). Generally, most taxa in the Russulaceae responded in a similar fashion, with species increasing in abundance with MAT and MAP (except for Russula benwooii). Response to climate was not similar in the Cortinariaceae and Sebacinaceae. For instance, Cortinarius renidens and C. decipiens expressed opposite responses to MAT, positive and negative, respectively, but had matching responses to MAP. Similarly, the genus Sebacina responded moderately but positively to MAP and negatively to MAT, which was opposite to the response of the related genus Helvellosebacina.
Exploration Types
Considering all morphotypes, 36.4% of mycorrhizal root tips were contact-distance type, 25.5% were short-distance type, 20.0% medium-distance fringe type, 7.3% medium-distance smooth type, and 7.3% long-distance type (Supplementary Figure 4 and Supplementary Table 4). Precipitation and temperature explained 14% (adjusted R² = 0.14; Table 2C) of the variation in the dominant exploration types across the gradient. Only the first axis of the db-RDA was significant and represented the variation in MAP (score = -0.83; Figure 5A) and to a lesser extent MAT (score = -0.56). This axis separated long- and short-distance exploration types occurring in drier/colder regions from contact and medium-distance smooth exploration types in wetter regions (Figure 5B). We found no significant effect of site on exploration type abundance but found an effect of region (P-value = 0.02; result not shown), whereas multivariate generalized linear models did not yield significant results (result not shown).

FIGURE 3 | Distance-based redundancy analysis (db-RDA) sample (A) and species (B) ordinations based on ectomycorrhizal fungal abundance on interior Douglas-fir roots across five regions in western Canada. Only the variation explained by environmental variables is visualized. The ectomycorrhizal species are color-coded by fungal exploration type and are sized according to their relative abundance. The species epithet (when known) was removed to improve readability. MAP, mean annual precipitation; MAT, mean annual temperature; CN, soil carbon-to-nitrogen ratio. To aid comparison, predictor variables were standardized (z-scores) prior to analysis and are therefore unitless. Contact-, short- and medium-distance smooth exploration types are hydrophilic while medium-distance fringe and long-distance exploration types are hydrophobic.
Fine-Root Traits and Fungi
Douglas-fir fine-root morphological and chemical traits explained 5% (adjusted R² = 0.02; Table 3) of shifts in EMF species community structure. Only the first axis of the db-RDA was significant and was represented by the variation in first-order root C:N ratio (score = -0.72) and RTD (score = 0.83; Figure 6A). This axis separated the symbionts associated with fine roots of low RTD (in Williams Lake and Nelson) from those associated with fine roots of low C:N ratio (mainly in Kamloops). Fungal taxa such as Wilcoxina, Tomentella and Sebacina that were classified as contact and short exploration types tended to cluster together and were related to fine roots with low RTD (Figure 6B). Similarly, uncolonized root tips were all associated with fine roots of low RTD. Alternatively, medium-fringe explorers such as Cortinarius, Piloderma and Amphinema, as well as the short-distance explorer Cenococcum, tended to be more abundant on fine roots with high RTD. The multivariate generalized linear model did not yield significant results (result not shown).
DISCUSSION
The wide gradient in climate and soil fertility across southern British Columbia was ideal for investigating the extent of environmental filtering on EMF community taxonomic and functional structure (exploration type) across populations of Douglas-fir. Our first hypothesis was partly rejected because climate and soil fertility were not related to either EMF species richness or diversity. However, abiotic factors (MAT, MAP and soil C:N) did filter EMF community composition and the abundance of exploration types. As predicted, medium-fringe, and also contact explorers were more abundant in less fertile environments (as defined by lower pH, CEC, and available P), however, our second hypothesis was only partly confirmed because these exploration types were also associated with warmer or wetter environments. We did not find evidence for a functional connection between root diameter and EMF exploration type within Douglas-fir populations, which contradicts our third hypothesis.
Ectomycorrhizal Fungal Richness and Diversity
Contrary to our first hypothesis, we found no evidence that EMF diversity and richness varied across environmental gradients. However, most of the studies that found temperature or soil fertility to have an impact on EMF diversity used experimental treatments or covered continental gradients (Deslippe et al., 2011; Suz et al., 2014; Haas et al., 2018; Köhler et al., 2018; Rosinger et al., 2018). It is then possible that results from our regional-scale biogeographic gradient may not directly compare to these studies with regard to EMF diversity. Nonetheless, EMF richness was not affected by climatic transfer in a genecological study in temperate rainforests of coastal British Columbia (Kranabetter et al., 2015b) or by experimental warming in boreal forests of Minnesota (Mucha et al., 2018).
FIGURE 4 | Ectomycorrhizal fungal species-specific response to environmental factors based on multivariate generalized linear models. Only species that were responsive were added to the model (|coefficient| > 5). Circles (•) represent species coefficients and lines, 95% confidence intervals. Species were grouped by exploration type, indicated at the right-hand side of the plot. MAT, mean annual temperature; MAP, mean annual precipitation; CN, soil carbon-to-nitrogen ratio. To aid comparison, predictor variables were standardized (z-scores) prior to analysis and are therefore unitless. Hi, hydrophilic; Ho, hydrophobic.
In our study, EMF communities were dominated by host-generalist taxa such as C. geophilum, L. rubrilacteus and Russula, whereas taxa in Rhizopogon and Suillus lakei that are specific to the Pinaceae and Douglas-fir, respectively, represented only 8% of the total colonized root tips. This pattern could explain the lack of changes in richness and diversity because the host-generalist taxa tend to be less sensitive to environmental changes (Mucha et al., 2018). Alternatively, if rare or specialist taxa were to dominate EMF communities in our study, we could have observed a change in richness and diversity (Mucha et al., 2018). It is important to mention that root tip communities in our study provide a measure of the investment of the host and fungus in nutrient exchange sites and enable assessment of fungal species abundance. However, it does not necessarily represent the community of extraradical hyphae, especially in the case of long-distance colonized root tips that have higher mycelial space occupation than medium-distance smooth and short-distance colonized root tips (Weigt et al., 2012). In addition, shifts in Douglas-fir rooting depth across regions may impact EMF diversity estimates (Supplementary Table 1; Pickles and Pither, 2014). Our sampling scheme was consistent along the gradient, which may have hindered the detection of changes in EMF richness and diversity deeper in the soil profile.
Abiotic Drivers at the Regional Scale
FIGURE 5 | Distance-based redundancy analysis (db-RDA) sample (A) and exploration type (B) ordinations based on ectomycorrhizal fungal exploration type on interior Douglas-fir across five regions in western Canada. The ectomycorrhizal fungal exploration types are sized according to their relative abundance. MAP, mean annual precipitation; MAT, mean annual temperature; CN, soil carbon-to-nitrogen ratio. To aid comparison, predictor variables were standardized (z-scores) prior to analysis and are therefore unitless. Contact-, short- and medium-distance smooth exploration types are hydrophilic while medium-distance fringe and long-distance exploration types are hydrophobic.

In our study, temperature, precipitation and soil C:N ratio moderately but significantly explained some of the changes in EMF community assembly, despite the relatively small ranges in temperature and soil fertility encompassed here. Hence, our first hypothesis was partly confirmed. We found that EMF communities varied from communities dominated by Tomentella and Sebacina in the colder, more fertile regions (higher soil pH, CEC and total N) of Douglas-fir's natural range, to communities dominated by Hydnum sp., Cortinarius sp. or Russulaceae members in the warmer, less fertile regions of the range. These results add to those of Pickles et al. (2015a), who compared variation in EMF community primarily between inside and outside the range of Douglas-fir when studying EMF communities on Douglas-fir seedlings. Our finding that temperature, precipitation and soil C:N ratio appeared to act as filters explaining part of the regional variation in EMF assemblages was similar to that of Pena et al. (2017). However, large-scale studies in Europe have shown different responses.
For example, EMF community composition varied with temperature, pH and soil nutrients but not with precipitation in some European forests (Rosinger et al., 2018; van der Linde et al., 2018), whereas elsewhere precipitation, but not temperature, influenced EMF community structure (Jarvis et al., 2013; Suz et al., 2014).
In our study, the effect of temperature and soil fertility on EMF community structure could be related to co-evolutionary history between Douglas-fir populations and fungal symbionts (Gehring et al., 2017; Pither et al., 2018; Strullu-Derrien et al., 2018) because local adaptation of Douglas-fir populations is driven by temperature and soil N availability but can also be mediated by EMF (Rehfeldt et al., 2014; Kranabetter et al., 2015b; Pickles et al., 2015b). Temperature directly influences tree growth potential and may thus impact host C supply to fungal taxa. In turn this could induce a shift in EMF community structure across our study regions as EMF taxa differ in their C cost. Alternatively, temperature may have indirectly affected EMF assemblage through its impact on soil fertility such as availability of NO3− and NH4+ (Kranabetter et al., 2015b). In addition to temperature, fitness and growth of Douglas-fir populations have been shown to relate to soil N availability (Kranabetter et al., 2015b), and close affiliation of Douglas-fir populations with local EMF symbionts may maximize tree nutritional adaptations (Kranabetter et al., 2015a; Leberecht et al., 2015). In turn this may reinforce the filtering effect of soil C:N ratio on EMF assemblage observed in our study.

FIGURE 6 | Distance-based redundancy analysis (db-RDA) sample (A) and species (B) ordinations based on ectomycorrhizal fungal abundance on interior Douglas-fir across five regions. Only variation explained by Douglas-fir fine-root traits is visualized. The ectomycorrhizal species are color-coded by fungal exploration type and are sized according to their relative abundance. The species epithet (when known) was removed to improve readability. CN, fine-root carbon-to-nitrogen ratio; RTD, fine-root tissue density. To aid comparison, predictor variables were standardized (z-scores) prior to analysis and are therefore unitless. Contact-, short- and medium-distance smooth exploration types are hydrophilic while medium-distance fringe and long-distance exploration types are hydrophobic.
Taxonomic and Morphological Responses
We hypothesized that medium or long-distance explorers would be more abundant in colder climates or in soils with a high C:N ratio. Our results partly confirm this hypothesis as the hydrophobic, medium-fringe explorers Cortinarius sp., Piloderma sp. or Amphinema sp., and taxa in the Russulaceae classified as contact explorers, were more abundant in the warmer, less fertile environments of our study area, whereas the hydrophilic, short- and medium-distance smooth types were more frequent and abundant in colder and more fertile conditions. In our study system, this pattern of longer-distance explorers associated with warmer climates can be linked to higher host photosynthetic capacity that can sustain more C-demanding mycorrhizal symbionts (Jarvis et al., 2013; Fernandez et al., 2017; Köhler et al., 2018; Mucha et al., 2018; Rosinger et al., 2018). Furthermore, the positive response to temperature of the genera Cortinarius (except C. decipiens) and Lactarius is potentially related to the increased genetic capacity within these taxa for mobilization of N from organic matter (Bödeker et al., 2014; Kyaschenko et al., 2017). This may also hold true for the genus Russula, as Jones et al. (2010) and Kyaschenko et al. (2017) highlighted the positive correlation between Russula taxa and enzymes mobilizing N and P from organic matter. Lilleskov et al. (2002, 2018) further classified Cortinarius and Russula as "nitrophobic" taxa. However, Looney et al. (2018) suggested that some members in the Russulaceae have lost the capacity to access C from organic matter.
The supposition that taxa associated with warmer climates tend to have competitive advantages in low N environments is supported by our data. Russulaceae and Cortinariaceae (with the exception of C. decipiens) were positively related to both C:N ratio and temperature. Consequently, in less fertile environments, fungi with proteolytic abilities such as Cortinarius or hydrophobic fungi with rhizomorphs such as Piloderma may be more competitive because they preferentially use insoluble, organic N. The latter fungi are likely less beneficial in richer soils (high total N concentration) where extensive exploration is not required (Koide et al., 2014; Suz et al., 2014). Similarly, Douglas-fir trees in the colder, more fertile environments of our study area may favor hydrophilic symbionts which potentially cost the plant less C, such as EMF with short emanating hyphae or "nitrophilic" EMF such as Tomentella (Nilsson and Wallander, 2003; Tedersoo and Smith, 2013; Haas et al., 2018).
Ascomycetes such as Wilcoxina sp., Tuber sp. and the drought-tolerant C. geophilum exclusively occurred in the drier environments of our study area. This is in agreement with studies showing positive shifts in Ascomycete abundance from mesic to xeric conditions (Allison and Treseder, 2008; Fernandez et al., 2017). It has been hypothesized that Ascomycetes have a lower C cost to their host due to their relatively thin mantles and contact or short-distance exploration types. This may be beneficial in the drier regions of southern British Columbia where water and carbon availability for growth is reduced and where lower basal area increment of Douglas-fir is accompanied by lower fine-root carbohydrate reserve concentration (Wiley et al., 2018). The long-distance explorer Rhizopogon also exclusively occurred in the drier climates of our study area. This likely represents host preference for drought-tolerant EMF as this taxon can transport water over long distances (e.g., Parke et al., 1983). As drier soils limit the diffusion rate of resources, this pattern of spatial niche separation could be an adaptation to stressful conditions (Pickles et al., 2015a). In addition, regions with drier soils were also phosphorus-limited, yet C. geophilum and Rhizopogon both have a competitive advantage for plant nutrition in these conditions because the former possesses acid phosphatase for P hydrolysis and mobilization, while the latter can forage for P more efficiently. This is because long-distance explorers have enhanced capacity for soil exploration and may therefore exploit soil resources such as P more completely (Kyaschenko et al., 2017; Köhler et al., 2018).
Association Between Fine-Root and Mycorrhizal Traits
We expected fine-root diameter to be correlated with abundance of exploration types along the biogeographic gradient, yet we found RTD and fine-root C:N, but not diameter, to be significantly related to EMF community structure and patterns of exploration type frequency. Fine roots with lower tissue density occurred predominantly in colder regions (Defrenne et al., unpublished) and were more frequently uncolonized or colonized by EMF with short emanating hyphae. As we do not provide evidence for a functional connection between root diameter and mycorrhizal exploration types, EMF traits might not compensate for changes in fine-root structure.
Fungi with short emanating hyphae in colder conditions may instead serve a function to protect roots from environmental stresses (e.g., frost, pathogens; Marx, 1972). This would increase root persistence without investing as much in short hyphae construction and maintenance as in hyphae for long-distance exploration. In our study area, colder environments (excluding Revelstoke) were also poorer in available phosphorus, therefore, resistance to root pathogens, potentially conferred by short-distance type EMF, could be at the expense of efficient P exploitation, for which long-distance exploration types are thought to be better adapted (Köhler et al., 2018).
Alternatively, Zadworny et al. (2017) argue that lower RTD in absorptive roots could be due to an increased percentage of mycorrhizal mantle area in the root, which would then relate to enhanced capacity for resource uptake. The cost of producing new root tips with low RTD is also lower than producing roots with high RTD, which potentially leads to increased efficiency in nutrient acquisition and thus to a more precise foraging strategy. In addition to contributing to the RTD, the mycorrhizal mantle can contribute to the chemistry of first- and second-order roots (Ouimette et al., 2013). For example, a significant proportion of the fine-root N of the Kamloops region (higher root N concentration) could be of fungal origin, particularly from the mantle formed by medium-fringe taxa such as Cortinarius cedriolens (Figures 2, 6), which colonized fine-root tips from Kamloops and had a slightly negative response to soil C:N (Figure 3).
In any case, selection for complementarity in foraging strategy was not a major mechanism within ectomycorrhizal tree species in a study by Chen et al. (2018a) but could be more common in arbuscular mycorrhizal tree species (Liu et al., 2015; Zhang et al., 2019). Chen et al. (2018a) proposed bet hedging as a potential explanation because EMF traits selected for root pathogen protection may be at odds with those selected for resource foraging. Finally, the absence of a relationship between root diameter and exploration type abundance could be associated with the design of our study compared to that of Chen et al. (2018a). We used a regional-scale biogeographic gradient and selected an ectomycorrhizal tree host that further expressed moderate intraspecific variation in root diameter compared to variation in RTD or root C:N, whereas Chen et al. (2018a) surveyed several ectomycorrhizal tree species with large differences in mean root diameter and investigated links between roots and ectomycorrhizal traits at the level of the nutrient patch.
CONCLUSION
We combined fine-root and EMF trait measurements with next-generation sequencing across a biogeographic gradient. Douglas-fir EMF communities were dominated by host-generalist taxa which potentially explains the low variation in EMF α-diversity across environments. We did find, however, that temperature, precipitation and soil C:N ratio affected EMF community dissimilarities and exploration type abundance. Fungi with rhizomorphs (e.g., Piloderma sp.) or proteolytic abilities (e.g., Cortinarius sp.) dominated EMF communities in warmer and less fertile environments, whereas Ascomycetes (e.g., C. geophilum) or shorter distance explorers, which potentially cost the plant less C, were favored in colder/drier climates and richer soils (higher total N concentration). This pattern might be associated with co-evolutionary history between Douglas-fir populations and fungal symbionts, suggesting that the success of Douglas-fir as climate changes and stress increases may be dependent on maintaining strong associations with local communities of mycorrhizal fungi. At the regional scale, we did not find evidence for a functional connection between root diameter and EMF exploration types within Douglas-fir populations. Whether this implies no complementarity in resource foraging between fine roots and EMF is difficult to say, but this suggests that incorporating mycorrhizal symbiosis or at least EMF symbiosis into broader root trait frameworks may not be a suitable option if we are to represent the diversity of below-ground resource strategies. We thus encourage future research to simultaneously examine both root and fungal traits as separate entities.
DATA AVAILABILITY
The datasets generated for this study are available on request to the corresponding author.
AUTHOR CONTRIBUTIONS
CD, WR, BP, and SS designed the study. CED wrote the manuscript. WR and CD collected the fine-root and mycorrhizal trait data. SG and CD carried out the molecular analyses. TP and CD carried out the data analyses. TP, BP, and SS contributed to the data interpretation, and drafted and edited the manuscript. All authors contributed critically to the drafts and gave final approval for publication.
Improving continuity of patient care across sectors: study protocol of the process evaluation of a quasi-experimental multi-centre study regarding an admission and discharge model in Germany (VESPEERA)
Introduction Hospital stays are critical events as they often disrupt continuity of care. This process evaluation aims to describe and explore the implementation of the VESPEERA programme (Improving continuity of patient care across sectors: An admission and discharge model in general practices and hospitals, Versorgungskontinuitaet sichern: Patientenorientiertes Einweisungs- und Entlassmanagement in Hausarztpraxen und Krankenhauesern). The evaluation concerns the intervention fidelity, reach in targeted populations, perceived effects, working mechanisms, feasibility, determinants for implementation, including contextual factors, and associations with the outcomes evaluation. The aim of the VESPEERA programme is the development, implementation and evaluation of a structured admission and discharge programme in general practices and hospitals.

Methods and analysis The process evaluation is linked to the VESPEERA outcomes evaluation, which has a quasi-experimental multi-centre design with four study arms and is conducted in hospitals and general practices in Germany. The VESPEERA programme comprises several components: an assessment before admission, an admission letter, a telephonic discharge conversation between hospital and general practice before discharge, discharge information for patients, structured planning of follow-up care after discharge in the general practice and a telephone monitoring for patients with a risk of rehospitalisation. The process evaluation has a mixed-methods design, incorporating interviews (patients, both care providers who do and do not participate in the VESPEERA programme, total n=75), questionnaires (patients and care providers who participate in the VESPEERA programme, total n=475), implementation plans of hospitals, data documented in general practices, claims-based data and hospital process data. Data analysis is descriptive and explorative. Qualitative data will be transcribed and analysed using framework analysis based on the Consolidated Framework for Implementation Research. Associations between the outcomes of the programme and measures in the process evaluation will be explored in regression models.

Ethics and dissemination Ethics approval has been obtained by the ethics committee of the Medical Faculty Heidelberg prior to the start of the study (S-352/2018). Results will be disseminated through a final report to the funding agency, articles in peer-reviewed journals and conferences.

Trial registration number http://www.drks.de/DRKS00015183.

Trial status The study protocol on hand is the protocol V.1.1 from 18 June 2018. Recruitment for interviews started on 3 September 2018 and will approximately be completed by the end of May 2019.
Introduction Hospital stays are critical events as they often disrupt continuity of care. This process evaluation aims to describe and explore the implementation of the VESPEERA programme (Improving continuity of patient care across sectors: An admission and discharge model in general practices and hospitals, Versorgungskontinuitaet sichern: Patientenorientiertes Einweisungs-und Entlassmanagement in Hausarztpraxen und Krankenhauesern). The evaluation concerns the intervention fidelity, reach in targeted populations, perceived effects, working mechanisms, feasibility, determinants for implementation, including contextual factors, and associations with the outcomes evaluation. The aim of the VESPEERA programme is the development, implementation and evaluation of a structured admission and discharge programme in general practices and hospitals. Methods and analysis The process evaluation is linked to the VESPEERA outcomes evaluation, which has a quasi-experimental multi-centre design with four study arms and is conducted in hospitals and general practices in Germany. The VESPEERA programme comprises several components: an assessment before admission, an admission letter, a telephonic discharge conversation between hospital and general practice before discharge, discharge information for patients, structured planning of follow-up care after discharge in the general practice and a telephone monitoring for patients with a risk of rehospitalisation. The process evaluation has a mixedmethods design, incorporating interviews (patients, both care providers who do and do not participate in the VESPEERA programme, total n=75), questionnaires (patients and care providers who participate in the VESPEERA programme, total n=475), implementation plans of hospitals, data documented in general practices, claims-based data and hospital process data. Data analysis is descriptive and explorative. 
Qualitative data will be transcribed and analysed using framework analysis based on the Consolidated Framework for Implementation Research. Associations between the outcomes of the program and measures in the process evaluation will be explored in regression models. Ethics and dissemination Ethics approval has been obtained by the ethics committee of the Medical Faculty Heidelberg prior to the start of the study (S-352/2018). Results will be disseminated through a final report to the funding agency, articles in peer-reviewed journals and conferences. Trial registration number http://www.drks.de/DRKS00015183. Trial status The study protocol on hand is the protocol V.1.1 from 18 June 2018. Recruitment for interviews started on 3 September 2018 and will approximately be completed by the end of May 2019.
Introduction
Insufficient communication between hospitals and physicians in the outpatient sector may jeopardise the recovery process, lead to avoidable rehospitalisations 1 2 and induce adverse events. 3 These outcomes also affect health-related patient satisfaction and healthcare costs. 4 The legislator in Germany responded to this care problem by obligating hospitals to offer discharge management measures to all patients (Rahmenvertrag über ein Entlassmanagement beim Übergang in die Versorgung nach Krankenhausbehandlung nach § 39 Abs. 1 s.9 SGB V). The VESPEERA programme aims to support the implementation of this regulation. It develops, implements and evaluates a structured hospital admission and discharge programme between general practices and hospitals to avoid interruptions in the hospital admission and discharge process. An overview on the intervention components and the outcomes evaluation is given below and is described in detail elsewhere. 5 Subsequently, we first summarise the patient-directed interventions in the VESPEERA programme (Improving continuity of patient care across sectors: An admission and discharge model in general practices and hospitals, Versorgungskontinuitaet sichern: Patientenorientiertes Einweisungs-und Entlassmanagement in Hausarztpraxen und Krankenhauesern), the VESPEERA outcomes evaluation and the implementation strategies. Then we elaborate on the process evaluation in the remainder of this paper.
VESPEERA programme
Legislation in Germany is focused on hospital discharge and does not address admission management. The VESPEERA programme supports the implementation of structured discharge management and, among others, adds admission management procedures and further outpatient care after discharge in general practices. If admitted to the hospital electively, the general practitioner (GP) will conduct an assessment with the patient in order to generate an admission letter for the hospital, providing medical and social information on the patient before hospital admission. Intervention components in the hospital include a telephonic discharge conversation for defined high-risk patients between the hospital and the general practice as well as discharge information for patients. After discharge, another assessment will be conducted in the general practice to facilitate planning of follow-up care (such as medication plans, referrals to specialists, prescriptions for medication and medical products and devices) and to identify patients with an increased risk for rehospitalisation based on the HOSPITAL Score (a score to determine risk of 30-day rehospitalisation 6 ). These patients will be enrolled in a 3-month telephone monitoring. Patients who had an emergency admission will receive the assessment for planning of follow-up care and, if eligible, the telephone monitoring. Table 1 gives an overview on the intervention components and study arms.
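The HOSPITAL Score referenced above is a published risk score (reference 6 in the text). As an illustration only, a minimal sketch of its scoring logic might look as follows; the component weights and risk cut-offs are assumptions based on the originally published score, not data from the VESPEERA study:

```python
def hospital_score(hemoglobin_lt_12: bool, oncology_discharge: bool,
                   sodium_lt_135: bool, procedure_during_stay: bool,
                   nonelective_admission: bool, admissions_last_year: int,
                   length_of_stay_ge_5: bool) -> tuple[int, str]:
    """Illustrative HOSPITAL Score: returns total points and a 30-day
    rehospitalisation risk category (weights assumed from the published score)."""
    points = (1 * hemoglobin_lt_12          # Hemoglobin <12 g/dL at discharge
              + 2 * oncology_discharge      # discharge from Oncology service
              + 1 * sodium_lt_135           # Sodium <135 mmol/L at discharge
              + 1 * procedure_during_stay   # Procedure during hospital stay
              + 1 * nonelective_admission   # Index admission Type: non-elective
              # Admissions in the previous year: 0-1 -> 0, 2-5 -> 2, >5 -> 5
              + (0 if admissions_last_year <= 1
                 else 2 if admissions_last_year <= 5 else 5)
              + 2 * length_of_stay_ge_5)    # Length of stay >=5 days
    category = "low" if points <= 4 else "intermediate" if points <= 6 else "high"
    return points, category
```

In the VESPEERA programme, patients with an intermediate or high risk category are the ones enrolled in the 3-month telephone monitoring.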
VESPEERA outcomes evaluation
The VESPEERA programme is 'expected to reduce the number of avoidable rehospitalisations and emergency care contacts, to improve patient safety and patient involvement, to reduce overuse, underuse and misuse of health care, to improve the continuity of care and to improve interprofessional and cross-sectoral communication between patients, hospitals, general practices and the sickness fund "Allgemeine Ortskrankenkasse (AOK) Baden-Württemberg"'. 5 The intervention is evaluated in a quantitative outcomes evaluation with a quasi-experimental design. The primary outcome is the number of rehospitalisations due to the same indication (three-digit ICD-10-GM code (International Classification of Diseases, German Modification)) within a time frame of 3 months (90 days) after discharge to the outpatient sector. The following indicators have been defined as secondary outcomes: rehospitalisation due to the same indication within 30 days; hospitalisations due to ambulatory care-sensitive conditions; delayed prescription of medication and medical products/devices and referral to other health practitioner/s after discharge; utilisation of emergency or rescue services within 3 months; average care cost per year and patient participating in the VESPEERA programme.
Using AOK claims data, patient data from the CareCockpit and data collected in a questionnaire-based patient survey, a difference-in-difference model is applied for the primary analysis. The change of the primary outcome (before vs after the intervention) of each intervention group will be pairwise compared with the control group. A detailed description of the outcomes evaluation can be found in the corresponding study protocol. 5
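The pairwise difference-in-difference comparison described above reduces, in its simplest form, to a contrast of pre/post changes between an intervention arm and the control arm. A toy sketch, where the rates are hypothetical and not study data:

```python
def diff_in_diff(pre_treat: float, post_treat: float,
                 pre_control: float, post_control: float) -> float:
    """Difference-in-differences estimate: the change in the intervention
    group minus the change in the control group."""
    return (post_treat - pre_treat) - (post_control - pre_control)

# Hypothetical 90-day rehospitalisation rates, before vs after the programme:
effect = diff_in_diff(pre_treat=0.30, post_treat=0.22,
                      pre_control=0.29, post_control=0.27)
# effect is approximately -0.06: a 6-percentage-point larger reduction in the
# intervention arm than in the control arm
```

In the actual analysis this contrast would be embedded in a regression model (group, period and their interaction) so that covariates and clustering can be accounted for.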
Implementation strategies
Several strategies were applied to support the implementation of structured hospital admission and discharge management. The strategies are named according to the ERIC compilation (Expert Recommendations for Implementing Change) by Powell et al 7 and are reported using the recommendations by Proctor et al 8
as follows:
First, consensus discussions with representatives of all stakeholders, thus physicians, GPs, patients, sickness funds and researchers, have been conducted. All intervention components were thoroughly discussed in the developmental period concerning the relevance of items, wording of items and design of documents such as the patient discharge information. By involving users in the development of the intervention, acceptance and attractiveness of the programme are expected to increase.
Second, formal commitments are obtained by participating hospitals. Adaptability is promoted in order to facilitate the integration of study components into clinical processes. Therefore, each hospital will provide information on how they will ensure the identification of study patients, the use of the admission letter, the execution of the telephonic discharge conversation, the dissemination of the patient discharge information and the transmission of data to calculate the HOSPITAL Score. These formal commitments are obtained within 4 weeks after signing the participation agreement. Thereby, intervention fidelity as well as acceptance and attractiveness of the VESPEERA programme are expected to increase.
Third, the record system is changed by enhancing the PraCMan-Cockpit software that is routinely used in Baden-Wuerttemberg within the PracMan case management programme. 9 The resulting CareCockpit includes the additional VESPEERA module, which assists general practices with organising patient information, conducting the assessments and care planning, generating the admission letter and other documents, and administrating telephone calls within the telephone monitoring. The CareCockpit is software that works independently from the practice information system and is used by the Care Assistant in General Practice (Versorgungsassistentin in der Hausarztpraxis, VERAH) and the GP. Furthermore, the CareCockpit works as an electronical case report form for data analysis within the outcomes evaluation.
Fourth, train-the-trainer strategies are used in order to instruct GPs and VERAHs in software utilisation and study processes. Trainers are teams of two (GP and VERAH) who are experienced in training the PraCMan-Cockpit and who were instructed in handling the CareCockpit by the study central office. GPs and VERAHs who are interested in participating in the VESPEERA programme sign up for a one-time 2.5 hour training. GPs and VERAHs learn the handling of the software in a role-play format.
Fifth, in order to support GPs and VERAHs with implementation of all intervention components, educational materials are developed. Investigator site files are provided after participation in the training by the study central office. Investigator site files contain instructions and background information on the following: obtaining informed consent by patients, installation of the CareCockpit-software, an overview on frequently asked questions concerning the handling of the software, conduction of the intervention components and conduction of the patient survey. Furthermore, general practices are continuously provided with instructional video tutorials on handling the software by the study central office. Along with the trainings, educational materials are expected to increase intervention fidelity.
Sixth, both participating general practices and hospitals are provided ongoing consultation with the study central office and other consortium partners to support implementation. General practices and hospitals are repeatedly called by employees of the study central office and asked for the status of implementation and any problems that arise within the implementation process. General practices are offered refreshers on topics of the training such as the procedure for obtaining informed consent by patients, handling of the software and instruction of the intervention components. Thus, intervention fidelity is expected to increase.
Seventh, hospitals and general practices are provided feedback in the form of three benchmarking reports in September 2018, June 2019 and December 2019. The feedback reports are based on structured, quantified data sources (claims data, patient data from the CareCockpit and patient survey data), and are aggregated on a hospital or general practice level. These will be discussed in three moderated feedback meetings during the intervention period with care providers, where options for potential improvement will be developed. Feedback meetings are planned for September 2018, September 2019 and March 2020. Feedback meetings are moderated by the study central office with support by the other project partners. Care providers will have an active role in the meetings in a workshop format and report their perspective and experiences. Audit and feedback is a strategy to improve professional practice, which has mixed and overall moderate impacts on professional performance. 10 11 In this context, the feedback provided is expected to enhance intervention fidelity.

[Box 1, research questions (excerpt): b. Which contextual factors on system, hospital and practice level influenced the adoption of intervention components and outcomes of the programme? c. Which practices concerning admission and discharge management have been implemented in non-participating hospitals during the intervention period (for example in consequence of the new regulation on hospital discharge management)? 6. Dose-response associations: Which associations exist between the outcomes (as disclosed by the outcomes evaluation) and findings of the process evaluation?]
In addition, hospitals and general practices will receive fee-for-service for conducting patient-related care services as well as lump sum reimbursement for study organisation and participation in workshops and feedback meetings. General practices can invoice the care services as part of their usual invoice process, which is carried out at the end of each quarter year. Hospitals invoice the sickness fund 'Allgemeine Ortskrankenkasse' (AOK) Baden-Württemberg at the end of each quarter year. Lump sums are paid after participating in the feedback meetings. Fee-for-service gives an incentive to provide the different intervention components and thereby is expected to increase intervention fidelity. 12

VESPEERA process evaluation

The VESPEERA programme is a complex intervention which intends to impact on a range of outcomes. The impact on outcomes depends not only on the effectiveness of planned interventions but also on the degree of implementation of these interventions, the reach in relevant healthcare providers and patient populations and the moderating impacts of the organisational and societal context in which the interventions are applied. As described by the Medical Research Council, complex interventions are characterised by multiple, mutually interacting intervention components; multiple targeted groups of individuals and organisations; multiple outcomes and mediating factors; high impact of the organisational and societal context on outcomes; and a 'degree of flexibility or tailoring of the interventions'. 13 These features largely apply to VESPEERA. A large number of interventions are applied; various organisations in different care sectors are involved, each with structural conditions specific to the sector (e.g. remuneration systems). The effects of the interventions cover a range of domains. 5 Furthermore, hospitals are involved in the implementation within their organisation to tailor it to their local processes and structures.
We planned a process evaluation to provide insight into how well the intervention was implemented, why it did or did not work (ie, did or did not have an effect on outcomes), [13][14][15] what context factors had an influence on the implementation and outcomes, and thereby allow to improve 'transferability of potentially effective programs to other settings'. 16 Investigation of implementation outcomes such as reach (whether the targeted population participated as intended/the degree to which the targeted population participated) or intervention fidelity (whether the intervention was delivered as planned) can help to better understand the results of the outcomes evaluation. 17

Objectives

This process evaluation aims to examine the intervention fidelity, reach in targeted populations, perceived effects, working mechanisms, feasibility and determinants for implementation, including contextual factors, as well as associations with the outcomes evaluation, so that programme outcomes can be better interpreted. The research questions that are of interest within this process evaluation are illustrated in Box 1. Figure 1 shows the hypothesised working mechanisms of the VESPEERA programme and the primary areas of interest of the outcomes and the process evaluation, respectively. The planned procedures for the process evaluation will be described in detail below.
Methods of process evaluation

Study design
The process evaluation has an observational mixed-methods design, incorporating qualitative data from interviews and implementation plans with a description of the implementation in participating hospitals as well as quantitative data from questionnaires that are filled in for each patient in hospital, surveys and data collected through the CareCockpit software in general practices. This process evaluation is part of the VESPEERA study that lasts from October 2017 until March 2021. The planned time frame for the process evaluation started in July 2018; evaluations will be complete by the end of March 2021.
Study setting
The VESPEERA programme is implemented in 25 hospital departments and 115 general practices in a defined region in southern Germany. The process evaluation is carried out by the Department of General Practice and Health Services Research at the Heidelberg University Hospital.
Eligibility criteria
Patients who take part in the VESPEERA study and gave their informed consent to study participation and the outcomes evaluation can participate in the process evaluation. GPs and VERAHs who participate in the VESPEERA study can participate in the process evaluation. Hospital staff from participating hospitals have to work in one of the departments selected for VESPEERA implementation or have to be involved in the implementation process of the VESPEERA intervention components on a higher hierarchical level (such as hospital management). Physicians, nursing staff and hospital management from non-participating hospitals as well as GPs and VERAHs from non-participating general practices are included if they can provide insight into their regular admission and discharge processes and the implementation of the new legislation on hospital discharge management.
In addition, all participants have to be 18 years or older, have written and spoken German language skills and have to be able to give their informed consent to study participation in the process evaluation. Persons who are unable to give their consent are excluded from study participation.
Outcomes of the process evaluation and data sources

The process evaluation uses data from a mix of sources, which are described in detail in the following (an overview on the research questions phrased, outcomes and data sources used can be found as an online supplementary file).
Interviews
Qualitative interviews will be conducted with nursing staff, physicians and management staff from participating and non-participating hospitals, GPs and VERAHs from participating and non-participating general practices as well as participating patients after hospital stay. The interview guide addresses the intervention fidelity, perceived effects and factors influencing implementation (barriers, facilitators, contextual factors) as well as acceptance and attractiveness of the intervention.
Questionnaires
In addition, quantitative data result from structured surveys with participating GPs, VERAHs, physicians, nursing staff, management staff and patients after a hospital stay. The questionnaire will be designed based on the results of the qualitative interviews as well as other studies on process evaluations and will be piloted before use. This pseudonymised questionnaire will not contain any data that allows identification of participants' identity. Concepts addressed in the questionnaires will be, among others, reach (see research question 1), unintended effects (see research question 2), added value (see research question 3) and barriers and facilitators for implementation (see research question 5).
Hospital process data survey

As part of the VESPEERA programme, hospitals are asked to collect the HOSPITAL Score for patients to determine their risk of rehospitalisation. This questionnaire is expanded by questions used for the process evaluation. These include sociodemographic questions and questions on processes that are part of the study interventions that are implemented within hospitals (identification of VESPEERA patients, utilisation of the VESPEERA admission letter, telephonic discharge conversation with the general practice). Data from the hospital process data survey will be used to analyse intervention fidelity for intervention components within hospitals.
Hospital implementation plans
In order to facilitate the integration of study components into clinical processes, different approaches are suitable for different hospitals. Therefore, each hospital will provide information on how they will ensure the identification of study patients, the use of the admission letter, the execution of the telephonic discharge conversation, the dissemination of the patient discharge information and determination of the HOSPITAL Score. Hospital implementation plans will be used to analyse intervention fidelity for intervention components within hospitals.
Patient data
For the outcomes evaluation, patient data from the CareCockpit are linked with claims-based data from AOK Baden-Württemberg and data from the hospital process data survey. This data set will be provided for the process evaluation. These data provide information on the study arm that the patient belongs to as well as patient characteristics, the pseudonym generated in the CareCockpit for data linkage, diagnoses, the medical question for admission, information on previous antibiotic prescriptions, living situation, long-term care-related items (such as scales for activities of daily living and instrumental activities of daily living), medical information (such as pain, wounds, alarming symptoms for medical emergencies, PHQ-2 (Patient Health Questionnaire) instrument for mental disorders screening), compliance to medicinal therapy, the items of the HOSPITAL Score as well as process data (provision of information to patients, information on whether any follow-up care has been initiated and successfully executed). The patient data set will be used for the analysis of reach and intervention fidelity as well as dose-response associations. The following indicators are used as outcomes for the analysis of reach and intervention fidelity:
► Proportion and description of patients who participated in VESPEERA compared with all targeted persons who meet the inclusion criteria.
► Proportion of persons enrolled in the GP-centred care programme (HZV, hausarztzentrierte Versorgung) who have been admitted to a participating hospital by a participating practice, for whom a new patient account has been created in the CareCockpit and for whom a complete admission letter including a medication plan was generated and was given to the patient to take along, compared with all participating HZV-insured persons in participating practices with planned hospital admissions.
► Proportion of participating patients who have been discharged from a participating hospital to their GP, for whom at the time of discharge the HOSPITAL Score has been determined, compared with all participating patients who have been discharged from a participating hospital.
► Proportion of participating patients for whom the assessment for planning of follow-up treatment has been conducted compared with all participating patients.
► Proportion of participating patients who have been enrolled in the follow-up telephone monitoring due to an intermediate or high risk for rehospitalisation and for whom at least two phone calls have been conducted within the given timeframe of 3 months, compared with all participating patients.
► The degree to which the intervention components in hospitals have been implemented and offered as compared with the intention.
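The reach and fidelity indicators listed above are all simple proportions over patient records. A generic helper could compute them as sketched below; the record field names are hypothetical illustrations, not the actual CareCockpit schema:

```python
def proportion(records, numerator, denominator=lambda r: True):
    """Share of records satisfying `numerator` among those satisfying
    `denominator`; returns None if the denominator set is empty."""
    base = [r for r in records if denominator(r)]
    if not base:
        return None
    return sum(1 for r in base if numerator(r)) / len(base)

# Hypothetical patient records (field names are illustrative only):
patients = [
    {"discharged": True,  "hospital_score_done": True,  "assessment_done": True},
    {"discharged": True,  "hospital_score_done": False, "assessment_done": True},
    {"discharged": False, "hospital_score_done": False, "assessment_done": False},
]

# Eg, HOSPITAL Score coverage among discharged participating patients:
score_coverage = proportion(patients,
                            numerator=lambda r: r["hospital_score_done"],
                            denominator=lambda r: r["discharged"])
# score_coverage = 0.5
```

Each indicator in the list then differs only in the predicates passed for numerator and denominator.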
Sample size
The sample for the qualitative study is planned to reach saturation of data; the planned numbers are expected to be sufficient. The study sample for interviews on a hospital level consists of management staff, physicians and nursing staff and will be stratified by region and hospital size. On a practice level, GPs, VERAHs and patients will be recruited from participating practices, stratified by practice size, region and gender. In addition, staff from non-participating hospitals and general practices will be interviewed. This is important as interventions on a systems level can influence the effects of the evaluated care model. Table 2 gives an overview on the planned sample size for interviews. The sample for the quantitative survey study comprises all participating practices and hospitals (full study population) and a sample of n=200 patients for explorative data analysis (see table 3). The sample size of patients was restricted for feasibility reasons.
Recruitment
Within the process evaluation, participants will be recruited for interviews and written surveys.
Recruitment for qualitative interviews
Personnel from non-participating hospitals will be recruited by contacting the hospital management. A purposeful sample of hospitals will be selected, among others based on region, top-level versus basic care and previous interest to participate in VESPEERA. GPs and VERAHs from non-participating general practices will be recruited based on a list of all GPs who participate in GP-based care outside of the intervention region.
A purposeful sample will be selected based on region, practice size and gender. All participating general practices are asked to recruit eligible patients, as they are not known to the study central office. By using a response coupon, eligible interview participants from all stakeholder groups can declare their interest in participating in an interview. They will then be contacted by the study central office and be provided with an information letter and the written consent form.
Recruitment for the survey

Personnel from participating hospitals will be recruited by the contact person at the hospitals. The contact persons will be provided with information letters, written informed consent forms and paper-based questionnaires and will be asked to hand them out to eligible personnel as defined by the study central office. All participating general practices will be sent the information letters, informed consent forms and paper-based questionnaires for GPs and VERAHs and will be asked to fill them in. Patients will be recruited by the general practices, as they are not known to the study central office. GPs will be provided with information letters, informed consent forms and paper-based questionnaires and will be asked to hand them out to eligible patients.

Data collection and management

Interviews

Interviews will be conducted as face-to-face or telephone interviews by researchers of the study central office. Interviews will last 30 min maximum and will be conducted using a semi-structured interview guide. In exceptional cases, for instance if problems within the recruitment process arise, written qualitative interviews consisting of open-ended questions might be used. All interviews will be audio-recorded, transcribed verbatim and stored on a secured server of the study central office. Transcripts will contain pseudonymised data only.
Questionnaires
Paper-based questionnaires are mailed to physicians, VERAHs, nursing staff and management staff from participating hospitals, GPs and patients. The filled-in questionnaires will be sent by mail using an enclosed post-paid envelope to the study central office, where they will be scanned and digitally stored on a secured server. Reminders for data collection of both interviews and questionnaires will be sent out to all potential participants one to two times via fax, mail or post.
Hospital process data survey

Hospitals fill in the hospital process data survey on the conduction of all intervention components for each case at the time of the patients' discharge, using the form they use to collect data for the HOSPITAL Score used in the VESPEERA study.
The hospitals can either integrate the questionnaire into their hospital information system as an electronic questionnaire (transfer to the aQua-Institut via secure file transfer protocol servers) or fill in paper-based questionnaires that are sent to the aQua-Institut via mail using enclosed post-paid envelopes.
Hospital implementation plans

Participating hospitals will hand in a description of their individual implementation plan to the study central office.
Patient data
During the intervention period, patient data from the CareCockpit are continuously collected for the purpose of data analysis. Data from the CareCockpit are transferred along with claims-based data each quarter year.

Data analysis

Data analysis for the process evaluation is descriptive and explorative. Qualitative data will be transcribed according to established standards and will be analysed with regard to the research questions with framework analysis using the software MAXQDA. 18 The framework used for data analysis is the Consolidated Framework for Implementation Research (CFIR). 19 A deductive approach is chosen to assign paraphrases from the interviews to the themes and subthemes of the CFIR. Then, inductive coding within the CFIR themes is carried out and subthemes specific to the project are generated. The CFIR was chosen as it is a comprehensive framework that takes into account many of the aspects that need to be considered when evaluating the implementation of a complex intervention in healthcare organisations.
Quantitative survey data and the indicators for the intervention fidelity will be analysed descriptively. Correlations between the outcomes of the process evaluation and the outcomes evaluation will further be analysed using multilevel regression models. Using patient data, response (eg, rehospitalisations within 30 days after discharge) will be related to dose of the implementation interventions (eg, transmission of an admission letter to the hospital), taking clustering of patients in primary care practices into account. As the analysis is explorative, we refrain from a detailed pre-specified analysis plan.
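As a toy illustration of the dose-response idea (not the multilevel regression model itself), dose and response can be aggregated to the practice level, which respects the clustering of patients within practices, and the aggregates can then be related to each other. Practice identifiers and values below are hypothetical:

```python
from collections import defaultdict

def practice_level_dose_response(rows):
    """Aggregate per-practice mean dose (eg, share of admissions with an
    admission letter) and mean response (eg, 30-day rehospitalisation),
    given rows of (practice_id, dose, response)."""
    by_practice = defaultdict(list)
    for practice_id, dose, response in rows:
        by_practice[practice_id].append((dose, response))
    return {p: (sum(d for d, _ in v) / len(v), sum(r for _, r in v) / len(v))
            for p, v in by_practice.items()}

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical patient rows: (practice, admission letter sent, rehospitalised)
rows = [("A", 1, 0), ("A", 1, 0), ("A", 0, 1),
        ("B", 0, 1), ("B", 0, 1), ("B", 1, 0)]
agg = practice_level_dose_response(rows)
doses, rates = zip(*agg.values())
r = pearson(doses, rates)  # negative: more implementation, fewer rehospitalisations
```

The actual analysis would instead fit a regression with practice-level random effects, which additionally allows adjusting for patient characteristics.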
Patient and public involvement
Patients were actively involved in the development of all intervention components, as described in the 'Implementation strategies' section. With the 'Gesundheitstreffpunkt Mannheim e.V.' as consortium partner, an organisation representing patient interests is involved in all stages of the study (funding application, design of the study, conduct of intervention components, interpretation of results, dissemination of results).
Discussion
This process evaluation aims to provide insight into the implementation process of the VESPEERA programme in the participating general practices and hospital departments as well as the determinants influencing the degree of implementation. The results will contribute to adjusting the VESPEERA programme after the completion of all evaluations for a possible implementation into routine care. By relying on the GP as a gatekeeper to further healthcare and by proposing communication structures, the VESPEERA programme is expected to improve continuity of care.
Continuity of care is a complex concept with no clear definition. 20 However, recurring components of continuity of care include the first contact with a primary care provider, ie, gatekeeping, information continuity ('the capacity of that information to travel with the patient and throughout the health system, between providers and over time' 21 ) and longitudinal care provider continuity. 2 20 By improving continuity of care, patient outcomes are expected to improve. In a systematic review, Huntley et al found that continuity of care, ie, seeing the same GP, reduced utilisation of emergency departments and emergency hospital admissions. 22 Furthermore, in another systematic review, van Loenen et al showed that aspects of primary care such as a gatekeeping role and provider continuity are associated with a lower risk of avoidable hospitalisations due to ambulatory care-sensitive conditions. 2 Huntley et al 22 and van Loenen et al 2 included mostly observational studies in their reviews on the effects of organisational features of primary care on hospitalisations and emergency care use. With a quasi-experimental approach and a thorough process evaluation, the VESPEERA programme is expected to contribute to the literature on the effects of continuity of care and care coordination on several patient outcomes.
Within this process evaluation, perspectives of a broad range of stakeholders are considered. Furthermore, interviews allow for gaining in-depth understanding of experiences with the VESPEERA programme and communication processes, whereas questionnaires allow for a higher sample size. Thus, this serves to understand the broad implementation of a complex intervention.
However, no linkage between interview and questionnaire data with data sources of the outcome evaluation is intended. The intervention fidelity and barriers and facilitators to implementing the intervention therefore cannot be linked with patient-individual outcomes.
Ethics, data protection and security, and dissemination

A data protection concept is part of the VESPEERA contractual agreement between consortium partners and has been approved by a data security officer. The regulations of the European General Data Protection Regulation are met.
Dissemination of the results of this study is planned through the final report to the funding agency, articles in peer-reviewed journals as well as relevant national and, if relevant, international conferences.

Acknowledgements We thank all consortium partners of the VESPEERA study: 'AOK Baden-Württemberg' for overall project organisation and consortium leadership, 'University Hospital Heidelberg, Department for General Practice and Health Services Research' for project coordination, execution of the study and all study central office-related issues, 'aQua-Institut' for data management and preparation and execution of the patient survey, 'HÄVG Hausärztliche Vertragsgemeinschaft AG' for organisation of train-the-trainer events, 'University Hospital Heidelberg, Institute for Medical Biometry and Informatics, Dept. for Medical Biometry' for statistical expertise and statistical analyses and 'Gesundheitstreffpunkt Mannheim e.V.' for involvement of patients in the development of intervention components. Moreover, we thank participating hospitals, general practitioners and patients. We would like to thank Annika Baldauf and Marion Kiel for organisation and support of all study central office-related issues.
Contributors JF, AK and MW drafted the original manuscript. CS, MW, JF, AK and JS have planned the study, planned the data collection and have designed all instruments for data collection. LU provided statistical expertise. SK is involved in data collection of patient data. All authors read and approved the final manuscript.
funding This work was supported by the Federal Joint Committee (G-BA), Innovation Fund, grant number 01NVF17024.
Competing interests JS holds stocks of the aQua-Institut.
Patient consent for publication Not required.
Ethics approval The study protocol has been submitted to and approved by the ethics committee of the Medical Faculty Heidelberg prior to the start of the study (S-352/2018).
Provenance and peer review Not commissioned; externally peer reviewed.
open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
In vitro cytotoxic activity of Phyllanthus amarus Schum. & Thonn.
https://doi.org/10.30574/wjbphs.2021.6.2.0050

Abstract

Some bioactive compounds from plants are excellent sources of anticancer drugs. These natural phytochemicals are used in active research for cancer prevention and treatment. In the present study, in vitro anticancer activity was evaluated using a dimethylformamide leaf extract of Phyllanthus amarus, as its GC-MS analysis revealed many active principles which exhibited good antimicrobial and antioxidant properties. There are reports that anti-proliferative activity is always coupled with antioxidant activity. The anticancer activity of the P. amarus leaf extract was tested against HCT 15 and T47D cell lines, and the inhibitory effect on the HCT 15 cell line was found to be greater than on the T47D cell line. With increasing concentration of the extract, the percentage viability of both cell lines decreased. The anticancer activity of the leaf extract of P. amarus is comparable to the positive control drug doxorubicin. N-Hexadecanoic acid, lignans and polyphenol compounds in the leaf extract may be responsible for the anticancer activity. These phytochemicals block cancer cell propagation by controlling cancer stem cells and can influence all stages of cancer development.
Introduction
Cancer is a serious disease caused by invasive growth of cells which tend to proliferate rapidly, causing malignancies in the body. Cancer cells are formed as a result of imbalance in body metabolism and destroy healthy cells of our body [1]. Cancer cells overlook the signals that normal cells take, thereby disturbing the process of programmed cell death (apoptosis). After cardiovascular disease, cancer is the leading cause of death [2], as it is related with complex mechanisms at both the cellular and molecular level. Natural sources such as plants, micro-organisms and marine organisms serve about 60% of the total anti-cancer agents [3]. Medicinal herbs exhibit anti-cancer activity because of their excellent anti-oxidant and immunomodulatory properties. A phytochemical-rich diet lessens cancer risk by 20%. These phytochemicals are generally natural plant-derived secondary metabolites. Many challenges are faced during cancer treatment as patients undergo various types of therapies such as radiation, chemotherapy etc. In low and middle income countries (LMIC), over 20 million new cancer cases are expected annually as early as 2025 [4]. Anti-cancer agents prevent or repress the growth of cancer. Plant-derived compounds are gaining insight for exploiting novel pathways in cancer therapy. Plant-based drugs have fewer side effects, and research in plants is in continuous progress for isolating the active principles for curing various types of cancers in an effective way. Medicinal plants such as Podophyllum peltatum, Taxus brevifolia, Camptotheca acuminata, Cephalotaxus harringtonia, Catharanthus roseus etc. have been reviewed, and compounds such as betulinic acid, combretastatin and silvestrol were found to be responsible for anticancer activity [5]. Lignan compounds present in plants proved to play an important role in reducing the risk of many cancers such as breast cancer, uterus cancer, ovarian cancers and other estrogen-related cancers [6].
P. amarus is a well-known medicinal herb belonging to family Euphorbiaceae. It is commonly known as Bhui amla and contains many active constituents which cure a wide range of diseases. Alkaloids, terpenoids and other secondary metabolite compounds of the plant are found to exhibit anti-cancer properties [7]. Lignan compounds such as phyllanthin, hypophyllanthin, nirtetralin and phyltetralin have been identified from extracts of P. amarus [8]. This plant contains an array of flavonoids such as quercetin, rutin, kaempferol, astragalin and quercitrin [9]. Tannin precursors such as gallic acid and ellagic acid, simple tannins such as 1,6-digalloylglucopyranose and 4-O-galloylquinic acid, and complex tannins such as geraniin are also present in this plant [8]. Plant parts of Phyllanthus amarus were studied using different solvents for their metabolic fingerprinting studies [9] [10], antimicrobial activity [11] [12] and anti-oxidant activity [13] [14]. Gallic acid and quercetin inhibited the cell cycle in G1 phase by inactivating cdc25A phosphorylation, thereby inducing apoptosis in tumor cells. This is done by activating caspase activity and reducing cyclin D production [15] [16]. Phytochemical investigation revealed the presence of lignan and polyphenol compounds [17] [12]. The presence of gallic acid, geraniin, quercetin and rutin in P. amarus, which exhibit anticancer properties, has been reported [18]. Plant-derived bioactive compounds exert anticancer effects by different modes of action, such as interferon-induced cell death, cell death by DNA damage, and autophagy induction by activation of tyrosine kinase and proteasome inhibitors [19]. Programmed cell death can also be induced by glucocorticoid hormones, and cellular metabolism can be disconnected to limit tumor cell growth.
Other mechanisms include triggering apoptosis by inducing mutations in cancer cells, interfering with DNA replication by use of alkylating agents, cross-linking DNA strands by use of heavy metals, blocking nucleic acid synthesis in the cell cycle by use of antimetabolite compounds, DNA fragmentation by using mixtures of glycopeptides, preventing reunion of the DNA double helix during replication by stabilizing the DNA topoisomerase II complex, preventing DNA replication by topoisomerase inhibitors, and inhibiting mitotic spindle formation by blocking tubulin synthesis [20]. Phyllanthus species were also reported to arrest the cell cycle at different phases. The mechanism of action is due to interference with the protein synthesis and DNA synthesis machinery. The property of uncontrolled proliferation is lost when the cell cycle is arrested [21]. P. amarus was found to treat breast cancer by diminishing the mitochondrial membrane potential, increasing reactive oxygen species intracellularly, upregulation of caspase-3 expression and down-regulation of Bcl-2 expression [18]. Four species of Phyllanthus, viz. P. amarus, P. niruri, P. urinaria and P. watsonii, were found to exhibit an antiangiogenic effect by inhibiting capillary tube formation and an anti-metastatic effect by decreasing the ability of cancer cell invasion and migration [22]. Hepatocarcinoma was treated using Phyllanthus urinaria by inducing the production of TNF, inhibiting the expression of antiapoptotic genes and preventing secondary tumor development [23]. GC-MS analysis of dimethylformamide leaf extract revealed the presence of a compound, n-hexadecanoic acid, that is responsible for anticancer activity. Our study reports the effect of dimethylformamide extract of P. amarus leaf on cancer cell lines (HCT 15 and T47D).
Anti-cancer activity of dimethylformamide leaf extracts of P. amarus
The cytotoxic effect of dimethylformamide leaf extract was tested against the Human colorectal adenocarcinoma cell line (HCT 15) and the Human breast cancer cell line (T47D). Dulbecco's Modified Eagle's Medium (DMEM) was used for cell culture studies.
Cancer cell line development and maintenance
The Human colorectal adenocarcinoma (HCT 15) and Human breast cancer (T47D) cell lines were obtained from the National Center for Cell Sciences (NCCS), Pune (Table 1). HCT 15 and T47D cancer cell lines were maintained in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 4.5 g/l glucose, 2 mM L-glutamine and 5% fetal bovine serum (FBS) at 37 °C in a 5% CO2 incubator (Thermo Scientific, USA). Cells from exponentially growing culture were used for experimental purposes. The cancer cell lines were maintained successfully under the required laboratory conditions and used for further studies, i.e., trypan blue staining and the MTT assay.
Cell viability assay using Trypan blue dye
Trypan blue is a blue-coloured acid dye containing two azo chromophore groups, widely used for determining the number of viable cells present in a population. This dye does not penetrate the intact cell membrane of live cells grown in culture.
Procedure:
The cells were suspended in a known quantity of PBS and the cell count was adjusted to 1×10⁶ cells/ml. The P. amarus leaf extract at various concentrations (0-0.2 mg/ml) was prepared from the stock solution (10 mg/ml) in phosphate buffered saline (PBS). PBS was added to all the tubes containing plant extract and the final volume was made up to 800 µl with PBS. 100 µl of HCT 15 or T47D cell suspension in phosphate buffered saline was added to the tubes. A control containing solvent alone was also prepared. The tubes were then incubated at 37 °C for 3 hours, after which 100 µl of trypan blue was added to all test tubes. Cell counts were done using the trypan blue dye exclusion method on a haemocytometer by counting stained (non-viable) and unstained (viable) cells. Cell viability assay results were expressed as percentage of cell viability [24].
Calculation:
Viability percentage = (live cell count / total cell count) × 100
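As an illustrative sketch (not part of the original protocol), the viability calculation above can be written in Python:

```python
def viability_percentage(live_count: int, total_count: int) -> float:
    """Percentage of viable (unstained) cells in a trypan blue count."""
    if total_count <= 0:
        raise ValueError("total cell count must be positive")
    return live_count / total_count * 100.0

# Hypothetical count: 88 unstained cells out of 100 counted
print(viability_percentage(88, 100))  # → 88.0
```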
Cytotoxic assay of cell lines by MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay

The MTT assay is based on the capacity of the mitochondrial succinate dehydrogenase enzyme in living cells to reduce the yellow, water-soluble substrate 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) into an insoluble, purple-blue coloured formazan product which can be measured. Since reduction of MTT can only occur in metabolically active cells, the level of activity is a measure of the viability of the cells [25]. The trypsinized cells from a T-25 flask were seeded in each well of a 96-well flat-bottom tissue culture plate at a density of 5×10³ cells/well in growth medium and cultured at 37 °C in 5% CO2 to allow the cells to adhere. After 48 h incubation, the supernatant was discarded and the cells were pre-treated with growth medium, subsequently mixed with different concentrations of leaf extract (0-0.2 mg/ml) in dimethylformamide solvent, and then incubated for 48 h at 37 °C in a CO2 incubator. The supernatant growth medium was removed by aspiration. To each well, 5 µl of fresh MTT (0.5 mg/ml in PBS) was then added, followed by incubation for 2 h at 37 °C in the dark. Formazan crystals formed after incubation were solubilised with 100 µl of DMSO and incubated for 30 min. The absorbance (OD) of the coloured product in the culture plate was read at a wavelength of 570 nm on an ELISA reader (Thermo Scientific Multiskan, USA). Optical density is directly correlated with cell quantity. Culture medium along with DMSO without plant extract was used as control. Absorbance values lower than those of the control cells indicate a reduction in the rate of cell proliferation. The anticancer drug doxorubicin was used as a reference compound for determination of anticancer activity. IC50 values were calculated from the graph of percentage inhibition against sample concentration.
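The percentage-inhibition and IC50 calculations described above can be sketched as follows. The dose-response points below are hypothetical and only illustrate the linear interpolation between bracketing concentrations; they are not the paper's actual readings:

```python
def percent_inhibition(od_treated: float, od_control: float) -> float:
    """Growth inhibition relative to the untreated control, from MTT ODs."""
    return (1.0 - od_treated / od_control) * 100.0

def ic50_by_interpolation(concs, inhibitions):
    """Linearly interpolate the concentration giving 50% inhibition.

    `concs` must be sorted ascending, with `inhibitions` crossing 50%
    somewhere between two adjacent points.
    """
    for (c0, i0), (c1, i1) in zip(zip(concs, inhibitions),
                                  zip(concs[1:], inhibitions[1:])):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response points (µg/ml, % inhibition)
concs = [10, 50, 100, 150, 200]
inhib = [8.9, 30.0, 47.0, 68.0, 87.2]
print(round(ic50_by_interpolation(concs, inhib), 1))  # → 107.1
```

In practice a four-parameter logistic fit (e.g. in GraphPad Prism) is usually preferred over straight-line interpolation, but the interpolation conveys the idea of reading IC50 off the inhibition-versus-concentration curve.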
Statistical analysis
The triplicate data were expressed as mean ± standard deviation. Data were statistically analyzed with GraphPad Prism (ver. 7.0.1).
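A minimal sketch of the triplicate summary (mean ± sample standard deviation); the OD readings below are invented for illustration:

```python
import statistics

# Hypothetical triplicate OD readings for one extract concentration
triplicate = [0.412, 0.398, 0.405]

mean = statistics.mean(triplicate)
sd = statistics.stdev(triplicate)  # sample standard deviation (n - 1)
print(f"{mean:.3f} ± {sd:.3f}")  # → 0.405 ± 0.007
```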
Anti-cancer activity of dimethylformamide leaf extract of Phyllanthus amarus against HCT 15 and T47D cell lines
The cytotoxic effect of the plant extract was tested by the MTT assay, which showed the effect of its secondary metabolites on cell viability in HCT 15 and T47D cancer cell lines. The percentage viability of the HCT 15 and T47D cancer cells was calculated before treating with various concentrations of plant extract. The HCT 15 cell line showed 97.87% viability and the T47D cell line showed 98.44% viability, which are most suitable for performing MTT cytotoxicity studies (Table 2). Cell viability percentages were determined by trypan blue staining using a haemocytometer. After treating the cells with different concentrations of leaf extract (0-0.2 mg/ml), it was found that with increasing concentration of leaf extract, the percentage viability of the cancer cell lines decreased (Figs. 1 and 2).
MTT assay results of Phyllanthus amarus dimethylformamide leaf extract
Plotted graphs of the anticancer activity of dimethylformamide leaf extract of P. amarus on HCT 15 and T47D cell lines using the MTT assay are shown in Fig. 5 and Fig. 6, respectively. With increasing concentration of dimethylformamide leaf extract from 10 to 200 µg/ml, growth inhibition of the HCT 15 cancer cell line increased from 8.86% to 87.22% (Table 3, Fig. 3) and growth inhibition of the T47D cancer cell line increased from 8.39% to 86.01% (Table 4, Fig. 4). This shows that the leaf extract inhibits the growth of the cancer cells, and the inhibitory effect on the HCT 15 cell line is comparatively greater than on the T47D cell line. The IC50 value was found to be 106.7 µg/ml for the HCT 15 cancer cell line and 90.3 µg/ml for the T47D cancer cell line, which shows that the anticancer activity of the leaf extract of P. amarus is comparable to the positive control drug doxorubicin, and it can be used as a good anticancer agent.
Figure 3 Cytotoxic activity of various concentrations of leaf extract of P. amarus against HCT 15 cell line by MTT assay

Table 3 Inhibition percentage of HCT 15 cell line at various concentrations of leaf extract by MTT assay

In a previous report, the compounds n-hexadecanoic acid, N-methoxy-N-methyl acetamide, ursa-9(11),12-dien-3-ol and gamma-sitosterol were found to be responsible for biological activity [26]. There is no report in the literature regarding testing the cytotoxic potential of dimethylformamide leaf extract of P. amarus against Human colorectal adenocarcinoma (HCT 15) and Human breast cancer (T47D) cell lines by MTT assay. n-Hexane and chloroform extracts of heartwood of Albizia adianthifolia [27] and ethanolic extract of whole plant of Aristolochia krysagathra [28] were reported to have trans-13-octadecanoic acid and 9,12-octadecadienoic acid compounds respectively, which belong to the methyl ester group and display anti-cancer properties. The presence of a diterpene compound called 2-cyclopenten-1-one, 2-hydroxy in ethanolic leaf extract of Bruguiera cylindrica showing antimicrobial and anticancer properties has been reported [29]. They also reported phenolic compounds possessing antimicrobial, antioxidant and anticancer properties. Whole plant extracts of Calanthe triplicata in ethyl acetate revealed the presence of 4H-pyran-4-one, 2,3-dihydro-3,5-dihydroxy-6-methyl, which is a flavonoid compound exhibiting antimicrobial, anticancer and anti-inflammatory properties [30]. Phytol, which is a diterpene compound, is reported to have antioxidant, anti-inflammatory and anticancer properties in ethanolic leaf extract of Cyperus rotundus [31]. Antioxidant, anti-inflammatory and anticancer properties in methanolic leaf extract of Eupatorium triplinerve have also been reported [32].
An isoprenoid compound named squalene showing antioxidant and anticancer properties, and a ketone compound named 3,7,11,15-tetramethyl-2-hexadecen-1-ol exhibiting anti-inflammatory and anti-cancer properties, were reported in methanolic leaf extract of Eupatorium triplinerve [32]. These extracts also contain 2,6,10-trimethyl-14-ethylene-14-pentadecane, which exhibits anti-fungal, antibacterial and anti-cancer properties, and 5-hydroxymethylfurfural, which exhibits anti-oxidant and anti-cancer properties. Tetradecanoic acid, which is myristic acid, is found to have antioxidant and anticancer properties in ethanol extract of bark of Hugonia mystax [33] and in ethanolic leaf extracts of Hyptis lanceolata Poir. [34]. Steroid compounds 7-dehydrodiosgenin and lupeol were also found to have anticancer properties in n-hexane and chloroform extracts of stem bark of Pterocarpus angolensis [27]. An ester, 10-dotriacontyl pentafluoropropionate, is found to have cytotoxic properties in wild and mutant strains of Schizophyllum commune [35]. All the above reports show that various esters, acids, phenols, flavonoids, isoprenoids, ketones, hydrocarbons and steroid compounds present in plants are responsible for various types of activities, and confirm that the esters, acids and flavonoid compounds which occur in P. amarus are the major cause of its anti-cancer activity and other medicinal properties.
Conclusion
These findings establish Phyllanthus amarus as a potential plant exhibiting anticancer properties. Scientific study of the plant for its chemical constituents helps in understanding their functional properties for the development and design of effective anticancer drugs. Safe and effective use of this plant in the prevention of cancer supports its recommendation as a dietary supplement. The mechanism of action against cancer cell proliferation and the regulation of the apoptotic pathway remain to be investigated; the preliminary in vitro data presented here provide a basis for removing the barriers towards in vivo applications.
Hearing Loss and Oxidative Stress: A Comprehensive Review
Hearing loss is a prevalent condition affecting millions of people worldwide. Hearing loss has been linked to oxidative stress as a major factor in its onset and progression. The goal of this thorough analysis is to investigate the connection between oxidative stress and hearing loss, with an emphasis on the underlying mechanisms and possible treatments. The review addresses the many forms of hearing loss, the role of reactive oxygen species (ROS) in causing damage to the cochlea, and the auditory system's antioxidant defensive mechanisms. The review also goes over the available data that support the use of antioxidants and other methods to lessen hearing loss brought on by oxidative stress. We found that oxidative stress is implicated in multiple types of hearing loss, including age-related, noise-induced, and ototoxic hearing impairment. The cochlea's unique anatomical and physiological characteristics, such as high metabolic activity and limited blood supply, make it particularly susceptible to oxidative damage. Antioxidant therapies have shown promising results in both animal models and clinical studies for preventing and mitigating hearing loss. Emerging therapeutic approaches, including targeted drug delivery systems and gene therapy, offer new possibilities for addressing oxidative stress in the auditory system. The significance of this review lies in its comprehensive analysis of the intricate relationship between oxidative stress and hearing loss. By synthesizing current knowledge and identifying gaps in understanding, this review provides valuable insights for both researchers and clinicians. It highlights the potential of antioxidant-based interventions and emphasizes the need for further research into personalized treatment strategies. Our findings on oxidative stress mechanisms may also affect clinical practice and future research directions.
This review serves as a foundation for developing novel therapeutic approaches and may inform evidence-based strategies for the prevention and treatment of hearing loss, ultimately contributing to improved quality of life for millions affected by this condition worldwide.
Introduction
Over 1.5 billion individuals globally suffer from hearing loss, and estimates suggest that number will rise to 2.5 billion by 2050 [1]. Beyond only making it difficult to hear, hearing loss can have a significant negative influence on a person's quality of life and cause social isolation, depression, and cognitive deterioration [2,3]. Although several variables contribute to the development of hearing loss, oxidative stress has been identified as playing a key role in the pathophysiology of hearing loss [4]. When the body's capacity to eliminate reactive oxygen species (ROS) through antioxidant defenses is out of balance, oxidative stress occurs [5]. Cellular damage, malfunction, and finally cell death are the outcomes of this imbalance [6]. Numerous illnesses, such as cancer, neurological diseases, and cardiovascular disorders, have been linked to oxidative stress [7,8]. Studies have shown that oxidative stress has detrimental effects on the auditory system and is linked to age-related hearing loss, noise-induced hearing loss, and ototoxicity [9,10]. Because of its high metabolic requirements and exposure to a variety of external stressors, including noise and ototoxic medications, the auditory system is particularly susceptible to oxidative stress [9]. Hearing loss is mostly caused by oxidative damage to the inner ear, specifically to the cochlea [11]. Sound waves must be converted into electrical signals by sensory hair cells, which are found in the highly specialized cochlea [12]. Due to their low ability to regenerate, these hair cells are especially vulnerable to oxidative damage [13]. Research has repeatedly demonstrated that oxidative stress markers are more prevalent in hearing-impaired people than in normal-hearing people [14,15]. In comparison to healthy controls, individuals with sensorineural hearing loss had far greater amounts of malondialdehyde, a marker of lipid peroxidation, according to a study by Karlidag et al. [16]. In a similar vein, Neri et al.
[17] showed that patients with age-related hearing loss exhibited decreased antioxidant enzyme activity and elevated levels of oxidative stress indicators. Considering the significant effects that hearing loss has on people and society at large, it is imperative to comprehend the connection between oxidative stress and hearing loss in order to create preventative and therapeutic measures that will work. The objectives of this thorough study are to investigate the mechanisms that lead to hearing loss caused by oxidative stress, the protective effect of antioxidants against this damage, and possible future paths for clinical research and treatment. Examining the causes of reactive oxygen species (ROS) in the auditory system, as well as the antioxidant defense mechanisms that counteract them, is essential. Additionally, it is important to explore the specific pathways through which oxidative stress results in hearing loss, such as inflammation, ischemia-reperfusion injury, hair cell death, and mitochondrial dysfunction. The potential of antioxidant therapies, including pharmaceutical drugs, gene therapy, and dietary antioxidants, in the prevention and treatment of hearing loss will also be covered in this review. Although there have been several evaluations on the connection between oxidative stress and hearing loss, our work makes several unique contributions to the area. This review offers a comprehensive understanding of oxidative stress in hearing loss by synthesizing recent developments from several fields, including audiology, otolaryngology, biochemistry, and molecular biology. We extend the body of research by providing a thorough examination of novel interventions not fully addressed in earlier reviews, such as gene therapy, stem cell treatments, and drug delivery methods based on nanotechnology. By bridging the gap between fundamental research and practical applications, our method offers guidance on how current discoveries in oxidative stress processes can impact clinical practice and direct future treatment approaches. We evaluate the field's present research approaches objectively, noting their advantages, disadvantages, and potential areas of development for further research. By conducting this critical evaluation, we are able to pinpoint important knowledge gaps and suggest particular topics for further study, which may serve as a roadmap for the subsequent round of studies in this area. Additionally, we provide a distinct viewpoint on how the knowledge of oxidative stress in hearing loss might be converted into useful clinical guidelines for treatment, early intervention, and prevention. Our review differs from the previous literature due to its translational focus, which makes it beneficial for researchers and clinicians alike. By addressing these facets, this study offers a forward-looking perspective on the function of oxidative stress in hearing loss while also consolidating present knowledge. Our goal is to provide a thorough, insightful, and clinically applicable analysis that sets our work apart from previous evaluations and makes a significant contribution to the field's advancement.
Methods
Several electronic databases, including PubMed, Scopus, Web of Science, and Google Scholar, were used to perform an extensive literature search. Several key terms were used in the search strategy, including "hearing loss", "oxidative stress", "reactive oxygen species", "antioxidants", "cochlea", "hair cells", "mitochondria", "inflammation", "ischemia-reperfusion", "noise-induced hearing loss", "age-related hearing loss", and "ototoxicity". To guarantee that pertinent research was included in the search results, Boolean operators (AND, OR) were employed. No year or language restriction was applied in the search. In order to find any more pertinent research that might have gone unnoticed during the original database search, the reference lists of the included papers were additionally manually searched.

We included the following:
1. Original research papers, reviews, and meta-analyses that looked into the connection between hearing loss and oxidative stress.
2. Research that investigated the pathways (such as mitochondrial malfunction, ischemia-reperfusion injury, inflammation, and hair cell death) that underlie oxidative stress-induced hearing loss.
3. Research on the use of pharmaceutical drugs, gene therapy, and nutritional antioxidants as antioxidant therapies to treat and prevent hearing loss.
4. Studies involving humans, animals, and in vitro models, all of which were taken into consideration for inclusion.

We excluded the following:
1. Conference abstracts, opinions, case reports, and letters to the editor.
2. Research that did not concentrate on the connection between hearing loss and oxidative stress.
3. Research that was not published in English.

The relevance of the titles and abstracts of the identified publications was checked by two separate reviewers. After that, full-text publications for the chosen studies were obtained, and their eligibility was further evaluated in accordance with the inclusion and exclusion criteria. After the data were retrieved, a narrative approach was used to synthesize the information, with an emphasis on the mechanisms underlying hearing loss caused by oxidative stress and the potential benefits of antioxidant therapies for both prevention and treatment of this condition. To provide a more nuanced view of the problem, subgroup analyses were conducted based on the kind of hearing loss (e.g., age-related, noise-induced, ototoxicity) and the specific antioxidant intervention (e.g., dietary antioxidants, pharmaceutical agents, gene therapy). Following a thorough analysis, the results were presented in relation to the state of the literature, emphasizing the main conclusions, restrictions, and implications for further study and clinical application. The review also suggested future research directions to close these knowledge gaps and advance the topic of oxidative stress and hearing loss.
Results
The preliminary database search produced 1247 records. There were 987 unique records left after duplicates were eliminated. A total of 723 records that did not fit our inclusion criteria were excluded as a result of title and abstract screening. Figure 1 provides a summary of our literature search findings.
After evaluating 264 full-text articles for eligibility, 156 of them were disqualified for a variety of reasons, such as publications written in languages other than English or a lack of emphasis on the connection between oxidative stress and hearing loss. There were 108 studies in the final review.
How Are Oxidative Stress and the Auditory System Related?
Reactive oxygen species' (ROS) generation and the body's capacity to counteract them through antioxidant defenses are out of balance in oxidative stress [18]. ROS are very reactive substances that have the ability to harm lipids, proteins, and DNA, among other cellular constituents, ultimately resulting in cellular malfunction and demise [19]. NADPH oxidases, xanthine oxidase, and the mitochondrial electron transport chain are the main producers of reactive oxygen species (ROS) [20]. Even though ROS are physiologically significant for the immune system and cell signaling, excessive ROS generation can cause oxidative stress and aid in the etiology of many disorders, including hearing loss [21]. The cochlea, in particular, is extremely vulnerable to oxidative stress because of the special anatomical and physiological traits of the auditory system [22]. There are several primary sources of ROS in the auditory system. The increased ROS generation and mitochondrial activity are the results of the high metabolic needs of the cochlea, particularly in the sensory hair cells and stria vascularis [23]. Via a number of pathways, including NADPH oxidase activation, glutamate excitotoxicity, and mitochondrial dysfunction, exposure to high noise levels can increase the production of reactive oxygen species (ROS) in the cochlea [9,24]. By raising ROS production and lowering antioxidant defenses, some pharmaceuticals, including cisplatin and aminoglycoside antibiotics, can cause oxidative stress in the cochlea [25,26]. Decreased antioxidant capacity and mitochondrial dysfunction in the cochlea are age-related alterations that lead to oxidative stress and hearing loss [4]. The auditory system is equipped with a sophisticated network of antioxidant defense mechanisms to mitigate the harmful effects of reactive oxygen species [27]. The main enzymes that neutralize ROS and shield the cochlea from oxidative damage are glutathione peroxidase (GPx), catalase (CAT), and superoxide dismutase (SOD)
[28]. Glutathione (GSH), vitamins C and E, and coenzyme Q10 are examples of small-molecule antioxidants that are essential for scavenging reactive oxygen species (ROS) and preserving redox balance in the cochlea [29,30]. The cochlea's cellular redox equilibrium is preserved by the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, which is a key regulator of antioxidant gene expression [30]. Oxidative stress occurs when the generation of ROS surpasses the antioxidant defenses of the auditory system, resulting in a variety of pathological alterations in the cochlea [4]. Direct damage from oxidative stress to sensory hair cells can result in their malfunction and eventual death, which is the main cause of hearing loss [31]. Hearing loss can be made worse by a vicious loop of increased ROS generation and energy depletion brought on by damage to mitochondrial DNA, proteins, and lipids caused by ROS [22]. Oxidative stress can also trigger inflammatory pathways within the cochlea, leading to the activation and recruitment of immune cells. This process involves the activation of resident immune cells such as macrophages and leukocytes in the cochlea, the upregulation of pro-inflammatory cytokines, such as TNF-α, IL-1β, and IL-6, and increased vascular permeability [32]. Activated immune cells can also produce more ROS, creating a feedback loop that further exacerbates cochlear damage. The stria vascularis, which is in charge of preserving the endocochlear potential, is especially susceptible to oxidative stress, and when it malfunctions, hearing loss may result [32].
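The enzymatic defenses named above (SOD, CAT, GPx) act in sequence on superoxide and hydrogen peroxide. Their textbook reactions, summarized here for reference rather than reproduced from the review itself, are:

```latex
% Canonical reactions of the main cochlear antioxidant enzymes
% (standard biochemistry, included as a reference summary)
\begin{align*}
\text{SOD:}\quad & 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \\
\text{CAT:}\quad & 2\,\mathrm{H_2O_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{O_2} \\
\text{GPx:}\quad & \mathrm{H_2O_2} + 2\,\mathrm{GSH} \longrightarrow \mathrm{GSSG} + 2\,\mathrm{H_2O}
\end{align*}
```

Note the hand-off: SOD converts superoxide into hydrogen peroxide, which CAT and GPx then reduce to water, with GPx consuming glutathione (GSH) in the process.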
What Are the Mechanisms of Hearing Loss Induced by Oxidative Stress?
Hearing loss is primarily caused by oxidative stress, and there are multiple ways that excessive generation of reactive oxygen species (ROS) can harm the auditory system. The main mechanisms underpinning oxidative stress-induced hearing loss will be discussed in this part, including inflammation, ischemia-reperfusion injury, hair cell death, and mitochondrial dysfunction. The main places in cells where reactive oxygen species (ROS) are produced are the mitochondria, and oxidative stress-induced hearing loss is largely caused by dysfunctional mitochondria [21] (Figure 2). The cochlea depends heavily on mitochondrial function for energy production due to its high metabolic needs, especially in the sensory hair cells and stria vascularis [23]. On the other hand, excessive ROS production can harm proteins, lipids, and mitochondrial DNA, which impairs mitochondrial activity and produces more ROS [22]. Superoxide (O2•−) is frequently the first reactive oxygen species (ROS) to develop, mostly as a result of NADPH oxidase activation or mitochondrial electron transport chain leakage. The cochlea's unique NADPH oxidase, NOX3, is a significant generator of superoxide. NOX3 produces a lot of superoxide when it is activated by ototoxic stimuli or noise stress, which starts a chain reaction of oxidative processes. Iron-sulfur clusters in proteins can be directly damaged by superoxide, which can result in the release of free iron and the inactivation of enzymes, exacerbating
oxidative stress. Superoxide dismutates to create hydrogen peroxide (H2O2), which is less reactive than superoxide but more persistent and diffusible. H2O2 can cross membranes in the cochlea and oxidize different parts of the cell. It is important for redox signaling because it affects transcription factors, like NF-κB and AP-1, which control cochlear cell apoptotic and inflammatory responses. Furthermore, Fenton reactions involving excess H2O2 and transition metals can produce extremely corrosive hydroxyl radicals. The most reactive ROS, hydroxyl radicals (OH•), destroy cellular macromolecules without discrimination. The lipid-rich outer hair cell membranes in the cochlea are especially vulnerable to the damaging effects of hydroxyl radicals, which can cause lipid peroxidation cascades that impair membrane integrity and cellular function [33]. This process is particularly important in noise-induced hearing loss because it can quickly cause the death of outer hair cells through the acute production of hydroxyl radicals. In the end, the auditory system may experience energy depletion, cellular malfunction, and cell death as a result of this destructive cycle of mitochondrial damage and ROS generation [34]. Research has demonstrated a correlation between age-related hearing loss and mutations and deletions in the mitochondrial DNA, underscoring the significance of mitochondrial function in preserving auditory health [35,36]. The loss of sensory hair cells in the cochlea, which are especially susceptible to oxidative stress, is the main cause of hearing loss [31]. Both necrotic and apoptotic mechanisms can cause hair cell death when exposed to oxidative stress [37]. Cell enlargement, organelle malfunction, and rupture of the plasma membrane, which results in the release of cellular contents and inflammation, are the passive processes that define necrosis [38]. On the other hand, apoptosis is a route of programmed cell death that includes chromatin condensation, caspase
activation, and the creation of apoptotic bodies [39]. By triggering caspases, releasing cytochrome c, and activating mitochondrial permeability transition pores (mPTPs), ROS can cause apoptosis in hair cells [40]. Furthermore, oxidative stress can trigger additional apoptotic pathways, including the JNK and p53 signaling cascades, which can further exacerbate the death of hair cells [41,42]. Hearing loss can result from processes that are intimately related to oxidative stress and inflammation (Figure 3) [32]. Pro-inflammatory cytokines, chemokines, and adhesion molecules can be produced as a result of ROS activating inflammatory pathways, such as the NF-κB and MAPK signaling cascades [43]. These inflammatory mediators have the ability to draw in and activate immune cells, like neutrophils and macrophages, which can worsen tissue damage and oxidative stress in the cochlea [44]. Furthermore, hair cell loss and auditory impairment can be exacerbated by the activation of resident immune cells in the cochlea, such as fibrocytes and macrophages, which can prolong the inflammatory response [45,46]. The etiology of age-related hearing loss has also been linked to inflammaging, a chronic low-grade inflammatory response linked to aging [47]. Another way oxidative stress might cause hearing loss is by ischemia-reperfusion damage [48]. Because the cochlea is so sensitive to changes in blood flow, ischemia followed by reperfusion can cause a burst of ROS to be produced [49]. Hair cells, spiral ganglion neurons, and other cochlear structures may sustain oxidative damage as a result of this surge in ROS generation, which may outweigh the cochlea's antioxidant defenses [50]. An important modulator of cellular antioxidant responses is the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway. Nrf2 translocates to the nucleus in response to oxidative stress, where it binds to Antioxidant Response Elements (AREs) and upregulates the expression of genes related to glutathione production and
other antioxidant enzymes. Chronic oxidative stress, however, has the potential to overpower or compromise this defense system. Poly(ADP-ribose) polymerase-1 (PARP-1) is activated by ROS-induced DNA damage, especially in mitochondrial DNA. Although PARP-1 aids in DNA repair, overactivation of the protein can cause cochlear cells to experience an energy crisis and NAD+ depletion, which ultimately leads to cell death. ROS cause lipid peroxidation in the membranes of cochlear cells, especially influencing the outer hair cells that are rich in phospholipids. Reactive aldehydes, like 4-hydroxynonenal (4-HNE), are produced by this mechanism, and they have the ability to create protein adducts and spread cellular harm. Oxidative stress in the cochlea can set off an inflammatory response that activates leukocytes and macrophages, two types of native immune cells. Pro-inflammatory cytokines such as TNF-α, IL-1β, and IL-6 are upregulated as a result [7]. These cytokines can worsen the harm that oxidative stress causes to the cochlea and are known to have a role in the inflammatory process [8]. Furthermore, increased production of reactive oxygen species (ROS) by the activated immune cells might result in a feedback loop that exacerbates cochlear damage [32]. Transcription factors, like NF-κB and AP-1, which control inflammatory and apoptotic responses in cochlear cells, can also be activated by hydrogen peroxide (H2O2), a reactive oxygen species [34]. Additionally, oxidative stress has the ability to initiate apoptotic pathways, including the JNK and p53 signaling cascades, which may contribute to the cochlea's hair cells dying [41,42]. Antioxidant therapy can mitigate the considerable hair cell loss and auditory impairment that ischemia-reperfusion injury can induce in animals [9,50]. Ischemia-reperfusion damage and oxidative stress have been linked to cochlear blood flow impairment in humans, including presbycusis and abrupt sensorineural hearing loss [4,51].
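The superoxide-to-hydroxyl-radical cascade described in this section follows standard radical chemistry. As a reference sketch (these are textbook reactions, not equations given in the review):

```latex
% ROS-generating reactions underlying the cascade described in the
% text: dismutation of superoxide, the Fenton reaction, and the net
% iron-catalyzed Haber--Weiss reaction
\begin{align*}
\text{Dismutation:}\quad & 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \\
\text{Fenton:}\quad & \mathrm{Fe^{2+}} + \mathrm{H_2O_2} \longrightarrow \mathrm{Fe^{3+}} + \mathrm{OH^{\bullet}} + \mathrm{OH^-} \\
\text{Net Haber--Weiss:}\quad & \mathrm{O_2^{\bullet-}} + \mathrm{H_2O_2} \longrightarrow \mathrm{O_2} + \mathrm{OH^{\bullet}} + \mathrm{OH^-}
\end{align*}
```

This is why the free iron released from damaged iron-sulfur clusters matters: it supplies the Fe2+ that converts relatively stable H2O2 into the highly destructive hydroxyl radical.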
Different Types of Hearing Loss and Oxidative Stress: Pathophysiological Mechanisms
It is essential to investigate the role that oxidative mechanisms play in different forms of auditory impairment in order to improve our comprehension of the intricate link between oxidative stress and hearing loss. This method gives insights into possible focused treatments in addition to a thorough framework. Oxidative stress has a complex involvement in presbycusis, or age-related hearing loss [2,28]. Different cochlear structures are impacted by the slow accumulation of oxidative damage over time [24]. For example, reduced endocochlear potential caused by mitochondrial malfunction in the stria vascularis impairs the cochlea's capacity to transduce sound [4]. At the same time, hair cells accumulate mutations in their mitochondrial DNA, which eventually sets off apoptotic pathways. The oxidative stress-induced production of advanced glycation end products (AGEs) exacerbates the situation by decreasing the compliance of the tectorial and basilar membranes, which are crucial for hearing [14,15]. An alternative scenario is that of noise-induced hearing loss, in which the cochlea experiences an acute increase in reactive oxygen species (ROS) generation that occurs quickly and intensely [11]. Widespread lipid peroxidation results from this abrupt oxidative burst, which especially damages the fragile outer hair cells. The apoptotic cascades are triggered by the release of cytochrome c, and the ischemia-reperfusion injury caused by noise-induced vasoconstriction in the stria vascularis intensifies the damage [27,34]. Different oxidative stress processes are involved in ototoxicity-induced hearing loss, which is frequently linked to specific drugs. For instance, iron and aminoglycoside antibiotics combine to produce complexes that catalyze the production of ROS [27]. On the other hand, platinum-based chemotherapeutics, such as cisplatin, deplete antioxidant systems and activate NOX3, which is a major source of reactive oxygen species in the cochlea [28,43]. Both paths converge on spiral ganglion neuron degeneration and hair cell death, despite their distinct beginning processes. Although complex in nature, sudden sensorineural hearing loss commonly involves acute oxidative stress. The early onset of oxidative damage, whether due to viral infections, vascular events, or autoimmune reactions, can swiftly overwhelm local antioxidant defenses, leading to rapid and severe hearing impairment [46]. Hearing loss caused by oxidative stress is also influenced by genetic factors. Mutations in genes such as GJB2, which encodes connexin 26, can impair important functions, such as potassium recycling, which can result in increased ROS production and metabolic stress [24]. Similarly, mutations that impair mitochondrial activity in cochlear cells can have a direct effect on cellular energy production and ROS control [23,25]. Gaining knowledge of these many pathophysiological processes is essential for creating focused treatment plans. Therapies targeted at promoting mitochondrial activity may be very helpful in several disorders [52,53], in particular age-related hearing loss, and fast-acting antioxidants may be essential in noise-induced or sudden hearing loss [54]. Strategies that strengthen endogenous antioxidant systems or directly block particular ROS-generating pathways may work well in cases of ototoxicity. Furthermore, this sophisticated understanding of oxidative stress in various forms of hearing loss opens new opportunities for tailored therapy methods. Through the identification of the major oxidative pathways involved in individual cases, healthcare providers may be able to more effectively customize prevention and treatment plans. In cases where there are known genetic components, this may entail a mix of systemic and local antioxidant medications, lifestyle changes to lessen the oxidative burden, or even genetic therapies that target certain pathways [55]. By delving into the particular pathophysiological pathways of
oxidative stress in different kinds of hearing loss, we further advance scientific knowledge and open the door to more focused, successful treatment interventions.This strategy fills the knowledge gap between fundamental science and practical application by providing a thorough framework that can direct future investigations and influence therapeutic procedures in the field of auditory health.
How Can Antioxidant Techniques Prevent and Treat Hearing Loss?
Antioxidant-based interventions have emerged as viable approaches to prevent and treat hearing loss due to the essential role that oxidative stress plays in the etiology of this disorder. This section will examine several antioxidant strategies that have demonstrated promise in reducing oxidative stress-induced hearing loss, such as pharmaceutical treatments, targeted delivery methods, gene therapy, and dietary adjustments. The potential of dietary antioxidants to prevent and treat hearing loss has been the subject of much research [52]. These antioxidants include polyphenols, like quercetin, curcumin, and resveratrol, as well as vitamins A, C, and E [53,54] (Table 1). By lowering oxidative stress and inflammation in the cochlea, dietary supplements containing these antioxidants have been demonstrated in animal studies to mitigate age- and noise-related hearing loss [55,56]. Higher dietary antioxidant intake has been linked to a lower risk of hearing loss in people, according to observational studies [57,58]. Nevertheless, randomized controlled trials are required to determine if dietary antioxidants are effective in both preventing and treating human hearing loss. The potential of pharmacological antioxidants to lessen hearing loss caused by oxidative stress has also been studied [59]. N-acetylcysteine (NAC), ebselen, D-methionine, and coenzyme Q10 (CoQ10) are some of these agents [60,61]. By scavenging reactive oxygen species (ROS) and bolstering endogenous antioxidant defenses, NAC, a precursor of glutathione, has been demonstrated to protect against noise-induced and ototoxic drug-induced hearing loss in animal models [62,63]. In a similar vein, it has been discovered that the mitochondrial antioxidant CoQ10 can reduce age-related hearing loss in both people and animals [64,65]. In preclinical studies, D-methionine and ebselen have also shown protective benefits against ototoxic drug-induced and noise-induced hearing loss [66,67]. Even though these
pharmacological antioxidants seem promising, more clinical research is required to determine their efficacy and safety in people.Antioxidants may be delivered to the inner ear specifically to maximize therapeutic benefit and reduce systemic negative effects [68].Antioxidants like resveratrol and NAC have been delivered straight to the cochlea using nanoparticle-based delivery methods, including liposomes and polymeric nanoparticles [69,70].When compared to a systemic injection, these targeted delivery strategies have demonstrated increased efficacy in attenuating ageand noise-induced hearing loss in animal models [71,72].Another intriguing method for adjusting the inner ear's antioxidant defenses is gene therapy [73].It has been possible to transfer genes encoding antioxidant enzymes, such as catalase and superoxide dismutase, to the cochlea via adenoviral and adeno-associated viral (AAV) vectors [74,75].In animal models, these gene therapy techniques have shown preventive benefits against age-related and noise-induced hearing loss [76,77].To optimize targeted delivery methods and gene therapy strategies for clinical translation, more studies are necessary.A comprehensive strategy to prevent and manage hearing loss caused by oxidative stress should include lifestyle changes and noise reduction techniques [78].One of the main risk factors for hearing loss is exposure to loud noise.Noise-induced hearing impairment can be avoided by limiting noise exposure, using hearing protection devices, and avoiding loud places [79].In addition, hearing loss risk can be decreased and total antioxidant defenses strengthened by eating a well-balanced, antioxidant-rich diet, exercising frequently, abstaining from tobacco use, and limiting alcohol intake [80,81].In addition, sustaining auditory health requires close observation and management of long-term illnesses that might exacerbate oxidative stress and hearing loss, such as diabetes and cardiovascular disease [82,83].Antioxidant 
therapies have received a lot of attention lately due to their potential to treat hearing loss brought on by oxidative stress.But, for researchers and clinicians alike, a nuanced understanding of their limitations, accompanying obstacles, and effectiveness is essential.Antioxidants found in food, like vitamins C and E and β-carotene, have demonstrated potential in preventing age-related and noise-induced hearing loss.The evidence is still conflicting, though, with several trials showing little or no benefit.The blood-labyrinth barrier is a major obstacle to these chemicals' bioavailability in the cochlea.One example is the low penetration of vitamin C into the fluids of the inner ear.In order to improve cochlear bioavailability, researchers are investigating novel delivery methods, such as nanoparticle compositions.Even though they are usually thought to be harmless, large dosages of some antioxidants can have negative effects.It has been difficult, in addition, to consistently translate these discoveries into clinical benefits.Like many antioxidant medications, NAC has problems with bioavailability, such as low cochlear penetration and poor oral absorption.Research on drug delivery strategies that target the cochlea may be able to get around these restrictions.Pharmaceutical antioxidants have different safety profiles.While NAC is generally well tolerated, large doses can cause allergic responses or nausea.Before beginning treatment, a thorough patient assessment is essential due to the possibility of drug interactions, such as those that may occur between NAC and specific blood pressure drugs.An interesting new frontier in the prevention and treatment of hearing loss is the use of gene therapy techniques that target antioxidant pathways in the inner ear.Positive preclinical findings suggest that cochlear cells may produce targeted antioxidants over an extended period of time.This strategy might get over established bioavailability obstacles.Nevertheless, there is 
a dearth of long-term safety data and scant clinical evidence in humans. Thorough research is necessary to identify potential dangers, which include immunological reactions to viral vectors and inadvertent off-target effects.
The various antioxidant techniques differ greatly in their practical aspects. While gene therapy options now face substantial cost and administration issues, dietary interventions may be more accessible and cost-effective for a greater number of individuals. The practicality of different antioxidant therapies is greatly influenced by these parameters.
Combination Therapies, Synergistic Approaches, and Long-Term Outcomes
Antioxidants and anti-inflammatory medications or neurotrophic factors have been combined in recent studies to maximize therapeutic effects [88–91]. Combination therapy makes sense because of the intricate interactions that occur in the pathophysiology of hearing loss involving oxidative stress, inflammation, and cellular damage [90–92]. For example, when combined with corticosteroids, N-acetylcysteine (NAC) has demonstrated increased effectiveness in preventing noise-induced hearing loss as opposed to when either medication is used alone [93,94]. In an investigation conducted by Fetoni et al., this combination dramatically decreased hair cell loss and hearing threshold alterations in mice exposed to noise [2]. NAC's antioxidant qualities work in tandem with corticosteroids' anti-inflammatory effects to potentially offer more complete protection against cochlear damage. In preclinical research, combining antioxidants with neurotrophic factors, such as neurotrophin-3 (NT-3) or brain-derived neurotrophic factor (BDNF), has shown encouraging outcomes. The co-administration of BDNF and the antioxidant Trolox improved spiral ganglion neuron survival in deafened guinea pigs, according to research by Sly et al.
[95].Together, these mixtures scavenge free radicals while also encouraging cochlear hair cell survival and regeneration, thereby providing a two-pronged approach to hearing preservation and restoration.Nevertheless, creating potent combination treatments comes with a number of difficulties.It is necessary to determine the ideal dosage schedules in order to optimize synergistic effects and reduce the possibility of negative interactions.The significance of timing in combination therapy was brought to light by a study conducted by Eastwood et al., which showed that in a model of electrode insertion trauma, sequential delivery of dexamethasone and NAC was more efficacious than simultaneous delivery [4].When it comes to age-related or chronic disorders causing hearing loss, antioxidant therapies raise serious questions about the long-term safety and effectiveness of these treatments.Since there is currently little information available in this field, patients on long-term antioxidant regimens require close observation and extensive follow-up research.Many antioxidant therapies still lack long-term safety data, especially when it comes to hearing health.Even though dietary antioxidants are usually regarded as harmless, long-term highdose supplementation may be dangerous.As per the ATBC study, smokers' risk of lung cancer was observed to increase when they took high-dose beta-carotene supplements for an extended period of time [5].It is unknown if there are comparable hazards for cochlear health, but further research is necessary.Concerns regarding possible changes in redox signaling or disruption of regular physiological functions are also raised by the long-term usage of pharmacological antioxidants.Long-term disruption of the cochlea's sensitive redox equilibrium may have unforeseen repercussions.Proper function depends on this balance.Short-term NAC treatment prevented noise-induced hearing loss, according to a study by Rousset et al., while long-term administration 
enhanced oxidative stress in the cochlea [6].There is little evidence to support the long-term advantages of chronic antioxidant use in reducing age-related hearing loss or delaying the advancement of chronic hearing problems.According to a prospective study by Gopinath et al., older persons who consume more vitamin C and E in their diets are at a lower risk of developing hearing loss [7].To demonstrate causality and identify the best intervention techniques, however, randomized controlled trials with long follow-up periods are required.The long-term safety and effectiveness of antioxidant therapy may also be impacted by population-specific characteristics.The reaction of an individual to long-term antioxidant consumption may vary depending on factors such as age, genetic makeup, and co-occurring medical disorders.The effectiveness of antioxidant supplementation in mitigating noise-induced hearing loss differed depending on glutathione S-transferase gene polymorphisms, according to a study by Hou et al. [8].
What Are the Future Directions of Hearing Loss Management?
Research and therapy options are expanding as our understanding of the mechanisms underlying oxidative stress-induced hearing loss deepens.The management of hearing loss will be examined in this section, with particular attention paid to preclinical research, clinical trials, existing constraints and difficulties, and prospective topics for more studies.The processes underlying oxidative stress-induced hearing loss have been clarified, and possible treatment targets have been identified through preclinical research utilizing in vitro systems and animal models [88].Subsequent preclinical studies ought to concentrate on delineating the molecular underpinnings of oxidative stress in the auditory system, including the functions of antioxidant enzymes, particular ROS species, and signaling pathways [89].Preclinical research should also prioritize the creation and improvement of tailored antioxidant delivery methods, such as hydrogels and nanoparticles [90].Advanced methods, such as mass spectrometry imaging and single-cell RNA sequencing, can shed light on the temporal and spatial patterns of oxidative stress in the cochlea [91,92].Moreover, the translation of preclinical discoveries to clinical applications can be aided by the creation of innovative animal models that more closely resemble situations associated with hearing loss in humans, such as age-related and noise-induced hearing loss [93].To evaluate the safety and effectiveness of antioxidant-based treatments for hearing loss in people, clinical trials are crucial.More thorough, large-scale clinical trials are required to determine the clinical value of antioxidants, such as N-acetylcysteine (NAC) and vitamin E, even though some clinical studies have shown encouraging results in treating and preventing noise-induced hearing loss [56,94].The potential of combination therapy, such as antioxidants paired with anti-inflammatory drugs or neurotrophic factors, to improve therapeutic outcomes should also be explored in 
future clinical investigations [95].Furthermore, the application of new biomarkers, including blood or inner ear fluid oxidative stress markers, can aid in patient stratification and therapy response monitoring [41].The creation of uniform clinical trial procedures and outcome measures can make it easier to compare and synthesize data from many trials, which will ultimately result in evidence-based recommendations for the treatment of hearing loss [96].Though there has been progress in our knowledge and management of oxidative stress-induced hearing loss, there are still a number of obstacles to overcome.Since the blood-labyrinth barrier prevents systemic medications from entering the cochlea, getting antioxidants to the inner ear is a significant difficulty [97].Although local delivery techniques, like round window administration and intratympanic injections, have shown potential, they can be intrusive and necessitate repeated treatments [98].To increase patient compliance and treatment effectiveness, non-invasive, sustained-release medication administration devices must be developed [99].The variety of causes of hearing loss and the absence of particular diagnostic instruments to pinpoint hearing loss associated with oxidative stress present further difficulties [100].The emergence of innovative diagnostic technologies, like genetic testing and imaging, can aid in customizing treatment plans according to the underlying cause of hearing loss [101].Furthermore, research is required to determine the long-term safety and effectiveness of antioxidant treatments, especially in light of chronic use and possible drug interactions [102].There are many prospects for more research in the area of oxidative stress and hearing loss.The creation of regenerative treatments, such as those based on stem cells, to replace damaged cochlear neurons and hair cells is one exciting field [103].Regenerative methods, in conjunction with antioxidant therapy, may have a synergistic effect 
on hearing function restoration [104].The use of gene therapy to alter the production of protective factors and antioxidant enzymes in the cochlea is an additional topic of research [73].Personalized preventive and treatment techniques related to oxidative stress-associated hearing loss can be developed with the help of genetic risk factor identification [105].Additionally, new paths for intervention may become available as a result of research on the gut-brain-ear axis and the function of the microbiota in regulating inflammation and oxidative stress in the auditory system [106].Lastly, the combination of machine learning and big data analytics can aid in the discovery of new therapeutic targets and the optimization of treatment plans according to the unique characteristics of each patient [107,108].
Conclusions
This thorough analysis has highlighted the crucial part that oxidative stress plays in the pathophysiology of hearing loss and the promise that antioxidant-based treatments hold for both preventing and treating this debilitating condition. The auditory system, and especially the cochlea, has special physiological and anatomical properties that make it extremely vulnerable to oxidative injury. Age-related, noise-induced, and ototoxic drug-induced hearing loss have all been linked to excessive reactive oxygen species (ROS) formation and the depletion of endogenous antioxidant defenses. Numerous pathways, including ischemia-reperfusion injury, inflammation, hair cell death, and mitochondrial dysfunction, are involved in the mechanisms behind oxidative stress-induced hearing loss. Preclinical and clinical research has demonstrated the potential of targeting these pathways with antioxidant therapies, such as pharmacological drugs, gene therapy, targeted delivery systems, and dietary antioxidants. Furthermore, lifestyle changes, such as lowering noise exposure and adhering to a nutritious diet and regular exercise routine, might bolster general antioxidant defenses and lower the likelihood of hearing impairment.

The review's conclusions have significant ramifications for clinical practice in managing hearing loss. First, measuring antioxidant status and oxidative stress markers can provide useful diagnostic and prognostic indicators for determining who is at risk for hearing loss and for tracking how well a treatment is working. Second, including foods and supplements high in antioxidants in the diet may offer a safe and practical way to prevent and lessen hearing loss brought on by oxidative stress. Third, pharmaceutical antioxidants, like coenzyme Q10 and N-acetylcysteine, may be used as adjuvant therapy for the treatment of hearing loss, especially in the context of ototoxic drugs or noise exposure. Additionally, more focused and effective methods of preventing oxidative damage to the auditory system may become available with the development of gene therapy techniques and tailored antioxidant delivery systems. Large-scale clinical studies and additional research will be necessary to validate these cutting-edge treatments before they can be used in clinical settings. Lastly, encouraging healthy lifestyles and putting noise reduction techniques into practice in both work and leisure environments can be crucial preventative measures against hearing loss brought on by oxidative stress.

New preventive and therapeutic approaches have been made possible by the substantial advancements in our understanding of the role of oxidative stress in hearing loss in recent years. Nevertheless, a number of obstacles still need to be overcome, such as the requirement for more specialized diagnostic instruments, the improvement of drug delivery techniques, and the demonstration of antioxidant therapy's long-term safety and effectiveness. Future investigations should prioritize the possibilities of regenerative and personalized therapy, as well as the intricate interactions between oxidative stress and other pathogenic pathways in hearing loss. To translate research discoveries into practical therapeutic solutions, a multidisciplinary strategy combining basic scientists, clinicians, engineers, and industry partners is important. Through the advancement of knowledge on oxidative stress in the auditory system and the creation of novel therapeutic approaches, the lives of millions of people afflicted with hearing loss globally can be improved. In the end, preventing and treating oxidative stress-related hearing loss will call for an all-encompassing, integrative strategy that incorporates lifestyle changes, antioxidant therapies, and other therapeutic modalities.
Figure 1. Flow diagram describing the literature research protocol.

Figure 2. Flow diagram for molecular mechanisms of oxidative stress related to hearing loss development.

Figure 3. Molecular mechanism of oxidative stress in hearing loss.

Table 1. Antioxidant strategies for the prevention and treatment of hearing loss.
Loss of Function of Scavenger Receptor SCAV-5 Protects C. elegans Against Pathogenic Bacteria
Scavenger receptors play a critical role in innate immunity by acting as pattern-recognition receptors. There are six class B scavenger receptor homologs in C. elegans. However, it remains unclear whether they are required for host defense against bacterial pathogens. Here, we show that, of the six SCAV proteins, only loss of function of scav-5 protects C. elegans against the pathogenic bacteria S. typhimurium SL1344 and P. aeruginosa PA14, and that it does so by different mechanisms. scav-5 mutants are resistant to S. typhimurium SL1344 due to dietary restriction, while scav-5 acts upstream of or in parallel to tir-1 in the conserved PMK-1 p38 MAPK pathway to upregulate the innate immune response that defends worms against P. aeruginosa PA14. This is the first demonstration of a role for SCAV-5 in host defense against pathogenic bacteria. Our results provide an important basis for further elucidating the underlying molecular mechanism by which scav-5 regulates innate immune responses.
INTRODUCTION
Scavenger receptors are transmembrane glycoproteins, grouped into eight classes, that were first defined by their ability to bind and subsequently internalize modified low-density lipoproteins (mLDL) (PrabhuDas et al., 2017). Later, a large repertoire of ligands, such as lipoproteins, cholesterol esters, phospholipids, proteoglycans, and carbohydrates, was identified that can be recognized by scavenger receptors. Thus, scavenger receptors are involved in an impressively broad range of functions including lipid metabolism, antigen presentation, phagocytosis, clearance of apoptotic cells, and innate immunity (Silverstein and Febbraio, 2009; Graham et al., 2011; He et al., 2011; Canton et al., 2013). Innate immunity is the first line of defense against pathogens and is critical to maintain homeostasis, prevent infection, and activate the adaptive immune response. The components of innate immunity include external physical and chemical barriers, mucous membranes, and internal humoral and cellular effector mechanisms (Riera Romo et al., 2016). It is now clear that many scavenger receptors can recognize conserved patterns unique to microbial surfaces, referred to as pathogen-associated molecular patterns (PAMPs) (Mukhopadhyay and Gordon, 2004; Pluddemann et al., 2006; Pluddemann et al., 2011). On the other hand, scavenger receptors can be used as co-receptors by bacteria and viruses for entry into host cells (Haraga et al., 2008; Hawkes et al., 2010; Que et al., 2013).
The nematode Caenorhabditis elegans has been used as a model for studying bacterial virulence and innate immunity (Irazoqui et al., 2010). Despite lacking both the Toll and Imd pathways as well as an adaptive immune system (Anderson et al., 1985; Gottar et al., 2002), C. elegans has evolved specialized mechanisms, including the innate immune response, bacterial avoidance behavior, and RNA interference, to defend against pathogenic bacteria (Aliyari and Ding, 2009; Taffoni and Pujol, 2015). In fact, the absence of an adaptive immune response makes C. elegans very useful for dissecting innate immune mechanisms in pathogen-host interactions. The primary food source of C. elegans in the laboratory is the E. coli strain OP50, but other bacteria and fungi also support its growth and reproduction. Several feeding-based infection models have been established, such as Salmonella typhimurium and Pseudomonas aeruginosa (Tan et al., 1999; Aballay et al., 2000), to study the regulation between intestinal infection and innate immunity (Alegado et al., 2011; Kirienko et al., 2013; Curt et al., 2014). Salmonella typhimurium SL1344 proliferates and establishes a persistent infection in the intestine of C. elegans, resulting in the eventual death of the worm by actively inhibiting innate immune pathways (Aballay et al., 2000; Desai et al., 2019). The human pathogen Pseudomonas aeruginosa PA14 kills worms within 4-24 hours through the production of several diffusible toxins in the 'fast killing' model (Cezairliyan et al., 2013), and ultimately suppresses the FOXO/DAF-16 innate immunity pathway in the intestine of C. elegans (Zheng et al., 2021). Innate immunity in C. elegans is regulated by several major pathways, including the PMK-1 p38 MAPK pathway, in which TIR-1 functions upstream of the NSY-1-SEK-1-PMK-1 p38 MAPK cascade (Kim et al., 2002; Couillault et al., 2004; Andrusiak and Jin, 2016). In this cascade, NSY-1 phosphorylates SEK-1, and SEK-1 in turn phosphorylates and activates PMK-1, which regulates pathogen response genes (Tanaka-Hino et al., 2002).
SCAV-1-6, the six scavenger receptor homologs in C. elegans, belong to the class B scavenger receptors (SRB). In mammals, SRB has three members: CD36, SRB-I/II (SRB-II is a splicing variant of SRB-I), and lysosomal integral membrane protein 2 (Means et al., 2009). SRB has been reported to be involved in innate immunity (Feng et al., 2011). However, whether SCAV-1-6 contribute to defense against pathogenic bacteria in C. elegans, and the underlying mechanism, remain unclear.
Here, we investigate the functions of SCAV-1-6 in innate immunity in C. elegans. We find that loss of scav-5 effectively defends C. elegans against the pathogenic bacteria S. typhimurium SL1344 and P. aeruginosa PA14. In addition, we show that the mechanisms by which scav-5 loss of function protects C. elegans against these two pathogens are quite different. scav-5 mutants displayed reduced pharyngeal pumping after S. typhimurium SL1344 infection, which led to dietary restriction and extended lifespan. In contrast, after P. aeruginosa PA14 infection, the innate immune response was activated through the PMK-1 p38 MAPK pathway in scav-5 mutants. Moreover, our genetic epistasis analysis indicates that scav-5 may function upstream of, or in parallel to, tir-1 in the PMK-1 innate immune response pathway to protect C. elegans against P. aeruginosa PA14. Our results provide an important basis for further elucidating the underlying mechanism of how scav-5 regulates the innate immune response.
RNAi of scav-1, scav-4, scav-6

Three RNAi constructs were created using 700 bp, 899 bp, and 730 bp segments of the scav-1, scav-4, and scav-6 coding regions (bases 1 to 700, 3101 to 4000, and 1 to 730, respectively), which were amplified by PCR and cloned into the vector pPD129.36. These plasmids were transformed into the RNAi bacterial strain HT115, and RNAi experiments were carried out with these strains following established protocols (Timmons et al., 2001), using HT115 bacteria expressing the empty vector pPD129.36 as the control. For all RNAi experiments, synchronized eggs from old-adult N2 were spawned on NGM plates seeded with RNAi bacteria, and L4 stage animals were used for subsequent studies, as described in more detail below.
C. elegans Pathogenic Bacteria Infection Assay
S. typhimurium SL1344 was pipetted onto NGM plates and incubated overnight. P. aeruginosa PA14 infects C. elegans in the slow killing assay, for which P. aeruginosa PA14 was pipetted onto 0.35% NGM low osmotic pressure medium. Plates were incubated for 24 hours at 37°C and then 24 hours at 25°C (Shivers et al., 2010). C. elegans were cultured on these plates, establishing the infection model through ingestion of the pathogenic bacteria.
Measurement of Lifespan and Heat Resistance
Old-adult animals of the indicated genotypes were spawned on NGM plates with E. coli OP50, and L4 stage animals were picked onto new NGM plates with E. coli OP50. Each NGM plate held approximately 25 animals, and each group contained at least 80 animals. The L4 stage was counted as day zero of the lifespan assay. Thermotolerance assays were performed as described (McColl et al., 2010). Briefly, WT and scav-5(ok1606) L4 stage hermaphrodites were infected with S. typhimurium SL1344 or P. aeruginosa PA14 for 24 hours, then adults were transferred to 35°C and scored as alive or dead after 12 hours based on their response to prodding with a platinum wire.
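Scoring of this kind is typically summarized as a survival curve: the fraction of the starting cohort still alive at each scoring day, with day zero at the L4 stage as described above. The sketch below is only an illustration of that bookkeeping; the function name and the daily death counts are hypothetical, not data from the paper.

```python
def survival_curve(deaths_per_day, n_start):
    """Fraction of the cohort alive after each scoring day,
    given the number of deaths recorded on each day."""
    alive = n_start
    curve = []
    for deaths in deaths_per_day:
        alive -= deaths
        curve.append(alive / n_start)
    return curve

# Hypothetical cohort of 80 animals scored daily from day 0 (L4).
print(survival_curve([0, 2, 6, 12, 20, 24, 16], 80))
# -> [1.0, 0.975, 0.9, 0.75, 0.5, 0.2, 0.0]
```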
Bacterial Avoidance Behavior
Small, uniform circular bacterial lawns were prepared by pipetting 100 µL of overnight cultures of E. coli OP50, S. typhimurium SL1344, or P. aeruginosa PA14 onto the center of NGM dishes and allowing at least one day to dry. To score each animal as inside or outside the lawn (Avoidance = N_out/N_total) at larval stages (L1 to L4) and young adult (YA), we transferred 50-80 eggs onto the bacterial lawn without disturbing or spreading the lawn and recorded avoidance at each developmental stage.
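The avoidance score defined above is simply the fraction of scored animals found outside the lawn. A minimal sketch of that calculation (the function and argument names are illustrative, not from the paper):

```python
def avoidance(n_outside, n_total):
    """Avoidance score: fraction of animals scored outside the
    bacterial lawn (N_out / N_total)."""
    if n_total == 0:
        raise ValueError("no animals scored")
    return n_outside / n_total

# e.g. 36 of 60 animals scored outside the lawn at a given stage
print(avoidance(36, 60))  # -> 0.6
```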
Pharyngeal Pumping
Pharyngeal pumping was assessed by counting the number of pharyngeal contractions during a 10-sec interval for longitudinal studies, or during a 60-sec interval otherwise. The experimental details of counting pharyngeal contractions were as described (Kumar et al., 2016).
Bacterial Colonization
We transferred plasmid pET28a into DH5a by heat shock and plasmid ptfLC3 into P. aeruginosa PA14 by electroporation. L4 stage WT and scav-5(ok1606) animals were cultured on DH5a-GFP, S. typhimurium SL1344-GFP, or P. aeruginosa PA14-GFP for 48 h, followed by observation of bacterial colonization in the intestine by microscopy.
Quantitative RT-PCR
To generate synchronous populations of worms for RNA extraction, we bleached WT and scav-5(ok1606) adults to collect eggs and cultured the eggs at 20°C on NGM dishes spread with E. coli OP50. These synchronized worms were washed and collected for RNA isolation. Briefly, RNA was isolated using Trizol (Invitrogen) and chloroform extraction (phenol:chloroform:isoamyl alcohol, 25:24:1, was used for the chloroform extraction, and isopropanol was used for RNA precipitation). RNA was diluted in nuclease-free water and quantified using a NanoDrop. cDNA was synthesized using the HiScript II Q RT SuperMix kit (Vazyme). Real-time PCR was performed using an Applied Biosystems StepOnePlus Real-Time PCR system and SYBR green master mix. mRNA fold change was calculated using the comparative CT method (Schmittgen and Livak, 2008), normalizing to mRNA levels of the internal control gene tbg-1.
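The comparative CT method cited above computes relative expression as 2^-ΔΔCt: each sample's target Ct is first normalized to the reference gene (tbg-1 here), and the treated sample is then normalized to the control. A minimal sketch with hypothetical Ct values (only the normalization scheme follows the text):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Comparative CT (2^-ddCt) fold change of a target gene,
    normalized to a reference gene and a control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # dCt, treated
    d_ct_control = ct_target_control - ct_ref_control  # dCt, control
    dd_ct = d_ct_treated - d_ct_control                # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target crosses threshold 2 cycles later
# (relative to the reference gene) in the mutant than in the control,
# i.e. ~4-fold downregulation.
print(fold_change(26.0, 18.0, 24.0, 18.0))  # -> 0.25
```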
Immunoblot Analyses
Harvested worms were placed in a glass homogenizer with RIPA buffer and ground thoroughly. The lysate was centrifuged and the supernatant protein was transferred to a new tube. Protein was quantified using a BCA protein assay kit. Total protein from each sample was electrophoresed on 10% PAGE gels, transferred to nitrocellulose membranes, blocked with 5% powdered milk in TBST, and probed with a 1:1000 dilution of an antibody that recognizes the phosphorylated form of PMK-1 (Cell Signaling Technology). The blot was then stripped and reprobed with a 1:1000 dilution of an anti-actin antibody. Anti-rabbit and anti-mouse IgG secondary antibodies were used to detect the primary antibodies following the addition of ECL reagents (Thermo Fisher Scientific, Inc.), which were visualized using a luminescence instrument.
Membrane Yeast Two-Hybrid (MYTH) Assay

scav-5 cDNA was inserted between the PstI and XbaI sites of vector pMetYCgate, and tir-1 cDNA was inserted between the EcoRI and XmaI sites of vector pNubXgate32-HA. The fusion plasmids were transformed into yeast strain AP4; individual yeast colonies were picked into medium and incubated overnight, diluted to 10^-1, 10^-2, and 10^-3, and dropped onto SD-Trp-Leu-His-Ade medium at 10 µL per drop. Images of the yeast were taken after culturing at 30°C for 3 days.
Microscopy
Nematodes were mounted onto 2% agar pads, paralyzed with levamisole and photographed using an AXIO Imager Z1 microscope. Photographs were acquired using the same imaging conditions for a given experiment, and were processed in Photoshop.
All Members of the Scavenger Receptor Family Except SCAV-3 Are Expressed in Intestinal Tissue in C. elegans
To investigate the role of the scavenger receptor family in innate immunity in C. elegans, we first set out to define the tissues in which they are expressed. We generated plasmids that express green fluorescent protein (GFP) under the control of approximately 3 kb of the proximal promoter of each of scav-1-6, and obtained extrachromosomal transgenic strains by microinjection. We observed strong GFP expression in intestinal tissue driven by the scav-1, scav-2, scav-4, scav-5, and scav-6 promoters (Figures 1A, B, D-F), while SCAV-3 was expressed in all tissues, consistent with a previous study (Figure 1C) (Li et al., 2016). Intestinal epithelial cells provide an essential line of defense for C. elegans against ingested pathogens (Gravato-Nobre and Hodgkin, 2005). Since immune responses to pathogenic bacteria were measured from the young adult stage onward, we further observed in vivo expression of the scavenger receptors in L4 worms, and found that SCAV-1, SCAV-2, SCAV-4, SCAV-5, and SCAV-6 were expressed in intestinal tissue (Figure S1). Thus, our expression pattern results indicate that scav-1, scav-2, scav-4, scav-5, and scav-6 may be involved in defense against pathogenic bacteria.
Loss of Function of scav-5 Protects C. elegans Against Pathogenic Bacteria Infection by P. aeruginosa PA14 and S. typhimurium SL1344

To examine the interaction between scavenger receptors and the innate immune response in C. elegans, we used the scavenger receptor mutants scav-2(ok877), scav-3(ok1286), and scav-5(ok1606), together with scav-1, scav-4, and scav-6 RNAi-treated wild-type (WT) worms, and analyzed their lifespans when feeding on the non-pathogenic bacteria OP50. We observed that only the lifespans of scav-3(ok1286) mutants were significantly reduced compared to wild-type N2 (Figures 2A, B). Next, we analyzed their lifespans after P. aeruginosa PA14 and S. typhimurium SL1344 infection. We found that only the lifespans of scav-5(ok1606) mutants were significantly extended compared to control (Figures 2C-F). Many C. elegans mutations that delay aging also increase stress resistance (Wu et al., 2002; McColl et al., 2005; Saier et al., 2018). To test the effect of the scav-5 mutation on stress resistance, we monitored heat resistance at 35°C of scav-5 mutants fed on E. coli OP50, S. typhimurium SL1344, or P. aeruginosa PA14. Our results showed that scav-5 mutant animals have significantly increased thermotolerance compared to WT when feeding on OP50 or S. typhimurium SL1344. We did not observe any survival of wild-type or scav-5 mutants after heat stress treatment when feeding on P. aeruginosa PA14, which may be due to the strong toxicity of PA14 (Figure S2A). scav-5(ok1606) is likely a strong loss-of-function or null allele based on the lack of detectable mRNA, so we used this allele for our further analysis (Figures S2B, S2C). To confirm that loss of function of SCAV-5 was responsible for defense against pathogenic bacteria in scav-5(ok1606) mutants, we generated the transgenic strain scav-5(ok1606); P scav-5 SCAV-5, which expresses SCAV-5 from its own promoter, by injecting plasmids into scav-5(ok1606) mutants, and analyzed its lifespan fed on S. typhimurium SL1344 and P. aeruginosa PA14. We found that re-expressing SCAV-5 in scav-5(ok1606) mutants abolished the protective effects after S. typhimurium SL1344 and P. aeruginosa PA14 infection (Figure S3). Our results demonstrate that loss of function of SCAV-5 defends C. elegans against pathogenic bacterial infection.
scav-5 Mutants Are Resistant to S. typhimurium SL1344 by Dietary Restriction
To determine the mechanism by which scav-5 mutants are resistant to S. typhimurium SL1344, we first quantified their bacterial lawn avoidance behavior. We found that scav-5 mutant animals did not display bacterial avoidance behavior when cultured on E. coli OP50 or S. typhimurium SL1344 (Figures 3A-D). Next, we analyzed the pharyngeal pumping rate of scav-5 mutants fed on E. coli OP50 or S. typhimurium SL1344 using a dissecting microscope. We observed that scav-5 mutants pumped at a rate comparable to wild type when fed on E. coli OP50 (Figure 3E), while scav-5 mutants displayed a substantial age-related reduction of pharyngeal pumping rate when cultured on S. typhimurium SL1344 (Figure 3F). This implies that the reduced pharyngeal pumping rate in scav-5 mutants feeding on S. typhimurium SL1344 leads to reduced bacterial ingestion, and thus to dietary restriction. To further confirm dietary restriction in scav-5 mutants feeding on S. typhimurium SL1344, we examined E. coli DH5a and S. typhimurium colonization in the intestine of scav-5 mutants by feeding them E. coli DH5a-GFP or S. typhimurium-GFP for 48 h after the L4 stage. The intestine of scav-5 mutants displayed significantly decreased fluorescence intensity when cultured on S. typhimurium-GFP (Figures 3G, H), and the decreased level of S. typhimurium-GFP in scav-5 mutants was confirmed by western blotting (Figure 3H). Furthermore, we measured the mRNA level of eat-2, which is required for pharyngeal pumping and subsequent food intake (McKay et al., 2004), in WT and scav-5 mutants feeding on S. typhimurium SL1344, and found that the transcriptional level of eat-2 was reduced in scav-5 mutants (Figure 3I). These results demonstrate that the scav-5 mutation protects worms against S. typhimurium SL1344 by dietary restriction.
Loss of Function of scav-5 Does Not Upregulate Defense Gene Expression in C. elegans Infected With S. typhimurium SL1344

C. elegans has an innate immune system and responds to pathogenic bacteria by expressing defense genes (Kim et al., 2002; Ermolaeva and Schumacher, 2014). To test whether loss of function of scav-5 activates the innate immune response in C. elegans, we analyzed the mRNA levels of pathogen response genes by quantitative RT-PCR (qRT-PCR). In C. elegans, clec-7, clec-60, and clec-82 encode C-type lectin proteins (Schulenburg et al., 2008), lys-5 encodes a lysozyme (Boehnisch et al., 2011), and F53A9.8 encodes an antimicrobial peptide (Zugasti and Ewbank, 2009). Surprisingly, we observed that the expression of the pathogen response genes clec-7, clec-60, clec-82, lys-5, and F53A9.8 was lower in scav-5 mutants compared to wild type when cultured on E. coli OP50, and most (though not all) of the pathogen response genes listed above were downregulated in scav-5 mutants cultured on S. typhimurium SL1344 (Figures 4A-E). This indicates that scav-5 may be a positive regulator of defense gene expression on E. coli OP50 or S. typhimurium SL1344. Intriguingly, we also found that the PMK-1 p38 MAPK innate immune response pathway was downregulated in scav-5 mutants cultured on E. coli OP50 or S. typhimurium SL1344, as assessed by the levels of PMK-1 phosphorylation (Figure 4F), further confirming that scav-5 is a positive regulator of the innate immune response under these conditions.
Defense Gene Expression Was Upregulated in scav-5 Mutants Infected by P. aeruginosa PA14

To investigate the mechanism by which scav-5 mutants are resistant to P. aeruginosa PA14, we first quantified their bacterial lawn avoidance behavior. We found that, similar to wild type, scav-5 mutant animals displayed bacterial avoidance behavior when cultured on P. aeruginosa PA14 (Figures 5A, B). Next, we analyzed the pharyngeal pumping rate of scav-5 mutants fed on P. aeruginosa PA14. We observed that, compared to WT, scav-5 mutants displayed a significantly higher pharyngeal pumping rate at 48-72 h after the L4 stage when cultured on P. aeruginosa PA14 (Figure 5C), indicating that scav-5 mutants were not under dietary restriction on P. aeruginosa PA14. To further confirm this, we examined P. aeruginosa PA14 colonization in the intestine of scav-5 mutants by feeding them P. aeruginosa PA14-GFP for 24 h after the L4 stage. The intestine of scav-5 mutants displayed significantly increased fluorescence intensity when cultured on P. aeruginosa PA14-GFP (Figure 5D), and the increased level of P. aeruginosa PA14-GFP in scav-5 mutants was confirmed by western blotting (Figure 5D). These results demonstrate that the scav-5 mutation does not defend C. elegans against P. aeruginosa PA14 infection through dietary restriction. We then asked whether loss of function of scav-5 activates the innate immune response in C. elegans against P. aeruginosa PA14 infection by analyzing the mRNA levels of pathogen response genes. We observed that the expression of the pathogen response genes clec-60, clec-82, lys-5, and F53A9.8 was higher in scav-5 mutants compared to wild type cultured on P. aeruginosa PA14 (Figures 6A-E). These results reveal that defense gene expression was upregulated in scav-5 mutants infected by P. aeruginosa PA14.
scav-5 Mutants Are Resistant to P. aeruginosa PA14 Through Activation of the PMK-1 p38 MAPK Pathway

To explore whether SCAV-5 participates in the innate immune response through the PMK-1 p38 MAPK pathway in C. elegans after P. aeruginosa PA14 infection, we first measured activated PMK-1 levels by immunoblotting. Our results showed that activated PMK-1 in scav-5 mutants fed on P. aeruginosa PA14 was significantly increased compared to the wild-type control (Figure 6F). These data show that scav-5 defects can activate the PMK-1 p38 MAPK pathway in C. elegans infected by P. aeruginosa PA14. Next, we tested whether pmk-1 was required for the lifespan extension of scav-5 mutants after P. aeruginosa PA14 infection.
Our results are in line with previous studies showing that loss- and reduction-of-function mutations in p38 MAPK PMK-1 pathway components lead to a reduced lifespan of worms fed on P. aeruginosa PA14 (Figure 7A) (Xu et al., 2013; Head et al., 2017). In addition, we found that pmk-1 RNAi treatment suppressed the extended lifespan phenotype of scav-5 mutants cultured on P. aeruginosa PA14 (Figure 7B), demonstrating the requirement of the pmk-1 pathway for the lifespan extension of scav-5 mutants infected by P. aeruginosa PA14. Furthermore, our genetic epistasis analysis suggested that scav-5 functions upstream of or in parallel to tir-1 (Figure 7B) (Couillault et al., 2004). Our preliminary analysis of protein structure using the SMART website revealed that SCAV-5 is a membrane protein with two transmembrane domains. We therefore used the split-ubiquitin-based membrane yeast two-hybrid (MYTH) system to test the interaction of SCAV-5 with TIR-1. However, the result showed that SCAV-5 and TIR-1 did not physically interact (Figure 7C). Taken together, our results reveal that the scav-5 mutation protects C. elegans against P. aeruginosa PA14 by upregulating the PMK-1 p38 MAPK pathway.
DISCUSSION
Mounting evidence shows that many scavenger receptors, including the prototypical class B scavenger receptor CD36, play an important role in innate immunity by serving as pattern-recognition receptors, particularly against bacterial pathogens. SCAV-1-6 are the six class B scavenger receptor homologs in C. elegans. However, it has been unclear whether they have an effect on host defense against bacterial pathogens. Here, we show that defects in scav-5 protect worms against the pathogenic bacteria S. typhimurium SL1344 and P. aeruginosa PA14 by different mechanisms. scav-5 mutants are resistant to S. typhimurium SL1344 due to dietary restriction, whereas the scav-5 mutation protects worms against P. aeruginosa PA14 by activating the innate immune response through the conserved PMK-1 p38 MAPK pathway.
We found that, of the six SCAV protein homologs, only scav-5 is involved in the innate immune response against pathogenic bacteria. scav-1, scav-2, scav-4, scav-5, and scav-6 are expressed in intestinal tissues. It is widely reported that, in response to bacterial infections, C. elegans produces an array of antimicrobial molecules in intestinal epithelial cells by expressing related antimicrobial genes (JebaMercy et al., 2011). Damage to the intestinal epithelium makes worms more hypersensitive to pathogenic bacteria by allowing live bacteria to enter the intestinal lumen (Kumar et al., 2019). The expression pattern of scav-5 is consistent with its function. Indeed, a previous study demonstrates that scav-1 is necessary for C. elegans survival after fungal pathogen infection (Means et al., 2009). It will be interesting to test whether scav-2, scav-4, scav-5, and scav-6 participate in defense against other pathogen infections. We observed that SCAV-3 was expressed in all tissues and is required for the normal lifespan of C. elegans cultured on E. coli OP50. Our results are in line with a previous study, which also revealed that SCAV-3 is a lysosomal membrane protein and a key regulator of lysosome integrity, motility, and dynamics.
We found that, compared to control, the expression of pathogen response genes was downregulated in scav-5 mutants, whether uninfected or cultured on S. typhimurium SL1344, and the PMK-1 p38 MAPK pathway was likewise downregulated. This indicates that scav-5 may be required for the expression of pathogen response genes in worms without infection, which is an important issue for future research. A possible explanation for the downregulated expression of pathogen response genes in scav-5 mutants fed on SL1344 might be dietary restriction. Bacteria as food are ingested by C. elegans through pharyngeal pumping, a process controlled by pharyngeal motor neurons (Kumar et al., 2019). Thus, the decreased ingestion of S. typhimurium SL1344 in scav-5 mutants implies that scav-5 function may relate to neuronal regulation; further studies will be needed to determine the underlying mechanism. We found that the expression of pathogen response genes was upregulated in scav-5 mutants cultured on P. aeruginosa PA14. This was further supported by the finding that scav-5 acts in the PMK-1 p38 MAPK pathway after P. aeruginosa PA14 infection, even though scav-5 is required for the expression of pathogen response genes under normal conditions. As PMK-1 controls basal levels of pathogen response genes on E. coli and also induces the upregulation of these genes upon infection, our results imply that scav-5 may function in another pathway controlling the basal level of pathogen response genes. We found that scav-5 may function upstream of, or in parallel to, tir-1 in the PMK-1 innate immune response pathway to protect C. elegans against P. aeruginosa PA14. However, SCAV-5 and TIR-1 did not physically interact; further research should be undertaken to investigate how scav-5 participates in the PMK-1 innate immune response pathway. In summary, our results provide evidence that loss of function of the scavenger receptor SCAV-5 protects C. elegans against the pathogenic bacteria S. typhimurium SL1344 and P. aeruginosa PA14 by different mechanisms, establishing a link between SCAV-5 and the innate immune response. To our knowledge, this is the first demonstration of a role for SCAV-5 in host defense against pathogenic bacteria. SCAV-5 is an ortholog of human SRB-I/II, which is implicated in platelet-type bleeding disorder 10 and progressive myoclonus epilepsy 4 (Dibbens et al., 2009; Silverstein and Febbraio, 2009). Thus, our research provides important clues for further dissecting the mechanism by which SRB-I/II regulates innate immune responses.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
HX and QZ conceived the study. AL did most of the experiments. HJ did the manuscript revision and experimental repeats. LY and YW contributed to materials. HX, QZ, and AL wrote the manuscript with feedback from all authors. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We thank Dr. Chonglin Yang as well as the C. elegans Genetics Center for C. elegans strains, and Dr. Cheng-Gang Zou for the P. aeruginosa PA14 and S. typhimurium SL1344 strains.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcimb.2021.593745/full#supplementary-material

FIGURE 7 | Genetic epistatic analysis of scav-5 in the PMK-1 pathway defense against P. aeruginosa PA14 and test of the interaction of SCAV-5 with TIR-1.
Fugacity Based EQC Level II Method: Prediction of Environmental Partitioning of a Fungicide Fluopyram
The fungicide Fluopyram has been subjected to an EQC Level II calculation using a fugacity-based environmental model. According to the model, Fluopyram tends to build up in similar amounts in soil and water, and high concentrations of the chemical are predicted in sediment, soil, water, and air. Fluopyram will be lost primarily by reaction (55.9%) and advection from water (39.7%). The model predicts an overall residence time of 845 hours (35.2 days), with reaction and advection residence times of 1403 hours (58.5 days) and 2122 hours (88.4 days), respectively. Fluopyram is therefore not expected to be environmentally persistent, and reaction is predicted to be the key factor in its overall persistence. Fluopyram has a very low potential for atmospheric transport, as only a very small fraction of the chemical (2.42 x 10^-3 %) is predicted to leave the model environment in the advecting air.
Introduction
Pesticides are widely acknowledged to play an important role in agricultural production due to their ability to reduce agricultural product losses while also improving overall yield and food quality [1]. In the last ten years, at least 105 pesticides have been introduced or are in development: twenty-one herbicides, one herbicide safener, thirty-four insecticides/acaricides, six nematicides, forty-three fungicides, and more [2]. Pesticides also cause numerous diseases [3].
Pesticides degrade and dissipate in farms after they are used, polluting groundwater, air, soil, and rivers. Environmental fate models aid in estimating the concentrations of pollutants in various environmental compartments [4].
Fluopyram (figure 1) is a Succinate Dehydrogenase Inhibitor (SDHI) fungicide used to control a variety of fungal infections and nematodes in plants [5,6]. It is applied by spraying on foliage or soaking the soil. Its IUPAC name is N-(2-[3-chloro-5-(trifluoromethyl)-2-pyridinyl]ethyl)-2-(trifluoromethyl)benzamide, and it was originally developed by Bayer AG, Germany [7]. It is primarily used to control grey mould and powdery mildew in grapes, but it is also used to regulate fungal infections in a variety of other crops [8]. It is somewhat soluble in water and has low volatility. Fluopyram has low acute toxicity to birds and mammals, but it is highly toxic to fish [9].
Figure 1. Fluopyram Structure
Given the significance of Fluopyram in agriculture and the absence of extensive experimental data on it, this study aims to understand how the physical-chemical properties of Fluopyram control environmental fate parameters such as partitioning, transformation, and persistence, using the fugacity-based Equilibrium Criterion (EQC) Model Level II [10].
Methodology
A Level II model describes a dynamic environment with chemical inflows and outflows through the compartments, such that the different phases are all in equilibrium with one another at any given time. The total chemical within the environmental region of interest may vary over time due to two general processes: (a) advection, in which a chemical is transported into or out of the area of interest by the flow of a supporting medium; and (b) degradation and chemical reactions, which change the total amount of the chemical. Under this assumption, there is no need to specify which compartment the chemical is introduced into [11].
Table 1 and table 2 show the important properties that influence Fluopyram fate and are referred to as model inputs [9,12,13].
For a multicomponent system the common fugacity f is defined as

f = I / Σ Di    (1)

where

I = total input rate = direct emission rate + advective inflow rate    (2)

and Di is the overall D value for removal from a specific compartment by advection and chemical change.
The equilibrium assumption lets us determine a single fugacity that is constant throughout all compartments of the system:

f = I / ( Σ Gi Zi + Σ Vi Zi ki )    (3)

where Zi is the fugacity capacity of a particular phase for the chemical, Gi is the advective flow rate, Vi is the compartment volume, and ki is the first-order reaction rate constant. All simulations assumed a 1000 kg h^-1 emission rate to the model system with zero advective inflow concentrations through both the water and air compartments.
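As a concrete illustration of equations (1)-(3), the Level II mass balance can be sketched in a few lines of code. The compartment values below are hypothetical placeholders chosen only to exercise the formula, not the paper's inputs.

```python
# Level II fugacity sketch: one common fugacity f is shared by all
# compartments, and the total input I is balanced by the advective (G*Z)
# and reactive (V*Z*k) D values. All numbers are hypothetical placeholders.

def level2(I, compartments):
    """I: total input rate (mol/h).
    compartments: list of dicts with V (m^3), Z (mol m^-3 Pa^-1),
    k (reaction rate constant, h^-1) and G (advective flow, m^3/h)."""
    d_total = sum(c["G"] * c["Z"] + c["V"] * c["Z"] * c["k"] for c in compartments)
    f = I / d_total                                        # common fugacity, Pa
    amounts = [c["V"] * c["Z"] * f for c in compartments]  # mol in each phase
    tau_overall = sum(amounts) / I                         # residence time, h
    return f, amounts, tau_overall

comps = [
    {"V": 1e14, "Z": 4e-4, "k": 1e-3, "G": 1e12},  # an "air"-like phase
    {"V": 2e11, "Z": 1.0,  "k": 5e-3, "G": 1e9},   # a "water"-like phase
]
f, amounts, tau = level2(100.0, comps)
```

Dividing the reactive and advective parts of `d_total` by the whole gives the kind of percentage loss split reported in the Results section.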
Results and Discussion
Chemicals like Fluopyram, which are of Type 1 with measurable vapour pressures, aqueous solubilities, and octanol-water partition coefficients, are expected to partition to some extent into all available environmental compartments. Fluopyram's low vapour pressure results in a small percentage partitioning to the air compartment (3.29 x 10^-3 %). Fluopyram is a moderately hydrophobic compound with a log Kow of 3.30, so it will disperse into organic phases such as soils and sediments.
Table 1 shows that Fluopyram is a moderately hydrophobic compound (log Kow = 3.3) with a low water solubility (16.0 g m^-3). It is anticipated that it will partition to phases containing organic matter, such as soils and sediments, as well as to inorganic phases such as water. It is important to note that the air-water partition coefficient is low (1.22 x 10^-8), and that the Henry's Law constant, which indicates a chemical's potential to volatilize from water into the atmosphere, is also very low. As a result, Fluopyram is unlikely to evaporate from water.
Figure 2 depicts the model output for Fluopyram emitted to air, water, and soil. Approximately 52% (4.39 x 10^5 kg) is found in soil, 47% (3.98 x 10^5 kg) in water, and 1% (8316 kg) in sediment, as shown in figures 2 and 3. In comparison, Level I calculations had shown Fluopyram distributions of 63%, 36% and approximately 1% in the soil, water, and sediment compartments, respectively [14]. Simulations were also carried out with 100 ng m^-3 Fluopyram in air together with 0 ng L^-1 in water, and with 0 ng m^-3 in air together with 100 ng L^-1 in water. Whether the chemical enters the system through advection via water or via air, the model predicts the same percentage distribution and the same loss processes.
Fluopyram was predicted to have an overall chemical residence time (τO) of approximately 35.2 days (figure 2). The reaction residence time (τR) was predicted to be approximately 58.45 days and the advection residence time (τA) approximately 88.41 days.
The capacity of a given compartment to accumulate the chemical is determined by the product of its volume (V, m^3) and its fugacity capacity (Z). As shown in table 3, VZ(water) is 6.72 x 10^15 and VZ(soil) is 7.42 x 10^15, indicating that soil has approximately 1.10 times the capacity of water to accommodate Fluopyram. The various advective and reactive losses from the environmental compartments are depicted in detail in figure 2. Reaction is the dominant loss mechanism in the air compartment (8.00 x 10^-3 %), in the water compartment (55.9%), and in the sediment compartment (0.0223%). The only loss mechanism from the soil compartment is reaction (4.10%), because there is no advection in this compartment. Overall, 60.2% of the chemical is lost by reaction and only 39.8% by advection.
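The percentage loss split quoted above follows directly from the D values: each compartment's reactive loss share is Vi Zi ki divided by the total D, and its advective share is Gi Zi divided by the total. A minimal sketch with made-up D values (not the paper's):

```python
# Split total chemical loss into reaction vs. advection percentages from
# per-compartment D values (all numbers below are hypothetical examples).

def loss_split(d_reaction, d_advection):
    """Each argument: dict mapping compartment name -> D value (mol/(Pa h))."""
    total = sum(d_reaction.values()) + sum(d_advection.values())
    pct_r = {c: 100.0 * d / total for c, d in d_reaction.items()}
    pct_a = {c: 100.0 * d / total for c, d in d_advection.items()}
    return pct_r, pct_a

pct_r, pct_a = loss_split(
    {"air": 4e5, "water": 2.8e9, "soil": 2.05e8, "sediment": 1.1e6},
    {"air": 1.2e5, "water": 1.99e9},
)
reaction_total = sum(pct_r.values())   # overall share lost by reaction, %
advection_total = sum(pct_a.values())  # overall share lost by advection, %
```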
Conclusion
The EQC Level II model, a fugacity-based environmental model, was applied to Fluopyram. Fluopyram is predicted to accumulate in soil and water in similar quantities. The chemical is expected to be present in high concentrations in water, sediment, soil, and air. It is lost primarily through reaction and through advection from water. Fluopyram is not expected to persist in the environment, and only a trace of the chemical is expected to escape from the model environment in the advecting air.
Figure 2. Results of the EQC Level-II simulation represented diagrammatically.

Figure 3. Fluopyram's relative distribution among the four environmental media.

Table 1. Physico-chemical values used as input for Fluopyram.

Table 2. Half-life parameters for Fluopyram used as Level II input.

Table 3. Fugacity capacity, amount and concentration of Fluopyram in different environmental phases.
Insulating 3D-printed templates are turned into metallic electrodes: application as electrodes for glycerol electrooxidation
We turned printed plastic pieces into a conductive material by electrochemical polymerization of aniline on the plastic surface assisted by graphite. The conductive piece was then turned into a metallic electrode by potentiodynamic electrodeposition. As a proof-of-concept, we built indirect-3D-printed Pd, Pt and Au electrodes, which were used for glycerol electrooxidation.
Section I: Printing parameters and oxidation states of polyaniline
Section II: Electrochemical measurements
Electrochemical measurements were carried out in a potentiostat/galvanostat model µAutolab Type III with a current integrator algorithm. Polymerizations were performed in a 1 mol L-1 HCl supporting electrolyte solution at 0.05 V s-1, as discussed later. H2PtCl6·6H2O, PdCl2 and HAuCl4·3H2O were used as metallic precursors; each precursor had a concentration of 6 mmol L-1 of metal in 0.5 mol L-1 H2SO4 solution for the electrodeposition process. The electrochemical profiles of Pd, Pt and Au were registered in 0.5 mol L-1 H2SO4 or 0.1 mol L-1 KOH, as indicated in the text. The electrochemically active surface area (ECSA) is calculated by considering a specific, well-known charge of a surface reaction on each metal. Namely, (i) for Pt, 210 µC cm-2 is taken as the charge involved in the desorption of a hydrogen monolayer; (ii) for Pd, 420 µC cm-2 as the charge released by the desorption of a Pd oxide monolayer; and (iii) for Au, 386 µC cm-2 as the charge involved in the reduction of an Au oxide monolayer. All electrodes were used as catalysts for glycerol electrooxidation in an alkaline medium. Electrochemical measurements were performed using an Ag/AgCl reference electrode, except for the electrochemical growth of PANI, which was performed using a reversible hydrogen electrode (RHE). All potentials are corrected to the RHE scale. A high-surface-area Pt plate was used as counter electrode. The potential ranges for registering the electrochemical profiles and for performing glycerol electrooxidation are given throughout the text.
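The ECSA calculation described above reduces to dividing the measured voltammetric charge by the metal-specific reference charge density. A minimal sketch using the reference densities quoted in the text (the measured-charge values in the example are hypothetical):

```python
# ECSA from a measured surface-reaction charge and the metal-specific
# reference charge density. Reference densities are those quoted in the
# text: Pt 210, Pd 420, Au 386 µC/cm^2. Example charges are hypothetical.

REF_CHARGE_UC_PER_CM2 = {"Pt": 210.0, "Pd": 420.0, "Au": 386.0}

def ecsa_cm2(measured_charge_uc, metal):
    """Electrochemically active surface area (cm^2) from a charge in µC."""
    return measured_charge_uc / REF_CHARGE_UC_PER_CM2[metal]

# e.g. a hypothetical 1050 µC of hydrogen-desorption charge on Pt
area = ecsa_cm2(1050.0, "Pt")  # 5.0 cm^2
```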
Section III: Characterization of the electrodes
Scanning electron microscopy (SEM) images were acquired with an FEG/SEM (Tescan) using an electron beam of 10 kV. All images were registered using an in-beam secondary electron detector. Energy dispersive spectroscopy (EDS) analysis of the modified electrodes was performed with a coupled EDS detector (Oxford Instruments). The elemental analyses were carried out in mapping mode over the area of each selected image.
Section IV: Building Pt and Au electrodes on 3D-printed templates: Assessments of activity towards glycerol electrooxidation
The most common catalyst for the electrooxidation of glycerol and other alcohols is Pt. Taking this into consideration, we also built an indirect-3D-printed Pt electrode. The same protocol was followed to turn the PLA into a GR/PAni/PLA template. Afterwards, the template was immersed in a 6 mmol L-1 H2PtCl6·6H2O solution in 0.5 mol L-1 H2SO4 for potentiodynamic electrodeposition between 0.05 and 1.2 V. Similar to the previous case, the currents associated with the surface phenomena become more evident as more Pt is deposited on the template, as shown by the HUPD region in Figure S2A. After the electrodeposition process, a profile was registered to confirm a successful partial Pt covering (Figure S2B). Figure S2B shows a characteristic profile of a Pt electrode in 0.5 mol L-1 H2SO4.
The HUPD region is evident between 0.05 and 0.35 V; however, the profile is slightly inclined, suggesting some degree of resistivity. This feature may be a consequence of a large region devoid of metal deposition compared to the Pd electrode, but it does not affect the catalytic properties, as seen in Figure S2C. The main achievement of this work is that Pt was successfully deposited on the template, as shown by the EDS compositional mapping in the carbon region (Figure S2F) and in the Pt region (Figure S2G). Moreover, the EDS spectrum of the indirect-3D-printed Pt electrode shown in Figure S2H confirms the presence of Pt through the peak at ~2.06 keV.
Another potential candidate for use as a catalyst in alcohol fuel cells and electrolyzers in alkaline medium is Au. An indirect-3D-printed Au electrode was built by following the same protocol as above. Successive potential cycles in the range of 0.05-1.75 V were applied to the GR/PAni/PLA template in the presence of 6 mmol L-1 HAuCl4·3H2O in 0.5 mol L-1 H2SO4, as shown in Figure S3A. The increasing peak currents in two regions of the voltammogram show the electrodeposition of Au: one between 1.2 and 1.75 V in the positive scan, related to the gold surface oxides, and one between 1.4 and 0.8 V during the reverse scan, related to the reduction of the surface oxides. The characteristic Au profile shown in Figure S3B was obtained in 0.1 mol L-1 KOH between 0.05 and 1.65 V. The activity of the Au electrode was also investigated for glycerol electrooxidation, as shown in Figure S3C. Similar to what occurs for glycerol electrooxidation on Pd and Pt, the alcohol is electrooxidized during both the direct and reverse potential scans. The onset potential is 0.7 V; afterwards, the anodic current increases until it reaches a maximum value at ~1.28 V, and then decreases at more positive potentials.
These electrocatalytic parameters are similar to those reported for glycerol electrooxidation (0.1 mol L-1 glycerol + 0.1 mol L-1 NaOH) on bulk Au: 1 Gomes et al. found 0.65 V and ~1.4 V as the onset and peak potentials, respectively. 1 During the reverse scan, the surface is reactivated and starts oxidizing the alcohol and the partially oxidized compounds at ~1.2 V, forming a well-defined peak centered at 1.15 V. Figure S3D shows a representative SEM image of the Au electrode. The potentiodynamic electrodeposition leads to the growth of non-uniform, polygonal-shaped Au particles with diameters of about 200-700 nm. A section of another region of the electrode surface (Figure S3E) was also investigated by EDS compositional mapping. Figure S3F shows an extended red-colored region dominated by carbon species, whereas the Au particles appear as the green-colored regions in Figure S3G. Finally, the EDS spectrum in Figure S3H indicates the presence of Au through the peak at ~2.15 keV.
Angry pathogens, how to get rid of them
The purpose of this paper is to present a new approach for introducing to a non-scientific audience a major public health issue: access to safe drinking water. Access to safe drinking water is a privilege in developed countries and an urgent need in the third world, which implies always more efficient and reliable engineering tools to be developed. As a major global challenge it is important to make children aware of this problem for understanding (i) what safe drinking water is, (ii) how ingenious techniques are developed for this purpose and (iii) the role of microfluidics in this area. This paper focuses on different microfluidic-based techniques to separate and detect pathogens in drinking water that have been adapted to be performed by a young audience in a simplified, recreational and interactive way.
I. Introduction
Diarrhoea is often considered in developed countries a classical gastrointestinal complaint, unpleasant but usually not too serious. However, this illness results in 1.5 million deaths each year, most of them involving children, and is mainly due to the ingestion of pathogens through water, food or unclean hands. The observation in (Prüss-Üstün et al., 2008) 1 highlights the high privilege in developed countries of having access to specific water treatments, resulting in the delivery of safe drinking water. However, and despite these treatments, several outbreaks are reported every month. The Drinking Water Inspectorate 2 reports around 60 significant events caused by pathogens in water supplies in England and Wales in 2012, whose sources are not always clearly identified. The main difficulties when dealing with pathogens are, first, the large variety of existing harmful pathogens (viruses, bacteria and protozoa) and, second, the detection of their presence, since they occur at extremely small concentrations in large volumes of water. Their separation and detection are thus time-consuming tasks (days are typically needed) that require experienced staff. 3 As a consequence, only three microbiological parameters are set by the European regulation to reflect water quality: E. coli, Enterococci and Pseudomonas aeruginosa, all set to 0 bacteria per 100 mL of sample (per 250 mL for bottled water). 4 Current limitations are thus the correlation between these parameters and the concentration of all waterborne pathogens, and the delay in detecting a pathogen, which can be long enough for a significant part of the population to be affected. One could easily imagine how serious the situation could be in the presence of dangerous pathogens resistant to treatment. Cryptosporidium, for instance, has already been detected in water despite the absence of these indicators, 3,5 and is routinely tested for in UK waters.
The development of new approaches is thus a growing and necessary research area leading to several new national, European and international projects. For instance, Aquavalens (http://aquavalens.org/) is a European project launched in April 2013 that "is centred on the concept of developing suitable platforms that harness the advances in new molecular techniques to permit the routine detection of waterborne pathogens and improve the provision of hygienically safe water for drinking and food production that is appropriate for large and small systems throughout Europe". Some of the techniques adapted in this paper for the comprehension of children are funded by this project, which highlights how the proposed public engagement is close to current laboratory techniques under investigation. Both within this project and other research initiatives, many different detection schemes have been proposed 3 and sample processing research is also developing. Microfluidics has recently been applied to both sample processing and detection within waterborne pathogen monitoring 6,7 with promising results. This paper focuses on how to introduce the existing approach and microfluidic alternatives to children in an interactive and recreational way.
II. Teaching objectives and workflow
There are a lot of different techniques that can be used for pathogen separation and detection. Detection can be based on growing cultures or highly specific biosensors for instance. 3 Presenting all the existing techniques would be a tedious task beyond the scope of this activity, and we here focus on emerging microfluidic approaches.
Laursen et al. 8 evaluated the impact of scientists in the classroom and the features that enhance positive student outcomes from a specific activity. These features include: (i) equipment and materials that enable science learning experiences, (ii) interesting science topics and (iii) a style of presentation with hands-on and inquiry approaches. The proposed activity thus tries to encompass these features by selecting specific separation and detection techniques that can be reproduced easily and handled by children in a recreational but educational way.
As presented in Fig. 1, this paper focuses on the introduction to waterborne pathogen detection through a set of different modules dedicated to the standardized Immuno-Magnetic Separation (IMS), two microfluidic-based separation techniques (IMS and Deterministic Lateral Displacement) and, finally, pathogen detection by fluorescent labelling. All or a selection of modules could be delivered according to the age of the participants, learning objectives, time available, cost, etc., either in schools or as an outreach activity at science festivals. Indicative costs are given for each module independently; however, some materials are common to multiple modules, reducing total costs. Each module employs familiar and widely available materials. To highlight the feasibility of these modules, each activity has been performed "in-house" without laboratory facilities. Cartoons are also proposed throughout the paper to illustrate the different topics introduced here and to broaden the spectrum of the audience to a non-scientific arena.
On one hand, Immuno-Magnetic Separation is a well-known and efficient technique to separate and concentrate specific biological matter. This technique is part of the standard protocol (USEPA Method 1623) developed for the recovery and detection of protozoa. To the best of the authors' knowledge, there is no public engagement activity based on this technique. On the other hand, microfluidics is a rapidly growing research area whose applications to drinking water are quite scarce, though increasing in recent years. 7 Due to its success in research laboratories, literature for introducing microfluidics to students is flourishing as well. [9][10][11][12][13] However, most of these papers target middle school, high school or undergraduate students. The audience of the proposed activity is young children, to enhance their interest in science, technology, engineering and mathematics (STEM) and to promote the next workforce generation. By coupling microfluidics to waterborne pathogen detection, an interesting approach is proposed to children for understanding what microfluidics is and how relevant it could be for a concrete application. Fig. 2 shows a standardised method for separating and detecting Cryptosporidium, a well-known and highly resistant pathogen encountered in water systems. This method incorporates five concentration steps, with two stages of filtration and elution followed by centrifugation, to minimize the volume of liquid and thus concentrate particles.
III. How it works in research labs…
The filtration steps rely on the size of particles to remove them from the water sample. All particles larger than the pore size of the filter will be trapped while the smaller particles will remain in water. As a consequence, a mix of different particles can be present after the concentration steps as long as they present a diameter larger than the filter pore size, only some of which will be pathogenic. Specific techniques are therefore needed to identify which particles are present to evaluate the water quality and if consumers can safely use this water. The next paragraph presents one of them, namely the Immuno-Magnetic Separation. The accompanying support poster proposed for introducing in a simplified manner notions of waterborne pathogens and their separation is shown in Fig. 3.
A. Standardized Immuno-Magnetic Separation (IMS)
The principle of the Immuno-Magnetic Separation (IMS) is schematically represented in Fig. 4. It relies on the addition of specific magnetic beads coated with antibodies 14 (e.g. anti-Cryptosporidium if the presence of Cryptosporidium needs to be confirmed). Particles in the sample are only captured if they correspond to the specific antibodies coating the magnetic beads, and can then easily be removed using a strong magnet. Although IMS is a powerful technique for separating specific biological particles such as pathogens, the standard protocol is usually limited to small sample volumes and requires the intervention of experienced staff. Microfluidic-based techniques are a growing topic for proposing smart alternatives to water issues, and one approach that has been taken is to perform on-chip IMS. [15][16][17][18]
B. Microfluidic based separation techniques
Microfluidics is defined as "the science and technology of systems that process or manipulate small (10 −9 to 10 −18 litres) amounts of fluids, using channels with dimensions of tens to hundreds of micrometres". 19 Fig. 5 proposes a cartoon for introducing the notion of microfluidics and the manufacturing procedure of microchannels. The module related to microfluidics (module 3) is of prime importance for allowing children to have a better representation of systems that are presented in the following modules (modules 4 and 5).
B.1. Microfluidic based Immuno-Magnetic Separation. Microfluidics can offer several advantages over the standardized IMS, which explains the wide range of publications related to this topic. [15][16][17][18] A microfluidic-based IMS is more automated, can deal with larger sample volumes than standard IMS, and miniaturizes the procedure into one on-chip unit. Another main advantage is the possibility of integrating several other procedures within the same device, such as the detection of trapped particles. Techniques based on fluorescence detection have, for instance, been proposed in the literature for identifying the presence of pathogens on-chip. 18 In order to introduce simply the notion of a "multitask" chip, a microfluidic-based IMS is coupled with a piezoelectric sensor (Fig. 6). For pathogen detection, antibodies are usually immobilized onto the surface of a piezoelectric sensor. When pathogens are trapped, a shift in the resonance frequency of the sensor is detected and correlated to the mass of pathogens blocked at the surface. 20,21 This approach is here extended to the detection of antibody-coated magnetic beads and pathogens on the magnet. For a simple realisation, the piezoelectric sensor will detect the vibration due to the impact of particles onto the magnet, which will turn on a red LED. Immuno-Magnetic Separation provides excellent recovery rates but remains specific to one particle/antibody combination. This procedure has to be iterated if different particles have to be detected, and requires the corresponding specific antibodies, which are not always readily available and can be very expensive. When applied to drinking water, this iterative procedure is a limiting step to the fast detection of all the potentially harmful pathogens. Moreover, smaller pathogens such as viruses are not concentrated by the centrifugation step (step 5 in Fig. 2); they will remain in the supernatant and require further specific and expensive separation steps such as ultracentrifugation.
B.2. Deterministic Lateral Displacement (DLD). Different techniques have been developed in the literature for sorting particles using microfluidic devices. 7,22 This paper only focuses on one of these techniques, referred to as Deterministic Lateral Displacement (DLD), initiated by the work of Huang et al. in 2004 23 and easily reproducible at a macroscopic scale with LEGO®, 24-26 thus highly suitable for manipulation/ visualisation by children. The basic idea of DLD is to separate particles by changing their trajectory within a channel depending on their size. "Large" particles (i.e., particles larger than a critical diameter defined below) are deviated from their initial position due to the presence of posts placed in the microchannel.
These posts are designed with a specific geometry and periodicity in order to separate particles above a desired critical diameter Dc: 27

Dc = 1.4 G ε^0.48

with G the distance between two posts (see Fig. 7) and ε defined as

ε = d/λ = 1/N = tan θ

where d is the shift between two successive vertical posts, λ is the centre-to-centre distance between two successive horizontal posts (see Fig. 7), N is the periodicity of the post array and θ is the angle of deviation of the posts. Due to the specific fluid motion present in devices containing posts, particles above the critical diameter are deviated while small particles follow an ultimately straight path. This technique is relevant for introducing the safe drinking water challenge since pathogens present different characteristic sizes depending on their kingdom (nanometres for viruses, around a micrometre for bacteria and several micrometres for protozoa). Although studies focusing on the separation of non-spherical biological particles are limited and need further investigation before this method can be fully applied to waterborne pathogens, DLD devices can be produced at a macroscopic scale with LEGO®. This offers an excellent interactive approach to introduce current research aims to children and is easy to implement in schools or during outreach activities, for example. The support poster proposed for introducing the notion of Deterministic Lateral Displacement is shown in Fig. 8.
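The critical diameter of a DLD array can be evaluated directly from the post geometry. The sketch below uses Davis's empirical relation Dc = 1.4 G ε^0.48 with ε = d/λ = 1/N = tan θ; the gap and shift values are illustrative examples, not taken from the paper.

```python
# Critical diameter of a DLD post array via Davis's empirical relation:
#   D_c = 1.4 * G * eps**0.48, with eps = d/lambda = 1/N = tan(theta).
# The gap and row-shift values below are illustrative, not from the paper.

def dld_critical_diameter(gap_m, eps):
    """gap_m: clear gap G between posts (m); eps: row-shift fraction."""
    return 1.4 * gap_m * eps ** 0.48

gap = 10e-6        # a 10 µm gap between posts
eps = 1.0 / 10.0   # the post pattern repeats every N = 10 rows
d_c = dld_critical_diameter(gap, eps)
# particles larger than d_c (here a few µm) are laterally displaced;
# smaller ones follow the average flow direction
```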
C. Detection
The last step of the process to be introduced to children is the detection of the separated pathogens. This process usually relies on the labelling of pathogens with specific fluorescent antibodies. Using a fluorescent microscope, pathogens conjugated with fluorescent antibodies can then easily be detected and counted (Fig. 9).
IV. How it works with children…
Now that the challenge of pathogen separation and detection has been introduced, this paper presents an easy and interactive way to reproduce and illustrate these different techniques with children. Detailed explanations to reproduce all the experiments are proposed in the ESI. †
A. Immuno-Magnetic Separation with FIMO®
Pathogens and other biological particles are represented in a simplified and magnified manner using FIMO® clay (Fig. 4-bottom). FIMO® is a soft polymer clay, available in a large range of colours, that can be easily shaped and then hardened after baking for 30 minutes in an oven at 110°C.
In this paper, and for ease of children's understanding, two kinds of particles have been represented: -"Bad" particles, the red and brown particles in Fig. 4, 6 and 8. "Bad" particles represent waterborne pathogens, defined by the Environmental Agency as microorganisms capable of causing disease that may be transmitted via water and acquired through ingestion, bathing or by other means. The size of the red and brown particles is roughly the same (diameter around 1.6 cm). So that children can identify these "bad" particles, they are represented with angry faces. Note that the faces could be painted directly on the baked polymer clay by children; in this paper, the angry faces are also made with the polymer clay. To reproduce the Immuno-Magnetic Separation, small magnetic beads are incorporated inside the red "bad" particles (Fig. 4-bottom) before baking.
-"Good" particles, defined as non-harmful for humans. These are the yellow and green particles throughout the paper. They are smaller than the "bad" ones so that they can later be separated using DLD which, as mentioned previously, uses particle size as the sorting parameter.
Note that this representation of "good" and "bad" particles with different sizes is obviously a large simplification of reality. Even within the same "family" of pathogens, some are harmful while others are not. Researchers are still challenged to define the pathogenic characteristics of these particles, a problem far too complex to be introduced within minutes to children.
Assuming this simplification, the Immuno-Magnetic Separation focuses here on the removal of the red particles. The magnetic antibodies are represented in Fig. 4 by small fluorescent beads, also made with FIMO® (fluorescent FIMO® no. 04). Small magnetic beads are also incorporated in these fluorescent beads before baking so that they are attracted toward the red particles.
Children only have to incorporate these fluorescent beads in the sample and observe that they are directly attracted by the red "bad" particles. A strong magnet is then used to remove all (and only) the red "bad" particles.
B. Microfluidics
In order to understand the notion of microfluidics and its relevance to waterborne pathogen separation, a simple procedure based again on FIMO® is proposed. Using a block of FIMO® that is flattened with a book or a rolling pin, a channel is created using a mould, here for example a wooden letter Y. A Y-channel is produced to complement the Y-channel proposed for the microfluidic-based IMS (module 4), although this approach allows an infinite number of designs to be created (see the angry pathogen device at the bottom right of Fig. 4). Three smaller channels are then produced using a toothpick to allow the liquid to enter and exit the device. To close the channel, a piece of Plexiglas is used. After baking the FIMO® block, transparent bathroom silicone is finally used to bond it to the Plexiglas layer. Using a needle-tip bottle, a red liquid (e.g. squash or food dye) is introduced through one of the holes.
Yang et al. 10 proposed in their paper an interactive and hands-on activity for manufacturing magnified microfluidic devices with Jell-O® dessert. This fun and simplified approach, closer to the actual procedure of manufacturing, can directly be related to the proposed activity if time is available. However, the FIMO® approach allows children to easily touch and mould their own device during an outreach activity for instance.
B.1. Microfluidic based Immuno-Magnetic Separation. For the microfluidic-based IMS, a Y-shaped channel (29 cm in length, 5 cm in width and 3.5 cm in height) made of Plexiglas is inclined. A similar device made with a plastic bottle is also proposed in the ESI† for reducing the costs of the activity. The outlets of the channels (the two branches of the Y-channel) can be left open to allow the fluid and particles to be collected in two different cups. A small support is fixed on the wall of the channel to hold the magnet while being easily removable by children. A piezoelectric sensor is then placed next to the magnet with transparent blue tack to detect the shock of trapped particles against the magnet. For safety reasons, the magnet and piezoelectric sensor are placed outside the channel to avoid any contact with water. A small piece of foam (see Fig. 6) is placed at the bottom of the channel inlet to absorb the shock when particles enter the channel and to avoid false detections by the piezoelectric sensor. Each shock detected by the sensor propagates a current through an electrical circuit (cf. ESI†) to finally, here, turn on a light (LED). Extensions of this system can easily be imagined by adding a buzzer, several LEDs to indicate the force of the impact against the magnet, etc. At the beginning of the experiment, a set of particles is poured into the device just above the foam (Fig. 6).
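The detection logic carried by the piezoelectric sensor and the LED can be illustrated with a simple threshold test. The Python sketch below is purely illustrative (the trigger voltage and sample values are hypothetical, not taken from the paper or its ESI circuit):

```python
# Minimal simulation of the piezoelectric impact detector: a shock
# against the magnet produces a voltage spike, and any sample above
# the threshold turns the warning LED on.
THRESHOLD_V = 0.5  # hypothetical trigger voltage, volts

def led_state(samples, threshold=THRESHOLD_V):
    """Return True (LED on) if any voltage sample exceeds the threshold."""
    return any(v > threshold for v in samples)

# Quiet channel: no particles trapped, only sensor noise.
quiet = [0.01, 0.03, -0.02, 0.02]
# A particle hits the magnet: a clear voltage spike appears.
impact = [0.02, 0.9, 0.1, 0.01]

print(led_state(quiet))   # False -> LED stays off
print(led_state(impact))  # True  -> LED turns on
```

The same thresholding idea extends naturally to the suggested variants (a buzzer, or several LEDs triggered at increasing thresholds to indicate impact strength).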
Since the device is inclined, particles will roll down by gravity. The outlet on the left of the channel is initially closed, here by a piece of flexible plastic from a plastic bottle. All the particles will then flow into the right outlet of the device. A second experiment is then performed, this time with magnetic beads incorporated inside the red "bad" pathogens and with antibodies that also contain magnets (similarly to the standard IMS). The red pathogens and antibodies are attracted to each other and, while flowing in the device, will be deviated by the magnet.
When trapped, the piezoelectric sensor will detect the shock, which will then turn on the light to warn of pathogen presence. Once pathogens are detected, the right outlet of the channel is closed, the left one opened and the magnet removed. All the trapped particles will finally flow into the left outlet and are thus separated from the other particles.
B.2. Separation of all the "bad" particles using DLD and LEGO®. After this first separation and detection step, children should notice that other "bad" particles (the brown particles in Fig. 4 and 6) remain in the water sample and cannot be separated by IMS, since they don't have the corresponding antibodies in this activity. The last step of this experiment thus consists of trying to remove all the "bad" particles with another technique, the Deterministic Lateral Displacement (DLD) presented previously. The microchannels and posts used in our laboratory are here represented by a rectangular vase (IKEA®, Rektangel) and a LEGO® board with cylindrical LEGO® posts of diameter D = 7.8 mm to shape the obstacles (Fig. 8).
The positions of the posts are crucial to separate "good" from "bad" particles. In this paper, the following configuration is proposed:
- Gap between two posts G = 1.7 cm.
- ε = 0.37. This parameter can easily be determined by measuring the angle θ between the first blue line and the vertical axis; ε can then be deduced from θ based on eqn (2).
Based on eqn (1), the critical diameter of this system is thus 1.47 cm. As presented in Fig. 8, red and brown particles with a diameter around 1.6 cm are larger than the critical diameter and are thus deviated in the device to follow the blue path. Yellow (1.1 cm in diameter) and green (0.8 cm in diameter) particles are smaller than the critical diameter and follow a straight path within the LEGO® device.
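The numbers above can be checked with the widely used empirical DLD relation D_c = 1.4 G ε^0.48 (this specific formula is an assumption here, since the paper's eqn (1) is not reproduced in this excerpt, but it matches the quoted value). The short Python check below recovers the ~1.47 cm critical diameter and classifies each FIMO® particle by size:

```python
# Critical diameter of the LEGO(R) DLD array, using the common
# empirical relation D_c = 1.4 * G * eps**0.48 (assumed form).
G = 1.7     # gap between two posts, cm
eps = 0.37  # row-shift fraction

d_c = 1.4 * G * eps ** 0.48
print(f"critical diameter: {d_c:.2f} cm")  # ~1.48 cm (the paper quotes 1.47 cm)

# FIMO(R) particle diameters from the paper, in cm.
particles = {"red": 1.6, "brown": 1.6, "yellow": 1.1, "green": 0.8}
for name, diameter in particles.items():
    path = "displaced" if diameter > d_c else "straight"
    print(f"{name}: {path}")
```

As expected, the red and brown "bad" particles (1.6 cm) exceed the critical diameter and are displaced, while the yellow and green "good" particles pass straight through.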
It can be noted that such macroscopic experiments cannot be performed in water. Microfluidics is characterized by laminar flow and thus slow fluid motion. To reproduce this phenomenon, viscous media have to be considered. While glycerol is used in some studies, [24][25][26] in the present paper, and for safety reasons, diluted shower gel is used. Depending on the product used, and especially its viscosity, it can be used pure, without dilution; but if the viscosity is too high, particles will need a very long time to pass through the LEGO® device. If so, a slight dilution with tap water can solve the problem. The shower gel should be carefully introduced into the vase to prevent air bubbles from being trapped in the liquid. Due to the high viscosity of the solution, air bubbles take a long time to rise and hinder any visualization in the vase. The liquid should be introduced carefully, for instance by pouring it against the LEGO® board to avoid bubble formation. Finally, it is important to mention that the taller the device, the larger the displacement between large and small particles.
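The need for a viscous medium follows from the Reynolds number Re = ρvL/μ: the macroscopic model must stay in the same low-Re, laminar regime as a real microchannel. The quick Python estimate below uses illustrative values (all densities, speeds, length scales and viscosities here are assumptions for the sake of the comparison, not measurements from the paper):

```python
def reynolds(rho, v, L, mu):
    """Reynolds number Re = rho * v * L / mu (dimensionless).
    rho: density (kg/m^3), v: speed (m/s), L: length scale (m),
    mu: dynamic viscosity (Pa s)."""
    return rho * v * L / mu

# A microfluidic channel in water (illustrative: 100 um, 1 mm/s).
re_micro = reynolds(rho=1000.0, v=1e-3, L=100e-6, mu=1e-3)
# The macroscopic LEGO(R) array in plain water (1.7 cm gap, 10 cm/s).
re_macro_water = reynolds(rho=1000.0, v=0.1, L=0.017, mu=1e-3)
# The same device in a viscous medium (shower gel, mu assumed ~5 Pa s,
# with correspondingly slower particle motion).
re_macro_gel = reynolds(rho=1000.0, v=0.01, L=0.017, mu=5.0)

print(re_micro)        # ~0.1  -> laminar, as in real microfluidics
print(re_macro_water)  # ~1700 -> far from the creeping-flow regime
print(re_macro_gel)    # ~0.03 -> laminar again
```

This is why diluted shower gel (or glycerol in other studies) restores the slow, orderly fluid motion that DLD relies on.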
C. Detection with insect magnifier
After separation, the number of "bad" particles trapped by IMS is counted by fluorescence. All the trapped particles are placed within a fake fluorescence microscope composed of a children's insect magnifier placed in a black-painted cardboard box, allowing the fluorescence of the fluorescent magnetic beads to be seen (the fluorescence of the beads is hardly visible in daylight) (Fig. 9).
Even though simplified in comparison with the real process for labelling pathogens, this approach allows children to be introduced to complex notions such as antibodies, fluorescence, microscopy while being able to run the whole experiment on their own.
V. Conclusion
This paper presents a new and original approach to introduce children to major scientific challenges. A recreational and interactive procedure is proposed to define notions of safe drinking water, pathogens, separation, detection and microfluidics. By simplifying and magnifying laboratory procedures, the next work-force generation can enjoy being part of the research world by visualizing, testing, running experiments and analysing results related to this water problem. The procedure has been developed as a story, starting from the presence of particles in water that require magnifying techniques to be visualized, followed by a first separation procedure (Immuno-Magnetic Separation) specific to one particle/antibody combination. The several advantages offered by microfluidics are then introduced in the context of waterborne pathogen separation. Once all the components used in this activity are completed (FIMO® beads, LEGO® board, etc.), the duration of this "story" is about 30 minutes. The activity can easily be shortened by not presenting all the modules proposed in the paper. The total cost of each module is kept as low as possible (around £15 for the vase, £20 for the LEGO®, £10 for FIMO®, £20 for the magnetic beads, £20 for the shower gel, £10 for the cardboard box and the insect magnifier, £25 for the piezoelectric sensor). The activity presented in this paper is easy to run and can involve children from the beginning (particle modelling, etc.) to introduce complex notions in a fun and interactive manner. Such activities are of prime interest to familiarize children with the world of science and with interesting, growing research topics, and perhaps to promote scientific vocations.
DLD: Deterministic Lateral Displacement
IMS: Immuno-Magnetic Separation
LED: Light-Emitting Diode
STEM: Science, Technology, Engineering and Mathematics
USEPA: US Environmental Protection Agency
Musculoskeletal computational analysis on muscle mechanical characteristics of drivers’ lumbar vertebras and legs in different sitting postures
Fei Gao 1, 2, Shi Zong 3, Zhi-Wu Han 2, Yang Xiao 1, Zhen-Hai Gao 1
INTRODUCTION
Today, prolonged sitting has been the most common work posture in the industrialized areas, especially for professions that need to use vehicles as a work tool, such as for taxi drivers. A highly comfortable car-seat not only increases the safety by relieving drivers' physical and mental fatigue but potentially enhances the psychological acceptance of consumers. Therefore, it is essential to investigate seat comfort in automotive seat development.
A reliable analysis method for measuring seat comfort must be established by combining objectively measurable comfort-quantifying parameters with subjective comfort. Traditional evaluation methods considered the content of survey items, the precise number of rating scales, a reasonable crowd positioning, and the motivation of the respondent 1-3. However, it is difficult to accurately obtain information about the level of muscle activity and joint strength by measuring the pressure distribution using traditional development means 4, 5.
SUMMARY
Using computer-aided engineering (CAE) in the concept design stage of automobiles has become a hotspot in human factor engineering research. Based on human musculoskeletal biomechanical computational software, a seated human-body musculoskeletal model was built to describe the natural sitting posture of a driver. The interaction between the driver and car in various combinations of seat-pan/back-rest inclination angles was analyzed using an inverse-dynamics approach. In order to find out the "most comfortable" driving posture of the seat-pan/back-rest, the effect of seat-pan/back-rest inclination angles on the muscle activity degree, and the intradiscal L4-L5 compression force were investigated. The results showed that a much larger back-rest inclination angle, approximately 15°, and a slight backward seat-pan, about 7°, may relieve muscle fatigue and provide more comfort while driving. Subsequently, according to the findings above, a preliminary driving-comfort function was constructed.
RESULTS
Successful construction of a finite element model of a seated human
Fig.1a shows that the vertebral spine is divided into four sections. An apparent S-shape is revealed for the entire upright spine, which is called the normal physiological curvature of the spine. As can be seen in Fig.1b, many muscle groups in the leg are also involved in driving, including the gluteus maximus, semitendinosus, iliopsoas, sartorius, and anterior tibial. Ultimately, a seated human-body model in a normal position was developed, as shown in Fig.1c. According to the recommended sizes for the interior overall arrangement in passenger cars, the initial height of the seat was set to 0.38 m. The angles α and β represent the inclination of the back-rest and the seat-pan, respectively. The dimensions of the other shell components were entered using the minimum sizes of the design specifications, which are displayed in Fig.1c.
The effect of seat-pan/back-rest adjustment on the muscles of the lumbar vertebra and legs
Next, we investigated the muscle activity degree and the distribution of the compression force in the major working muscle groups of the spine and leg using an inverse-dynamics approach in the present work.
In the musculoskeletal human-body model simulation test, the inclination angle of the back-rest and seat-pan was changed from 0° to 15°, adjusted 1.5° at a time. As the seat inclination angle changed, four major working muscles in the spine were affected, namely the erector spinae, semispinalis, musculus obliquus externus abdominis, and musculus transversus abdominis, which are shown in Fig.2. The effect of the inclination angle change of the back-rest and seat-pan on the erector spinae muscle activity degree is shown in Fig.2a. Careful observation of Fig.2a shows that the muscle activity degree monotonically increases with the decrease of the inclination angle of the back-rest, and a reverse trend occurred for the adjustment of the seat-pan. Additionally, a larger rangeability is revealed by adjusting the inclination angle of the back-rest. Similar variation tendencies can be observed in Fig.2(b)-Fig.2(d), which describe the effect of the inclination angle change of the seat-pan and back-rest on the muscle activity degree of the musculus obliquus externus abdominis, semispinalis, and musculus transversus abdominis, respectively.
In this study, we used a musculoskeletal human-body model from the analysis software AnyBody to predict muscle activity and spinal joint force, and analyzed the interaction between a passenger and a vehicle in various combinations of seat-pan and back-rest inclination angles using an inverse-dynamics approach. A preliminary driving-comfort function (DCF) was created by analyzing the simulation results.
AnyBody Model
The Anybody Modeling System, initially developed at Aalborg University, was used as a musculoskeletal model and simulation program in the present work 6 .
Car-seat models
A universal seat model is used in this study, which consists of five rigid bodies (head-rest, back-rest, seat-pan, leg-rest, and foot-rest) and several revolute joints to adjust the inclination angle of the back-rest and seat-pan.
Musculoskeletal human-body model
The musculoskeletal human-body model used is named "seated human model", downloaded from the public-domain AnyScript Model Repository. The model contains more than 500 individual rigid bones, joints, muscles, and tendons with characteristics of physiology.
Integration of the human body and seat models
In the present work, a finite element model of a seated human was established to analyze driving fatigue. It contained a simplified human-body musculoskeletal model and a generic car-seat model. The bones and soft-tissue muscles were used to develop the seated-human finite element model. The shell components of the car-seat model comprised the foot-rest, seat-base, seat-pan back-face, back-rest back-face, and head-rest back-face, together with three solid components in contact with the human body: the seat-pan, back-rest, and head-rest. According to the GB10000-88 standard for the average Chinese adult body size, we adjusted the original human model (167.8 cm, 59 kg) by adopting the 50th percentile of Chinese adult male body sizes, so that the model reflects the basic body characteristics of Chinese adult drivers.
Comparing the results in Fig.2(b)-Fig.2(d), it can be seen that the muscle activity degree of the erector spinae is the largest, about 7.12%, in the red area, when the inclination angle is in the original state.
The effect of changing the inclination angle of the seat-back and seat-pan on the magnitude of the compression force in the L4-L5 of lumbar vertebra, which are investigated because of the most frequent contact with the seat, is shown in Fig.2(e). Carefully observing the result of the simulation in Fig.2(e), it can be found that the compression force suffered by the musculus transversus abdominis gradually changes as the inclination angle of the seat-rest and seat-pan is adjusted. These results obtained and shown in Fig.2 demonstrate that the reasonable adjustment of the inclination angle for the seat-rest and seat-pan helped the human-body muscles to relax. Fig 3 shows the effect of the seat inclination angle on the activity degree of muscles of the left and right legs, including the gluteus maximus, semitendinosus, iliopsoas, sartorius, and anterior tibial. Similar results can be seen in Fig.3. Fig.3a shows that when the inclination angle of the seat-pan is about 6°, the muscle activity degree of the gluteus maximus in the left leg is almost reduced to zero, while the lowest muscle activity degree in the right leg found was about 10.5°. Similar results for the muscle tissues of the left leg can be observed in Fig.3(d)-(e), which is similar to the results shown in Fig.3a. Combined with the above analysis, from Fig.3 it can be concluded that the muscle activity degree monotonically decreases with the increase of the seatpan and back-rest inclination angle.
Corresponding partial correlation coefficients
Furthermore, several pairs of muscles, like the musculus obliquus externus abdominis, semispinalis, gluteus maximus, and semitendinosus, showed a high correlation coefficient (R > 0.8) after analyzing the correlation data. For the sake of simplifying the analysis, the muscle activity of the erector spinae (MAES), musculus transversus abdominis (MAMTA), gluteus maximus (MAGM), and anterior tibial muscle (MAATM) should be considered in the investigation of the fatigue degree for drivers with a typical driving position. Furthermore, the compression force on the L4-L5 (CFL4-L5) is also important in this investigation. As a result, the driving-comfort function (DCF) can be written as the following general formula:
DCF = f(MAES, MAMTA, MAGM, MAATM, CFL4-L5)
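The paper leaves the DCF in general form. One plausible concrete realization is a weighted sum of the normalized predictors; the Python sketch below is purely illustrative, and the weights, reference force, and sample inputs are hypothetical values, not fitted quantities from this study:

```python
# Illustrative driving-comfort function: a weighted sum of the four
# retained muscle activities (as fractions of maximum activity) and the
# normalized L4-L5 compression force. All weights are hypothetical.
WEIGHTS = {"MAES": 0.3, "MAMTA": 0.2, "MAGM": 0.2, "MAATM": 0.1, "CFL4L5": 0.2}
CF_REF = 1000.0  # hypothetical reference compression force, N

def discomfort_index(maes, mamta, magm, maatm, cf_l4l5):
    """Lower values indicate a more comfortable posture."""
    terms = {"MAES": maes, "MAMTA": mamta, "MAGM": magm,
             "MAATM": maatm, "CFL4L5": cf_l4l5 / CF_REF}
    return sum(WEIGHTS[k] * v for k, v in terms.items())

# Upright seat vs. reclined back-rest (~15 deg) with tilted seat-pan,
# with hypothetical muscle activities and L4-L5 forces:
upright = discomfort_index(0.0712, 0.05, 0.04, 0.03, 900.0)
reclined = discomfort_index(0.02, 0.02, 0.01, 0.02, 600.0)
print(upright > reclined)  # the reclined posture scores as more comfortable
```

Any real DCF would need its weights (and possibly a nonlinear form) fitted against subjective comfort ratings, which is outside the scope of this sketch.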
DISCUSSION
The demand for car-seat comfort is constantly increasing.
[Fig.3 caption: (a-d) The effect of variations in the seat-pan and back-rest inclination angle on the muscle activity degree of the muscle tissues in the legs; (e) the same effect on the anterior tibial muscle activity degree (blue and red represent the muscle tissue in the right and left leg, respectively).]
It is worth mentioning that, from the
perspective of human biomechanics, the human-body movement is a mechanical response that is accomplished by a complex mechanical interaction between muscles, ligaments, joints, and bones, which are controlled by numerous nervous systems 7, 8 . The static or dynamic stabilities of the human body under gravitational and other loads and precise limb behaviors depend on the tensile forces formed by working muscles in the musculoskeletal systems 9 . According to previous literature, the thoracic vertebra section mainly bears the seat-back support, and the most stressed points are always on the T9, T10, and T12 10,11 . At the same time, the compression force in the lumbar vertebra from L1 to L5 changes with different distributions of body weight, and the largest value of stress force is distributed between L4 and L5 12, 13 . The response of the human tendon tissues could be influenced by exterior activities, like stretching or extruding, which results in increased muscle activity and a long-term feeling of muscle soreness. When the muscle activity degree is greater than 1, it exceeds the limits of muscle fatigue activity and represents a state of exhaustion. In this state of fatigue, sustained muscle stretching results in damage to the muscle tissues.
The particularity of the driver's sitting posture when driving a car was taken into account during the process of developing the finite element model, i.e., the driver's right foot was kept on the foot-rest and the hands were placed on the steering wheel. For drivers who spend a lot of time driving, seating comfort is affected by the distribution of the contact pressure on the contact interface between the seated human and the car-seat.
By analyzing the relationship between the muscle activity degree and the inclination angle, we found that the activation intensity of the muscles in the lumbar vertebra is at its lowest when the inclination angle of the back-rest is 15°, while the inclination angle of the seat-pan only needs to be adjusted to 7.5° for the degree of muscle use to be at its minimum. However, for the muscles in the legs, adjusting the inclination angle of the back-rest has no significant influence on the muscle activity degree, whereas a large rangeability is revealed by adjusting the inclination angle of the seat-pan.
The results in the present work demonstrated that the different muscle tissues are subject to varying degrees of compression force or activation, and that adjustment of the seat-pan/back-rest changes the pressure distribution on the muscle tissues, thus helping to relieve driving fatigue. Similar experimental methods and results have been reported in the literature 14, 15. Moreover, it has been reported that a lower maximum contact pressure and a more uniform pressure distribution on the contact interface of the human-body/car-seat contribute to improved seating comfort 16.
The DCF was established as a new auxiliary reference method, providing a way to design seats that are more compliant with human comfort in the future. According to the DCF equation, the comfort of a car seat is related to the muscle activity of the working muscle groups and the compression force on the L4-L5. Hence, driving comfort can be improved by carefully choosing the postural angles and seat adjustment levels.
CONCLUSIONS
This study clarified the effect of the seat-pan and back-rest inclination angle on the muscle activity degree and spinal joint force, which may improve design for car-driver comfort during driving.
A Survey of CH3CN and HC3N in Protoplanetary Disks
The organic content of protoplanetary disks sets the initial compositions of planets and comets, thereby influencing subsequent chemistry that is possible in nascent planetary systems. We present observations of the complex nitrile-bearing species CH3CN and HC3N towards the disks around the T Tauri stars AS 209, IM Lup, LkCa 15, and V4046 Sgr as well as the Herbig Ae stars MWC 480 and HD 163296. HC3N is detected towards all disks except IM Lup, and CH3CN is detected towards V4046 Sgr, MWC 480, and HD 163296. Rotational temperatures derived for disks with multiple detected lines range from 29-73K, indicating emission from the temperate molecular layer of the disk. V4046 Sgr and MWC 480 radial abundance profiles are constrained using a parametric model; the gas-phase CH3CN and HC3N abundances with respect to HCN are a few to tens of percent in the inner 100 AU of the disk, signifying a rich nitrile chemistry at planet- and comet-forming disk radii. We find consistent relative abundances of CH3CN, HC3N, and HCN between our disk sample, protostellar envelopes, and solar system comets; this is suggestive of a robust nitrile chemistry with similar outcomes under a wide range of physical conditions.
INTRODUCTION
Planets form by accreting gas and dust within the protoplanetary disk; the material present in the disk therefore sets the initial composition of planets. Observations of solar system comets suggest that the solar nebula was rich in volatile organic molecules around the time the comets were formed, usually with abundances of a few percent with respect to water ice (e.g. Mumma & Charnley 2011). Understanding how this early inventory of organic molecules developed into the vast complexity of biochemistry is key to the study of the origins of life. Recent successes in prebiotic syntheses of RNA and protein precursors suggest that nitrile-bearing molecules, characterized by the C≡N functionality, played a crucial role in prebiotic chemistry (Powner et al. 2009;Ritson & Sutherland 2012;Sutherland 2016). From cometary studies, HCN, CH 3 CN, and HC 3 N indeed appear to have been common in the young Solar system (Mumma & Charnley 2011;Cordiner et al. 2014;Le Roy et al. 2015). To evaluate whether the nitrile chemistry of the solar nebula is typical, and in turn any implications for the chemical habitability of other planetary systems, observations of planet-forming disks are essential.
The simple nitrile species CN and HCN were first detected towards protoplanetary disks two decades ago (Dutrey et al. 1997; Kastner et al. 1997) and have since been observed towards many additional disks (see Dutrey et al. 2014). Until the advent of ALMA, observational challenges limited our ability to detect and characterize more complex nitrile species in disks. The first disk detections of HC 3 N were made by Chapillon et al. (2012) with the IRAM 30m telescope towards GO Tau, LkCa 15, and MWC 480. CH 3 CN was first detected in a disk by Öberg et al. (2015) towards MWC 480 using ALMA. The molecular emission was spatially resolved, allowing radial abundance profiles of CH 3 CN and HC 3 N to be derived. At comet-forming disk radii the abundances were found to be similar to those measured in solar system comets, suggesting that the solar system is not unique in its nitrile chemistry.
In addition to their relevance to pre-biotic chemistry, CH 3 CN and HC 3 N are, along with CH 3 OH (Walsh et al. 2016), the only large organic molecules detected in protoplanetary disks to date. Because of this, these molecules are key to furthering our understanding of the growth of organic complexity in disks. Their abundances and distributions can be used to benchmark astrochemical models of disks, and in turn to help predict the chemistry of complex molecules which cannot currently be directly observed. This is of particular importance for gaining insights into the ice compositions in disks, which can be constrained only through chemical models.
To date, the number of disks with well-characterized nitrile abundances is small, and it is unclear (i) whether other disks commonly host similar nitrile abundances as the solar nebula, and (ii) how robust the nitrile chemistry is across different circumstellar environments. A larger sample of observations is required to obtain constraints on the nitrile chemistry in disks.
Here, we present observations of the complex nitrile molecules HC 3 N and CH 3 CN towards a diverse sample of six protoplanetary disks: our targets span over an order of magnitude in luminosity and disk age and represent both transition disks and full disks. In Section 2 we describe the observations and data reduction. The observational results are presented in Section 3. In Section 4 a parametric model is used to obtain abundance profiles of CH 3 CN and HC 3 N towards the bright sources MWC 480 and V4046 Sgr. Finally, in Section 5 we comment on implications for the nitrile chemistry in other circumstellar systems based on our findings.
Observations
During ALMA Cycle 2, the HC 3 N 27-26 transition as well as the CH 3 CN 14-13 K-ladder were observed for the disks AS 209, HD 163296, IM Lup, LkCa 15, MWC 480, and V4046 Sgr (project code 2013.1.00226). Disk and stellar properties for each source are listed in Table 1. The MWC 480 data were previously presented in Öberg et al. (2015); these lines were re-analyzed in this work to ensure that all sources were treated consistently. During Cycles 3 and 4, as part of a line survey in disks (project code 2013.1.01070.S), the HC 3 N 31-30 and 32-31 transitions and the CH 3 CN 15-14 and 16-15 K-ladders were observed towards MWC 480 and LkCa 15.
A detailed description of the Cycle 2 observations can be found in Huang et al. (2017). Briefly, from 2014 to 2015 Band 6 observations were taken at two spectral settings, 1.1mm and 1.4mm, containing 14 and 13 narrow spectral windows, respectively. Baselines spanned 18 to 650m, and the total on-source time was ∼20 minutes per source. Amplitude and phase calibration, as well as frequency bandpass calibration, were performed using observations of a quasar. Flux calibration was performed using observations of either Titan or a quasar. A portion of the CH 3 CN 14-13 ladder (14 0 -13 0 , 14 1 -13 1 , and 14 2 -13 2 ) is contained in a spectral window of 59 MHz with a channel width of 61 kHz (0.071 km/s) in the 1.1mm spectral setting. The HC 3 N 27-26 transition is contained in a spectral window of 117 MHz, also with a channel width of 61 kHz (0.075 km/s) in the 1.1mm spectral setting. The Cycle 3/4 observations are described in full in Loomis et al. (subm.). This work made use of data taken in two Band 7 correlator setups at 0.9mm and 1.1mm, each containing 4 spectral windows of 1920 channels with channel widths of 975 kHz (∼0.99 -1.06 km/s for the lines of interest). Baselines spanned 15 to 650m, and the total on-source integration time was ∼20 minutes per source. Phase calibration, bandpass calibration, and flux calibration were all performed using quasar observations.
Data reduction
Initial data calibration was performed by ALMA/NAASC staff.
Two rounds of phase self-calibration were performed using the continuum emission from individual spectral windows, except for HD 163296, which was self-calibrated using averaged spectral windows due to weak continuum emission. Following continuum subtraction, the data cubes were imaged in CASA using the CLEAN task with a 3σ noise threshold. Briggs weighting was used with a robust parameter of 2.0 for all lines except those in the source V4046 Sgr; here, a value of 1.0 was adopted to improve the angular resolution, which was possible due to higher signal-to-noise ratios. To obtain channel maps, the data were regridded to 0.5 km s −1 and 1.1 km s −1 spectral resolution for the Cycle 2 and Cycle 3/4 observations, respectively. CLEAN masks were drawn by hand for lines with obvious emission (V4046 Sgr HC 3 N 27-26 and CH 3 CN 14 0 -13 0 and 14 1 -13 1 ). For all other lines, the 5σ continuum contour was used as the CLEAN mask. A Keplerian mask was applied to the cleaned data cube to obtain moment zero maps, line spectra, and integrated flux densities for each transition. The use of Keplerian masks as CLEAN templates and for spectral extraction is well established (e.g. Rosenfeld et al. 2013a; Loomis et al. 2015); details on the use of Keplerian masking for moment zero map generation will be presented in Pegues et al. (in prep.). Briefly, to construct appropriate masks, the Keplerian velocity of each image pixel was calculated based on its deprojected radius from the star, assuming disk and stellar parameters taken from the literature (Table 1). In each velocity channel, only pixels corresponding to the appropriate Keplerian velocity were included, and all other pixels were masked. The mask outer radii were chosen to encompass all HC 3 N and CH 3 CN emission. The masks were verified to fit the actual disk profile using H 13 CN emission (Guzmán et al. 2017), which has more obvious Keplerian structure than HC 3 N or CH 3 CN.
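The per-pixel Keplerian velocity calculation behind these masks can be sketched as follows. This is a simplified flat-disk version: the function name, grid handling, and the ±0.5 km/s channel tolerance are illustrative, not the exact implementation used in the paper.

```python
import numpy as np

G = 6.674e-11     # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30  # solar mass (kg)
AU = 1.496e11     # astronomical unit (m)

def keplerian_mask(x_au, y_au, v_chan, m_star_msun, incl_deg, pa_deg,
                   v_sys, r_out_au, dv=0.5):
    """Boolean mask: True where a pixel's projected Keplerian velocity
    falls within +/- dv (km/s) of the channel velocity v_chan.

    x_au, y_au : 2-D sky-plane offsets from the star (AU).
    Assumes a geometrically flat disk rotated by the position angle
    and deprojected by the inclination (a deliberate simplification)."""
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    # rotate sky coordinates into the disk frame
    xd = x_au * np.cos(pa) + y_au * np.sin(pa)
    yd = (-x_au * np.sin(pa) + y_au * np.cos(pa)) / np.cos(incl)
    r = np.hypot(xd, yd)            # deprojected radius (AU)
    theta = np.arctan2(yd, xd)      # azimuth in the disk plane
    with np.errstate(divide="ignore", invalid="ignore"):
        v_kep = np.sqrt(G * m_star_msun * M_SUN / (r * AU)) / 1e3  # km/s
        # line-of-sight projection of the Keplerian velocity
        v_los = v_sys + v_kep * np.cos(theta) * np.sin(incl)
        sel = (np.abs(v_los - v_chan) < dv) & (r < r_out_au)
    return sel
```

In practice one such mask is built per velocity channel and the stack is applied to the CLEANed cube.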
An example channel map with its Keplerian mask overlaid is shown in Figure 1 for the HC 3 N 27-26 transition in V4046 Sgr; for all other lines and sources, a similar figure can be found in Appendix A.
Since the moment zero maps are produced by summing only emission within the Keplerian mask, each moment zero pixel represents the sum of a different number of channels. The rms is therefore non-uniform across the moment zero map. We approximate the moment zero rms by bootstrapping: the same Keplerian mask used to obtain the moment zero map is applied to 1000 off-source positions. The rms per pixel is determined from the standard deviation of each mask pixel across all off-source moment zero maps. The median rms value is taken to be the representative moment zero rms, and is quoted in Table 2 and used to draw contours in Figures 2 and 4. The uncertainty in the integrated flux density was estimated from the standard deviation of the integrated fluxes within 1000 off-source Keplerian masks.
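The bootstrap described above can be sketched as follows. Drawing off-source positions via random spatial shifts of the cube (np.roll) is a simplification of the actual procedure, and all names are illustrative.

```python
import numpy as np

def moment_zero_rms(cube, mask, dv, n_boot=1000, seed=0):
    """Bootstrap the moment-zero rms by applying the same Keplerian mask
    at random off-source positions (approximated here by random cyclic
    spatial shifts of the cube).

    cube : (n_chan, ny, nx) image cube (Jy/beam)
    mask : boolean (n_chan, ny, nx) Keplerian mask
    dv   : channel width (km/s)
    Returns the median per-pixel rms over masked pixels."""
    rng = np.random.default_rng(seed)
    n_chan, ny, nx = cube.shape
    m0_samples = []
    for _ in range(n_boot):
        dy = rng.integers(ny // 4, ny // 2)
        dx = rng.integers(nx // 4, nx // 2)
        shifted = np.roll(cube, (dy, dx), axis=(1, 2))   # "off-source" cube
        m0_samples.append(np.where(mask, shifted, 0.0).sum(axis=0) * dv)
    m0_samples = np.array(m0_samples)        # (n_boot, ny, nx)
    rms_map = m0_samples.std(axis=0)         # per-pixel moment-zero rms
    return np.median(rms_map[mask.any(axis=0)])
```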
In V4046 Sgr, the CH 3 CN 14 0 -13 0 line is blended with the 14 1 -13 1 line (spectrum shown in Figure 2). Likewise, in MWC 480 the CH 3 CN 15 0 -14 0 and 15 1 -14 1 lines are blended. To treat blended lines, two Keplerian masks (one centered on each line) were calculated and used to extract the emission from both lines, and the resulting moment zero map sums the total emission. To estimate the integrated flux density from each individual transition, we assume that emission is symmetric around the rest velocity of the source. The integrated flux for the lower-energy transition was therefore assumed to be twice the integrated flux of the lowest-velocity horn (i.e., velocities lower than the source velocity). The integrated flux of the higher-energy transition was taken to be the integrated flux of the total blended feature minus the integrated flux of the low-energy transition. To account for the added uncertainties in this procedure an additional error of 30% the integrated flux was added in quadrature with the bootstrapped integrated flux uncertainty for blended lines.
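The deblending arithmetic amounts to doubling the low-velocity horn of the lower-energy line and propagating an extra 30% term in quadrature; a minimal sketch (function name and the rectangular integration are illustrative):

```python
import numpy as np

def deblend_fluxes(spec_v, spec_jy, v_sys, f_total, f_total_err):
    """Split a blended pair of lines, assuming the lower-energy line is
    symmetric about the source velocity.

    spec_v, spec_jy : velocity axis (km/s) and spectrum (Jy) of the blend
    f_total         : total integrated flux of the blend (Jy km/s)
    Returns ((F_low, err_low), (F_high, err_high)); an extra 30% of each
    flux is added in quadrature, as described in the text."""
    dv = np.abs(np.diff(spec_v)).mean()
    # integrate only the low-velocity horn, then double it
    low = spec_v < v_sys
    f_low = 2.0 * np.sum(spec_jy[low]) * dv
    f_high = f_total - f_low
    err_low = np.hypot(f_total_err, 0.3 * f_low)
    err_high = np.hypot(f_total_err, 0.3 * f_high)
    return (f_low, err_low), (f_high, err_high)
```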
OBSERVATIONAL RESULTS
Based on our observations, we now present the molecular line detections and non-detections in our sample. For molecules with multiple detections towards a source, we use the rotational diagram method to derive rotational temperatures and column densities. Finally, we calculate disk-averaged abundance ratios of CH 3 CN/HCN and HC 3 N/HCN for all disks by assuming a range of emission temperatures.
Molecule detections
A summary of the line observations is presented in Table 2. The CH 3 CN 14 0 -13 0 and HC 3 N 27-26 transitions were targeted towards all 6 disks and, compared to the other observed CH 3 CN and HC 3 N transitions, are the brightest across the sample. We therefore use these lines to classify whether molecules are detected or not detected in a disk. A molecule is considered detected if emission >3×rms is present within the Keplerian mask in at least three channels. We also tested a detection criterion of a SNR >3 for the integrated flux density, and find the same status of detection vs. non-detection for all transitions. Based on these criteria, HC 3 N is detected towards all disks except IM Lup. CH 3 CN is firmly detected towards three of six disks (HD 163296, MWC 480, and V4046 Sgr). In AS 209 and LkCa 15, CH 3 CN is not seen at significant levels in individual channel maps but shows suggestive features at around a 3σ level in the moment zero maps, as well as positive integrated intensities in the radial profiles. Higher-sensitivity followup observations are required to confirm if these are indeed detections. For subsequent analysis these lines are treated as non-detections (3σ upper limits). Figure 2 shows the disk continuum maps as well as the CH 3 CN 14 0 -13 0 and HC 3 N 27-26 line maps and spectra for each disk. In all cases the molecular emission is more compact than the continuum emission. Comparing the nitrile emission across the sample, V4046 Sgr is by far the strongest emitter, with strong detections of both CH 3 CN and HC 3 N. Next strongest are HD 163296 and MWC 480, which both host strong HC 3 N and moderate CH 3 CN emission. AS 209 and LkCa 15 both exhibit moderate HC 3 N and tentative CH 3 CN emission, and neither molecule is detected in IM Lup. Figure 3 shows the deprojected radial profiles for each transition. 
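The detection criterion above (emission >3×rms inside the Keplerian mask in at least three channels) can be expressed compactly; a hypothetical helper:

```python
import numpy as np

def is_detected(cube, mask, rms, n_sigma=3.0, min_channels=3):
    """Detection criterion from the text: emission above n_sigma x rms
    inside the Keplerian mask in at least min_channels channels.

    cube, mask : (n_chan, ny, nx) arrays; rms : per-channel noise."""
    bright = (cube > n_sigma * rms) & mask
    # count channels containing at least one bright masked pixel
    channels_hit = bright.any(axis=(1, 2)).sum()
    return int(channels_hit) >= min_channels
```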
The uncertainties are estimated by dividing the median moment zero rms by the square root of the number of independent measurements (i.e., the number of pixels at each radius divided by the beam size in pixels, to account for beam convolution). For both CH 3 CN and HC 3 N, almost all detections exhibit centrally peaked emission. The exception is CH 3 CN and HC 3 N in AS 209, which peak at larger radii, indicative of a ringed structure; this can also be seen in the moment zero map (Figure 2). Although it does not have a large central dust cavity, AS 209 also exhibits a ringlike structure in the molecules H 13 CN, HC 15 N, DCN, H 13 CO + , and DCO + (Huang et al. 2017), possibly due to dust opacity effects. Additionally, we note that when imaged with a smaller robust factor, HC 3 N and possibly CH 3 CN in V4046 Sgr show evidence of a central depression. Higher-resolution observations are required to confirm the morphology of these molecules at small scales.
In V4046 Sgr, MWC 480, and LkCa 15, multiple transitions from the same molecule were observed. Figure 4 shows the moment zero maps for these additional transitions. The CH 3 CN 14 1 -13 1 and 14 2 -13 2 lines were covered within the same spectral window as the 14 0 -13 0 transition, and in V4046 Sgr these higher-K lines were strong enough to be detected. Additionally, for MWC 480 and LkCa 15 the CH 3 CN 15 0 -14 0 and 16 0 -15 0 and HC 3 N 31-30 and 32-31 lines were observed in a separate program (Loomis et al. subm.). In MWC 480, CH 3 CN 15 0 -14 0 and HC 3 N 31-30 and 32-31 were detected. These additional lines were not detected in LkCa 15.
Population diagrams
For molecules with multiple detections, rotational diagrams can be used to determine the disk-averaged column densities and rotational temperatures of emission ( Figure 5). In most cases, only detected lines were used in fitting rotational diagrams; the exception is for HC 3 N in LkCa 15, as only a single line is detected. In this case the 3σ upper limits on the non-detected transitions were used to obtain an upper limit for the rotational temperature.
Assuming LTE and optically thin emission, the disk-integrated flux density S ν ∆V can be converted to an upper level population N u (Goldsmith & Langer 1999):

N_u = \frac{4 \pi \, S_\nu \Delta V}{A_{ul} \, \Omega \, h c}

where S ν is the flux density, ∆V is the line width, A ul is the Einstein coefficient and Ω is the solid angle of the source. For a disk-averaged column density, Ω is the same for each transition of a molecule. To estimate Ω for each molecule, we use the deprojected radial profile of the brightest line ( Figure 3) to identify the maximum angular extent of emission. Ω is taken to be the solid angle subtended by a circle with this radius. In turn, the total column density N T and rotational temperature T rot can be determined from the upper level populations given by the Boltzmann distribution:

\frac{N_u}{g_u} = \frac{N_T}{Q(T_{\rm rot})} \, e^{-E_u / T_{\rm rot}}

Here, g u is the upper state degeneracy, Q is the molecular partition function, and E u is the energy of the upper state (K). To calculate the partition functions for CH 3 CN and HC 3 N, we use the symmetric top and linear polyatomic approximations respectively (Gordy & Cook 1984; Mangum & Shirley 2015):

Q_{\rm CH_3CN} \approx \sqrt{\frac{\pi \,(k T_{\rm rot})^3}{h^3 A B^2}}, \qquad Q_{\rm HC_3N} \approx \frac{k T_{\rm rot}}{h B_0} + \frac{1}{3}

where A, B, and B 0 are the rotational constants for CH 3 CN and HC 3 N. The best-fit CH 3 CN and HC 3 N column densities and rotational temperatures are listed in Table 3. The rotational temperature of CH 3 CN in V4046 Sgr is 29 ± 2K and in MWC 480 is 73 ± 23K. By comparison, the rotational temperature of CH 3 CN was recently measured in the disk around the solar analog TW Hya to be 29K; follow-up chemical modeling predicted abundant gas-phase CH 3 CN between temperatures of ∼25-50K corresponding to the warm molecular layer of the disk (Loomis et al. in prep.). The measured rotational temperature in V4046 Sgr is consistent with these TW Hya results, while CH 3 CN in MWC 480 is warmer. MWC 480 is a Herbig Ae star and hosts a stronger radiation field than the T Tauri stars TW Hya and V4046 Sgr, which may explain the warmer emission.
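The rotational-diagram retrieval reduces to a linear least-squares fit of ln(N_u/g_u) against E_u; the sketch below uses SI constants and illustrative function names (note the line frequency cancels when the integrated flux is in velocity units):

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)

def upper_level_column(flux_jy_kms, a_ul, omega_sr):
    """Upper-state column density N_u (cm^-2) from a disk-integrated
    flux density (Jy km/s), assuming optically thin LTE emission."""
    s_dv = flux_jy_kms * 1e-26 * 1e3          # Jy km/s -> W m^-2 Hz^-1 m/s
    n_u = 4.0 * np.pi * s_dv / (a_ul * omega_sr * H * C)   # m^-2
    return n_u * 1e-4                          # cm^-2

def fit_rotational_diagram(n_u, g_u, e_u):
    """Fit ln(N_u/g_u) = ln(N_T/Q) - E_u/T_rot by linear least squares.
    e_u in K.  Returns (T_rot in K, ln(N_T/Q))."""
    y = np.log(np.asarray(n_u) / np.asarray(g_u))
    slope, intercept = np.polyfit(np.asarray(e_u), y, 1)
    return -1.0 / slope, intercept
```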
However, we emphasize that given the line blending of CH 3 CN (see Section 2.2) the rotational temperature is rather poorly constrained. Moreover, at densities below ∼10 7 cm −3 , the CH 3 CN and HC 3 N transitions targeted in this survey will be sub-thermally excited. Modeling in Loomis et al. (in prep.) suggests that CH 3 CN emission can arise from regions with densities down to 10 6 cm −3 ; therefore, the rotational temperatures listed in Table 3 may underestimate the kinetic temperature of the emission region. We note that in V4046 Sgr, the derived CH 3 CN rotational temperature also corresponds to the kinetic gas temperature: for symmetric top molecules, the populations of different K levels with the same J value are a direct result of collisions (e.g. Loren & Mundy 1984). This is not the case for MWC 480 since the multiple lines detected consist of different J levels within the same K ladder.
HC 3 N in MWC 480 has a measured rotational temperature of 49 ± 6K, consistent within the uncertainties with the warm CH 3 CN temperature found in the same source. Additional observations are needed to determine whether there is a real difference in the emission temperature (and therefore emission location) of CH 3 CN and HC 3 N within a disk. In LkCa 15 only one HC 3 N line was firmly detected, resulting in a rotational temperature upper limit of 103K.
Disk-averaged abundance ratios
Disk-averaged abundance ratios of HC 3 N and CH 3 CN with respect to HCN are calculated using the integrated fluxes listed in Table 2 and adopted rotational temperatures. H 13 CN integrated fluxes are taken from Guzmán et al. (2017); we assume a standard 12 C/ 13 C ratio of 70 (with an uncertainty of 15%) to convert to HCN column densities. HC 3 N/HCN and CH 3 CN/HCN abundance ratios are calculated for rotational temperatures of 30, 50, and 70K corresponding to the range of observed rotational temperatures. In this treatment we assume that HC 3 N and CH 3 CN are co-spatial with HCN. If all molecules are emitting from the molecular layer between the midplane and the disk atmosphere this is a reasonable approximation vertically, however differences in the radial extent of emission for each molecule may introduce some error into the abundance ratios. IM Lup is excluded from this analysis since only upper limits are available for all species.
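The ratio calculation with its error propagation can be sketched as below; the function name and argument values are illustrative, while the 12C/13C ratio of 70 with 15% uncertainty follows the text:

```python
import numpy as np

def abundance_ratio(n_x, n_x_err, n_h13cn, n_h13cn_err,
                    c12_c13=70.0, c12_c13_frac_err=0.15):
    """Disk-averaged X/HCN column density ratio.  H13CN is scaled to
    HCN with a 12C/13C ratio of 70 (15% uncertainty); fractional errors
    are propagated in quadrature."""
    n_hcn = n_h13cn * c12_c13
    ratio = n_x / n_hcn
    frac_err = np.sqrt((n_x_err / n_x) ** 2
                       + (n_h13cn_err / n_h13cn) ** 2
                       + c12_c13_frac_err ** 2)
    return ratio, ratio * frac_err
```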
The derived abundance ratios are listed in Table 4. For all rotational temperatures, HC 3 N is more abundant than CH 3 CN. The CH 3 CN abundance is on the order of a few percent with respect to HCN for all choices of rotational temperature. For a given rotational temperature, the CH 3 CN/HCN ratios are consistent within the uncertainties with a single value across the entire disk sample. The derived HC 3 N/HCN abundance ratios are a few percent for a 70K rotational temperature but significantly higher (∼50% for AS 209, LkCa 15, and V4046 Sgr, and over 100% for MWC 480 and HD 163296) for a 30K rotational temperature. In MWC 480, the only source with a well-constrained HC 3 N rotational diagram, the derived temperature is close to 50K; therefore, for at least the Herbig Ae stars, the 50K abundances (∼20%) are likely the most reliable.

Table 4. CH 3 CN/HCN and HC 3 N/HCN abundance ratios (%) for assumed rotational temperatures

Source       CH 3 CN/HCN (%)                 HC 3 N/HCN (%)
             30K        50K        70K       30K           50K         70K
V4046 Sgr    5.1 ± 1.8  2.7 ± 0.9  2.2 ± 0.8  37.9 ± 6.2    5.9 ± 1.0   2.7 ± 0.4
HD 163296    5.6 ± 1.6  2.9 ± 0.9  2.4 ± 0.7  134.1 ± 28.0  20.9 ± 4.4  9.4 ± 2.0
MWC 480      5.9 ± 1.9  3.1 ± 1.0  2.5 ± 0.8  121.9 ± 24.0  19.0 ± 3.7  8.5 ± 1.7

For disks with measured rotational temperatures, the closest corresponding abundance is marked in bold.
ABUNDANCE PROFILE MODELING
The strong emission of both CH 3 CN and HC 3 N in MWC 480 and V4046 Sgr enables a more detailed modeling of the radial abundance profiles within each disk.
V4046 Sgr model
The physical model for V4046 Sgr is adapted from the parametric model of the V4046 Sgr disk described in Rosenfeld et al. (2013b) and is further developed in Guzmán et al. (2017). For fitting CH 3 CN and HC 3 N emission, we use the same physical disk model as described in Guzmán et al. (2017).
Molecular abundance profiles are assumed to follow a power law (following e.g. Qi et al. 2008):

X(r) = X_{100} \left( \frac{r}{R_{100}} \right)^{\alpha}

where X is the abundance with respect to the total hydrogen density, X 100 is the abundance at the characteristic radius of R 100 = 100 AU, and α is the power law index. An outer radius cutoff R out of 100AU was adopted based on the extent of emission in the deprojected radial profile (Figure 3). In the disk atmosphere, photodissociation is assumed to destroy most molecules, and the molecular abundances are attenuated by a factor of 10 8 above z/r = 0.5. To account for depletion in the disk midplane, molecule abundances are attenuated by a factor of 10 3 at temperatures below 25K. This temperature does not correspond to a purely thermal freeze-out boundary for nitriles, but was chosen empirically based on the boundary where gas-phase CH 3 CN disappears in the chemical model presented in Loomis et al. (in prep.), and is also consistent with the ∼30K rotational temperature found for CH 3 CN in this disk. This depletion boundary is similar to the expected CO freeze-out temperature, and likely arises due to either a coincidence with the photodesorption boundary of the nitrile molecules, or an increase in gas-phase nitrile chemistry driven by CO sublimation.
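A sketch of this abundance prescription, with the z/r = 0.5 photodissociation cutoff and the factor-10^3 midplane depletion below 25K (function name and scalar/array handling are illustrative):

```python
import numpy as np

def nitrile_abundance(r_au, z_au, t_gas, x100, alpha,
                      r_out=100.0, t_dep=25.0):
    """Power-law abundance X(r) = X100 * (r / 100 AU)^alpha, attenuated
    by 1e8 above z/r = 0.5 (photodissociation), by 1e3 where the gas
    temperature is below t_dep (midplane depletion), and set to zero
    beyond the outer radius r_out."""
    r_au = np.asarray(r_au, dtype=float)
    x = x100 * (r_au / 100.0) ** alpha
    x = np.where(np.asarray(z_au) / r_au > 0.5, x / 1e8, x)
    x = np.where(np.asarray(t_gas) < t_dep, x / 1e3, x)
    return np.where(r_au > r_out, 0.0, x)
```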
CH 3 CN and HC 3 N were fit independently for the free parameters X 100 and α. While we use the same physical disk model as in Guzmán et al. (2017), the boundary conditions for the molecular abundance profiles are slightly different here; since we ultimately wish to normalize CH 3 CN and HC 3 N with respect to H 13 CN, we also re-fit the H 13 CN observations to ensure consistency.
The fitting was performed by generating synthetic observations of the brightest line emission, holding the gas density and temperature profiles constant and adopting the disk inclination, position angle, stellar mass, and systemic velocity listed in Table 1. The synthetic images had a spectral resolution of 0.5 km s −1 and spanned -2.0 -14.5 km/s for CH 3 CN (including both the 14 0 -13 0 and 14 1 -13 1 transitions) and -3.0 -9.0 km/s for HC 3 N. The radiative transfer code RADMC-3D (Dullemond 2012) was used to calculate level populations for each synthetic image assuming LTE conditions. The vis_sample Python package (Loomis et al. subm.) was then used to sample the synthetic image at the u − v points of the observations. The likelihood function was calculated from the weighted difference between observations and the model in the u − v plane. The affine-invariant MCMC package emcee (Foreman-Mackey et al. 2013) was used to sample posterior distributions of both parameters X 100 and α. A flat prior was used for each parameter when generating new samples: 10 −20 < X 100 < 10 −8 and -3 < α < 2. The resulting best-fit values are listed in Table 5. Figure 6 shows channel maps of the observations along with the best-fit model and residuals for CH 3 CN and HC 3 N (H 13 CN and CH 3 CN 14 2 -13 2 can be found in Appendix B); the observations are well reproduced by the model.
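The prior and likelihood used in this fit can be sketched as below. Here model_vis_fn is a stand-in for the RADMC-3D imaging plus vis_sample u-v sampling step, and the log-probability function would be handed to emcee's EnsembleSampler; names are illustrative.

```python
import numpy as np

def log_prior(theta):
    """Flat priors from the text: 1e-20 < X100 < 1e-8 and -3 < alpha < 2."""
    x100, alpha = theta
    if 1e-20 < x100 < 1e-8 and -3.0 < alpha < 2.0:
        return 0.0
    return -np.inf

def log_probability(theta, vis_obs, weights, model_vis_fn):
    """Log-posterior: flat prior plus a Gaussian likelihood on the
    weighted complex-visibility residuals."""
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    resid = vis_obs - model_vis_fn(theta)
    return lp - 0.5 * np.sum(weights * np.abs(resid) ** 2)

# e.g. sampler = emcee.EnsembleSampler(nwalkers, 2, log_probability,
#                                      args=(vis_obs, weights, model_vis_fn))
```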
To ensure that the choice of depletion boundary does not significantly impact our results, the models were also run with an adopted depletion boundary of 19K and 30K. For CH 3 CN the best-fit X 100 at 19K and 30K are within 25% of the 25K value, and the best-fit α are within 15%. For HC 3 N there was less than 3% change for both X 100 and α. The results are therefore not highly sensitive to the choice of depletion boundary. To further confirm the derived abundances we use the best-fit X 100 and α values from the CH 3 CN 14 0 -13 0 line to create a synthetic image of the 14 2 -13 2 transition. The higher-frequency transition is well reproduced by the lower-frequency best-fit values (shown in Appendix B), indicating that the adopted model is appropriate.
Since V4046 Sgr is a transition disk with a large inner gap, we also test whether a model with a large cavity produces a better fit to the observations. The fiducial model uses a cavity radius of 3 AU based on the CO line emission and SED of V4046 Sgr (Rosenfeld et al. 2013b). An adopted 29 AU radius, corresponding to the mm dust radius hole, produces a worse fit to the data for both molecules.

MWC 480 model

In Öberg et al. (2015), the CH 3 CN 14 0 -13 0 and HC 3 N 27-26 profiles in MWC 480 were fit by obtaining the minimum χ 2 value from a grid of calculated abundance models. Here, we adopt the same parametric physical model for the disk density and temperature described in that paper but use the MCMC fitting procedure described in the previous section to constrain X 100 and α. This allows us to better explore the parameter space and therefore to obtain more robust constraints on the fit parameters and their uncertainties. Also in contrast to Öberg et al. (2015), we use RADMC-3D instead of the non-LTE LIME code for radiative transfer calculations to ensure consistency with the V4046 Sgr results.
To retrieve molecular abundances, we use the same power-law prescription (Equation 5) as in Section 4.1. Again the abundances are attenuated above a z/r of 0.5 to account for photo-destruction. We adopt a depletion boundary of z/r < 0.05 which roughly corresponds to the cutoff in the V4046 Sgr model. As in the V4046 Sgr model, this does not correspond to the nitrile freeze-out boundary but likely corresponds to where photodesorption of nitriles and/or CO-driven gas phase chemistry become efficient. A higher cutoff of z/r < 0.2 was also tested, corresponding to the lower boundary of CH 3 CN emission in the model of Loomis et al. (in prep.). An outer radius R out of 180 AU was chosen, corresponding to the extent of emission in the MWC 480 radial profiles ( Figure 3). The MCMC fitting procedure is otherwise the same as described above. CH 3 CN, HC 3 N, and H 13 CN were each fit with spectral resolutions of 0.5km/s and velocity ranges of 1-9.5km/s, 0-14.5km/s, and 0-14.5km/s, respectively. The resulting model channel maps and residuals are shown in Appendix B. The bestfit values of X 100 and α are listed in Table 5. For both V4046 Sgr and MWC 480, CH 3 CN and HC 3 N appear to have increasing or flat abundance profiles, while H 13 CN shows a decreasing abundance profile.
As for V4046 Sgr, we can use the constraints from additional lines to confirm the validity of the best-fit model. For both depletion boundaries of z/r < 0.05 and z/r < 0.2 we created synthetic images of the upper-level CH 3 CN and HC 3 N lines based on the best-fit X 100 and α values from the lower-level lines. Rotational diagram calculations were performed for CH 3 CN and HC 3 N using the modeled fluxes. We obtain very similar rotational temperatures for z/r < 0.05 and z/r < 0.2 models: 20K and 23K for CH 3 CN, and 44K and 45K for HC 3 N, respectively. The HC 3 N model temperatures are very close to the observed values, while the CH 3 CN temperatures are low. Because the modeled CH 3 CN rotational temperature is not substantially improved by increasing the z/r cutoff, a more complex parametric model is likely required to fully describe the disk physical and/or abundance structure. For instance, modeling of H 2 CO emission in TW Hya required both a hot inner component and a cool extended component to match observations (Öberg et al. 2017); the possible presence of a warm inner component in MWC 480 would not be captured by the single power-law profile used in our models, resulting in a potential under-prediction of the observed rotational temperature. We emphasize, however, that the observed rotational temperature of CH 3 CN in MWC 480 is not well constrained due to a small lever arm in upper energy levels as well as line blending uncertainties, and may in reality be closer to the modeled rotational temperature. Of the two depletion boundaries, the z/r < 0.05 cutoff produced better fits to the higher-J lines as determined by the reduced χ 2 , and therefore all future discussion pertains to the z/r < 0.05 model results. Channel maps for these models are shown in Appendix B.
CH 3 CN and HC 3 N column densities and abundances
The best-fit CH 3 CN and HC 3 N abundance profiles derived for V4046 Sgr and MWC 480 are shown in Figure 7, along with the derived radial column density profiles. For comparison, we also show the column densities of HC 3 N and CH 3 CN predicted by a disk chemistry model for a generic T Tauri star and disk (Walsh et al. 2014). CH 3 CN column densities from the disk chemistry model are within an order of magnitude of those derived in this work for V4046 Sgr and MWC 480, while HC 3 N column densities are under-estimated by over an order of magnitude in the model. However, neither V4046 Sgr nor MWC 480 is well-described by the disk physical structure adopted by Walsh et al. (2014), and further tuning of models is required to make conclusive comparisons. Comparing the relative shapes of the radial column density profiles, the extremely centrally peaked profile in the model is not reproduced in the profiles derived in this work. However, we note that due to the high upper-state energies of the lines fitted (92K for CH 3 CN and 165K for HC 3 N), our observations are not sensitive to cool material and therefore may not reflect the true spatial distributions of the molecules.
To illustrate this, we use the modeled best-fit abundance profiles to determine the fraction of emitting molecules (i.e., molecules in the upper energy state of the observed transition) in temperature bins from 10 to 200K, following Bergin et al. (2013). The number density of a species in the upper energy state n u is related to the total number density n T by the Boltzmann distribution (Equation 2). n u is integrated over the disk to find the number of upper-state molecules in target temperature bins. The fraction of upper-state molecules in each temperature bin relative to the total number of upper-state molecules, f u (T ), is shown in Figure 8 as a cumulative distribution function.
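Given a model grid, the f_u(T) cumulative distribution is a temperature-weighted tally of upper-state molecules; a minimal sketch with illustrative names:

```python
import numpy as np

def emitting_fraction_cdf(n_u, t_gas, cell_vol, t_bins):
    """Cumulative fraction f_u(T) of upper-state molecules at gas
    temperatures <= each bin edge, following the Bergin et al. (2013)
    construction.

    n_u      : upper-state number density per model cell (from the
               Boltzmann distribution, Eq. 2)
    t_gas    : gas temperature per cell (K)
    cell_vol : cell volumes (consistent units)"""
    n_mol = np.asarray(n_u) * np.asarray(cell_vol)   # molecules per cell
    total = n_mol.sum()
    return np.array([n_mol[np.asarray(t_gas) <= t].sum() / total
                     for t in t_bins])
```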
For both CH 3 CN and HC 3 N in V4046 Sgr and MWC 480, roughly half of the emitting molecules are in gas warmer than 50K, with virtually no contribution from 30K gas. This indicates that our observations are mostly probing warm emission. Since most gas in disks exists at <50K temperatures, there may be a substantial amount of material that our observations are not sensitive to. As further discussed in Section 5, follow-up observations of lower-J transitions of HC 3 N and CH 3 CN will be helpful in addressing this issue. Because the retrieved abundance profiles may depend on the physical disk structure assumed in the model, we also present the CH 3 CN and HC 3 N abundance profiles normalized with respect to HCN. This should be less sensitive to the details of the physical model structure assuming that all three molecules are emitting cospatially; this is already implicit in our model since we used the same freeze-out and photo-dissociation boundary conditions to retrieve molecular abundances within a given source. H 13 CN is converted to HCN using the standard isotopic ratio of 70 with an uncertainty of 15%.
The resulting CH 3 CN and HC 3 N abundance profiles with respect to HCN in V4046 Sgr and MWC 480 are shown in Figure 9. The derived gas-phase abundances with respect to HCN in the inner 100AU of the disks are on the order of a few percent for CH 3 CN and a few tens of percent for HC 3 N. These model-derived inner disk abundances are consistent with the range of disk-averaged abundances calculated assuming 30-70K rotational temperatures (Table 4).
DISCUSSION
In a sample of 6 protoplanetary disks, we have detected the complex nitrile molecules HC 3 N and CH 3 CN in five and three disks, respectively. These molecules therefore appear common in other nascent planetary systems. The disks in our sample host a range of physical conditions, which can be used to evaluate the nitrile chemistry in disks. We begin by surveying the possible origins of complex nitriles in disks, followed by an evaluation of our source sample.
Nitrile formation in disks
Chemical pathways

HC 3 N is proposed to form efficiently in the gas phase, via either CN + C 2 H 2 or HCN + C 2 H (Fukuzawa & Osamura 1997), and has no known efficient grain surface formation pathways. In contrast, CH 3 CN can form via both gas-phase and grain-surface processes. In current astrochemistry codes, the dominant gas phase CH 3 CN formation channel is CH 3 + + HCN followed by dissociative recombination. On grain surfaces, CH 3 + CN or hydrogenation of C 2 N are proposed to be efficient at forming CH 3 CN (Huntress & Mitchell 1979; Walsh et al. 2014). Current evidence suggests that grain-surface chemistry is a primary contributor of CH 3 CN in disk environments: in Öberg et al. (2015), a gas-phase only chemical model failed to reproduce observed abundances of CH 3 CN/HCN towards MWC 480, implying a significant contribution from grain-surface chemistry to the observed gas-phase abundances. Likewise, Loomis et al. (in prep.) test a gas-phase only and gas-grain model and find that grain chemistry is required to reproduce observed CH 3 CN abundances in the disk around TW Hya. We note that all of the proposed formation mechanisms of CH 3 CN have estimated rate constants and have not been experimentally validated, and there may be important contributions from as yet unexplored chemistry that leads to CH 3 CN formation.
Nitrile abundance correlations
To explore the relationship among different nitrile-bearing species in our disk sample, Figure 10 shows the distance-normalized fluxes of the CH 3 CN 14 0 -13 0 and HC 3 N 27-26 transitions each plotted against the H 13 CN 3-2 transition.
The HC 3 N emission strength has no clear relation to H 13 CN. However, interpreting this lack of correlation is complicated by the high upper energy of the HC 3 N 27-26 line: with an excitation temperature of 165K, this transition is not sensitive to cool HC 3 N molecules which may be abundant in some disks. Indeed, the enhanced HC 3 N emission around the Herbig Ae stars compared with the T Tauri stars is consistent with a thermal effect (Figure 10), as there will be more hot molecular material around more luminous stars. Observations of lower-J HC 3 N transitions are needed to determine whether the HC 3 N chemistry is related to the other nitrile chemistry in disks.
From the available data, CH 3 CN emission appears to correlate with H 13 CN, although more detections are required to establish if this relationship is real. We note that CH 3 CN exhibits no such correlation with C 18 O emission strength; the tentative correlation with H 13 CN is therefore not simply a trend with the amount of gas in the disk. If additional data points confirm this correlation, it may be evidence for an active gas-phase contribution to CH 3 CN formation, as this is currently the only chemical pathway with a direct link between HCN and CH 3 CN. Other potential gas-phase and grain-surface channels to CH 3 CN formation which could explain a correlation with HCN but are not currently included in models should be also explored.
Nitrile spatial correlations
Correlations in the spatial extent of molecules within a disk can also be used to constrain their formation chemistry. Across the disk sample, the spatial distributions of CH 3 CN and HC 3 N ( Figure 2) as well as H 13 CN (see Guzmán et al. 2017 for H 13 CN maps) are all compact, typically well within the bounds of the dust continuum. This spatial similarity of nitrile emission within each disk is consistent with a chemical scheme in which CH 3 CN and HC 3 N depend on abundant HCN (or its photo-product CN) to form. We note that this is a stronger constraint for CH 3 CN and H 13 CN than for HC 3 N because, as discussed above, the emission from high-J transitions of HC 3 N may not reflect the true distribution of molecules within the disk, and lower-energy transitions are required to confirm a compact distribution.
Determining whether the spatial distributions of HC 3 N and its proposed precursor C 2 H are related will provide important constraints on the HC 3 N formation chemistry. Spatially resolved observations of C 2 H towards TW Hya show a ringed structure peaking near the edge of the sub-millimeter continuum; DM Tau similarly demonstrates an outer ring near the dust edge and an inner ring co-spatial with the continuum (Kastner et al. 2015; Bergin et al. 2016). A ringed morphology may be a feature of hydrocarbons more generally, and indeed is reproduced for the slightly larger hydrocarbon C 3 H 2 (Bergin et al. 2016). The apparent anti-correlation of HC 3 N and C 2 H suggests that the proposed HC 3 N formation pathway of C 2 H + HCN may not be efficient. If the ringed morphology is common to all hydrocarbons, this is also problematic for the C 2 H 2 + CN pathway, although the relationship between C 2 H and C 2 H 2 distributions is unconstrained. Yet again we note that due to the high-energy HC 3 N transitions and comparatively low angular resolution of our observations we cannot exclude the possibility of HC 3 N rings. Higher-resolution observations of lower-J HC 3 N transitions combined with C 2 H observations in the same sources will be important for constraining whether the current HC 3 N formation paradigm is viable.
Physical drivers of nitrile chemistry in disks
The physical conditions of a protoplanetary disk set what chemistry can occur; while myriad properties have been proposed as chemical drivers in disks, we focus here on those which are to some degree testable based on the properties of our sample, namely the radiation field, disk structure, and evolutionary stage.
Radiation field
The quiescent luminosity and accretion luminosity of a host star both contribute to the overall radiation environment in a disk. The FUV emission in T Tauri stars arises dominantly from accretion luminosity, while Herbig Ae stars should have significant FUV contributions from quiescent stellar photospheric emission in addition to accretion luminosity (e.g. Kurucz 1993; Matsuyama et al. 2003). Figure 11a-b shows the distance-normalized disk-integrated fluxes of CH 3 CN 14 0 -13 0 and HC 3 N 27-26 plotted against the quiescent luminosity and the mass accretion rate of each star. For comparison, the CH 3 CN 14 0 -13 0 flux calculated for TW Hya based on the observations of Loomis et al. (in prep.) is also included, with a bolometric luminosity and mass accretion rate taken from van Boekel et al. (2017) and Herczeg & Hillenbrand (2008) respectively. For CH 3 CN we do not see any obvious trends with L or Ṁ. This lack of correlation with the radiation field could indicate emission from the colder UV-shielded layers of the disk, but is also possibly due to the small number of CH 3 CN detections. HC 3 N appears to correlate with both the stellar luminosity and the mass accretion rate, suggesting that the UV field may play an important role in driving its chemistry. However, due to the high upper energy of the 27-26 transition, this could be due in part to the presence of hotter gas in high-UV environments rather than increased abundances of HC 3 N; observations of lower-J HC 3 N lines will be able to break this degeneracy.
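The distance normalization used for Figure 11 follows directly from the inverse-square scaling of flux. A minimal sketch (the function name and example values are illustrative, not taken from the survey):

```python
def normalize_flux(flux, distance_pc, ref_pc=140.0):
    """Rescale a disk-integrated line flux to a common reference distance.

    Observed flux scales as 1/d^2, so rescaling a source at distance_pc
    to ref_pc multiplies the flux by (distance_pc / ref_pc)**2.
    """
    return flux * (distance_pc / ref_pc) ** 2

# a disk at 280 pc appears 4x fainter than it would at the 140 pc reference
flux_at_140pc = normalize_flux(1.0, 280.0)
```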
Disk age
As disks evolve, processes such as viscous accretion, dust growth/settling, and radial drift reshape the physical structure of the disk (reviewed in Williams & Cieza 2011). Astrochemical modelers have recently begun to explore how a dynamically evolving disk impacts the chemistry, with a particular focus on the C/O ratio over time (Piso et al. 2015;Eistrup et al. 2017). Modeling by Du et al. (2015) shows that the abundance of nitrile species can be greatly enhanced as a result of gas-phase carbon and oxygen depletion: as the system ages and more CO and H 2 O are depleted from the gas-phase, the nitrile abundances should correspondingly increase. Observationally, Kastner et al. (2014) observed enhanced CN abundances towards the evolved disks around TW Hya and V4046 Sgr. Figure 11c shows the CH 3 CN 14 0 -13 0 and HC 3 N 27-26 integrated fluxes normalized to a distance of 140pc and plotted against the disk age. Again, the CH 3 CN 14 0 -13 0 flux in TW Hya calculated from the observations of Loomis et al. (in prep.) is included for comparison. V4046 Sgr, the oldest disk in the sample, shows anomalously high CH 3 CN emission. However, in all other disks CH 3 CN detections and upper limits are fairly clustered, showing no obvious trend with age. Likewise, HC 3 N emission does not appear related to disk age. With the existing data there is therefore insufficient evidence for an evolutionary trend in nitrile emission. We note that the discrepancy in CH 3 CN emission between V4046 Sgr and TW Hya is somewhat surprising given that the line intensities of other small molecules in the two disks are quite similar (Kastner et al. 2014).
Inner dust cavity
The disk structure sets how radiation is processed through the disk. Transitional disks, characterized by inner gaps in mm dust emission, may host a distinct chemistry due to increased UV radiation in the inner disk (e.g. Cleeves et al. 2011). LkCa 15 and V4046 Sgr are both transition disks and yet exhibit very different nitrile chemistries: V4046 Sgr is strongly detected in both CH 3 CN and HC 3 N, while LkCa 15 is weakly detected in HC 3 N and tentatively or not detected in CH 3 CN. There is therefore no strong global impact of an inner cavity on the disk's nitrile chemistry; observations towards other transition disks are needed to confirm this in a larger sample. On smaller scales, we expect that the presence of an inner gap would result in warmer gas and a higher UV field within the cavity. Suggestively, there is a slight peak in the radial profile of CH 3 CN in LkCa 15 out to ∼50 AU scales (Figure 3), consistent with the cavity radius derived by Piétu et al. (2006); however, this emission is not significant at the 3σ level and therefore no firm conclusions can be drawn.
Nitriles in different circumstellar environments
We now compare the disk-averaged abundance ratios for our sample with the abundances measured in similar objects at different evolutionary stages. Low-mass protostars are the evolutionary precursors to the <2 M☉ stars in this sample, while comets formed out of the midplane of the protosolar nebula and should preserve material from the time of planet formation. We note the environments in these different types of objects span a wide range of temperatures, densities, radiation fields, and other physical conditions. As discussed in Section 4.3, for V4046 Sgr and MWC 480 the model-derived CH 3 CN/HCN and HC 3 N/HCN abundances in the inner 100 AU are consistent with the range of disk-averaged abundances calculated for 30K-70K rotational temperatures. In this section, we therefore use the range of disk-averaged abundances as representative of the inner 100 AU of the disk in order to compare across the entire disk sample. Figure 12a-b show the range of CH 3 CN/HCN and HC 3 N/HCN abundances measured in solar system comets (Mumma & Charnley 2011) compared to the gas-phase abundances measured in our disk sample. The disk abundances of CH 3 CN are within a few percent of the values measured in solar system comets for sources with detections. The upper limits for AS 209 and LkCa 15 are somewhat lower but still possibly within a few percent of cometary.
For HC 3 N, the disk abundances are up to an order of magnitude higher than cometary for an adopted 30K rotational temperature; however, given the warmer (50K) HC 3 N rotational temperature derived for MWC 480, we expect that the abundances calculated assuming a 30K temperature over-estimate the HC 3 N/HCN ratio for the Herbig Ae disks at least. Restricting the comparison to the 50-70K values, the HC 3 N abundances are quite close to cometary. Figure 12c shows the range of CH 3 CN/HC 3 N ratios measured in a sample of 16 low-mass protostellar envelopes (Bergner et al. 2017). HCN column densities towards these sources are not available, however the CH 3 CN/HC 3 N ratio still provides a useful proxy for the relative efficiency of different complex nitrile chemistries. The ratios across the disk sample are mostly consistent with the values measured in protostellar envelopes, with the exception of V4046 Sgr which is somewhat enhanced in CH 3 CN/HC 3 N compared to the other disks and protostars.
Based on this comparison, we see that the gas-phase nitrile abundances relative to other N-bearing molecules are consistent across various physical environments: disk molecular layers, protostellar envelopes, and the midplane of the solar nebula. We note that with these observations alone we cannot directly compare the comet-and planet-forming material in our sample with that of the protosolar nebula, as this would require extrapolations (i) from the molecular layer down to the midplane, and (ii) from gas-phase to ice abundances. Nonetheless, the consistency of nitrile abundances across a wide range of physical conditions demonstrates a robust nitrile chemistry with similar outcomes in different environments. CH 3 CN abundance ratios (and upper limits) in particular appear to be especially regular both across the disk sample and in comparison with comets and protostars. Complex nitrile species should therefore be reliably produced in a variety of different star-and planet-forming environments.
While the abundances of N-bearing molecules appear internally consistent across a range of physical environments, there is evidence that the ratio of N- to O-bearing COMs in disks is distinct compared to other environments. In both comets and protostellar envelopes the CH 3 CN/CH 3 OH ratio is typically on the order of a few percent (Mumma & Charnley 2011; Bergner et al. 2017). By contrast, in the one disk where CH 3 OH has been detected (TW Hya), the column density ratio of CH 3 CN/CH 3 OH is about unity (Walsh et al. 2016; Loomis et al. in prep.), indicative of an oxygen-poor chemistry. Similarly, our observations covered a number of CH 3 OH transitions in the 5-4 ladder, with no CH 3 OH detections despite the strong nitrile emission. This suggests that the under-abundance of gas-phase O- vs. N-bearing COMs is systematic in disks. A nitrogen-rich, oxygen-poor chemistry is qualitatively consistent with an oxygen-starved environment due to e.g. the depletion of H 2 O and CO from the gas phase (Du et al. 2015). Such a scenario would indicate a predominantly gas-phase formation pathway for CH 3 CN in disks. Another possible factor is if the photodesorption efficiency of intact CH 3 CN is high compared to CH 3 OH, which has been shown to photodesorb mainly as fragments (Bertin et al. 2016; Cruz-Diaz et al. 2016). Since photodesorption from grains is thought to be of primary importance in disks, compared to mainly thermal desorption in protostars and comets, this could also contribute to the observed discrepancy in CH 3 CN/CH 3 OH across circumstellar environments. Further exploration of the nitrile formation chemistry in disks using astrochemical models is needed to resolve the origin of this unique chemistry.
CONCLUSIONS
Based on ALMA observations of the complex nitrile species CH 3 CN and HC 3 N towards six protoplanetary disks, we conclude the following:

1. Complex nitrile molecules are commonly observed in protoplanetary disks, with five of six disks detected in HC 3 N and three of six disks detected in CH 3 CN.
2. Rotational temperatures derived for sources with multiple line detections are consistent with emission from the temperate molecular layer of the disk. V4046 Sgr exhibits cool (29 ± 2K) CH 3 CN emission consistent with the temperature measured in TW Hya (Loomis et al. in prep.). CH 3 CN and HC 3 N in MWC 480 are both characterized by warmer emission, with rotational temperatures of 73 ± 23K and 49 ± 6K respectively. The increased radiation field around Herbig Ae disks compared to T Tauri disks may be responsible for this difference.
3. Parametric models of the CH 3 CN, HC 3 N, and H 13 CN abundances in MWC 480 and V4046 Sgr are used to fit the observed emission and constrain radial abundance profiles. Within 100AU, CH 3 CN/HCN abundances are on the order of a few percent and HC 3 N/HCN abundances on the order of tens of percent.
4. Across the disk sample we observe a tentative correlation of CH 3 CN with H 13 CN emission; if confirmed by further detections, the formation chemistry of CH 3 CN should be revisited to explain this relationship. We see evidence for a possible anti-correlation in the spatial distributions of HC 3 N and its precursor C 2 H; if confirmed in lower-J HC 3 N transitions this would seem to rule out the current proposed HC 3 N formation path.
5. We use the heterogeneous physical properties of our disk sample to explore whether the UV field, disk age, or presence of an inner dust cavity impact the nitrile chemistry. We observe no strong trends relating these environmental properties to the nitrile emission strength. We emphasize the need for observations of lower-energy HC 3 N lines to help constrain any relationships with disk physical properties.
6. Disk-averaged CH 3 CN and HC 3 N abundances relative to other N-bearing molecules are compared to values measured in solar system comets and protostellar envelopes and found to be consistent across these different environments, although the HC 3 N/HCN uncertainties are large due to sensitivity to the adopted rotational temperature. These molecules appear to be reliably produced under a wide variety of physical conditions, demonstrating a robust nitrogen chemistry with similar outcomes in different environments.
7. Our results are suggestive of a disk chemistry systematically rich in N-bearing relative to O-bearing COMs when compared to other circumstellar environments. The origin of this unique chemistry observed in disks compared to other stages of star and planet formation remains to be resolved.
Evaluation of the Hydration Characteristics and Anti-Washout Resistance of Non-Dispersible Underwater Concrete with Nano-SiO2 and MgO
In this paper, the effect of nano-SiO2 (NS) and MgO on the hydration characteristics and anti-washout resistance of non-dispersible underwater concrete (UWC) was evaluated. A slump flow test, a viscosity test, and setting time measurement were conducted to identify the impacts of NS and MgO on the rheological properties of UWC. The pH and turbidity were measured to investigate the anti-washout performance of UWC mixes. To analyze the hydration characteristics and mechanical properties, hydration heat analysis, a compressive strength test, and thermogravimetric analyses were conducted. The experimental results showed that the fine particles of NS and MgO reduced slump flow, increased viscosity, and enhanced the anti-washout resistance of UWC. In addition, both NS and MgO shortened the initial and final setting times, and the replacement of MgO specimens slightly prolonged the setting time. NS accelerated the peak time and increased the peak temperature, and MgO delayed the hydration process and reduced the temperature due to the formation of brucite. The compressive results showed that NS improved the compressive strength of the UWC, and MgO slightly decreased the strength. The addition of NS also resulted in the formation of extra C–S–H, and the replacement of MgO caused the generation of a hydrotalcite phase.
Introduction
As bridges continue to age, experts have become increasingly concerned about their structural safety and suggest strengthening procedures. However, structures located underwater are more difficult to strengthen than superstructures. The existing strengthening methods [1][2][3][4][5][6] for substructures are costly, disrupt traffic, and require long construction times. To address these shortcomings of conventional strengthening methods, the FRP underwater strengthening method [7], the jacket strengthening method [8], and the precast concrete segment assembly method [9] have been proposed. Underwater concrete (UWC) is commonly used in these methods, and its performance is important in determining strengthening efficiency [10,11].
UWC is generally produced using viscosity-modifying admixtures (VMAs) and anti-washout admixtures (AWAs) to fabricate viscous concrete mixes [12]. It is directly poured into water, and washout resistance is a significant factor that determines its strength, durability, and workability. In underground engineering, UWC with anti-washout characteristics is used due to its excellent viscosity, low dispersibility, and low water pollution potential [13]. As UWC is based on self-compacting concrete, its rheological properties must be adequately maintained to ensure higher washout resistance.
Several attempts have been made to control the rheological properties and washout resistance of UWC. Park et al. [14] studied the various contents of AWA and its superplasticizer, and found that the optimum content of UWC is 1% AWA and 5.5% superplasticizer.
Kumar et al. [15] mentioned that a combination of AWAs can postpone the setting time and strengthen the rheological properties of UWC. Khayat et al. [16] achieved higher anti-washout resistance using fly ash and silica fume as viscosity-enhancing agents. In addition, Grzeszczyk et al. [17] used a nanomaterial to enhance the anti-washout resistance of UWC. Sonebi et al. [18] used 8% silica fume and 20% cement binder instead of fly ash to control the workability and stability of UWC, which also decreased the segregation coefficient, washout loss, and surface settlement. Kojouri et al. [19] used lime powder as a mineral additive in UWC and found that it enhanced washout resistance, workability, and compressive strength, and accelerated the hydration reaction.
Nano-SiO 2 (NS) is the common name for nano-sized silica particles with a diameter between 5 nm and 100 nm. NS particles are about 1000 times smaller than OPC particles [20]. Previous studies [21,22] found that NS can strengthen the cement matrix due to its smaller particles, high pozzolanic activity, and ability to form dense and compact microstructures. Because of these properties, many studies have focused on enhancing the mechanical performance and durability of cement composites with different percentages of NS [23][24][25][26][27][28]. However, only a few studies [17] have used NS to control the washout resistance of UWC. Therefore, further specific investigations and analyses of the fluidity and anti-washout resistance of UWC with NS are needed.
Magnesium oxide (MgO) is widely used in concrete manufacturing. Previous studies have demonstrated that the hydration and expansion characteristics of MgO positively affect concrete by reducing the hydration heat, decreasing the shrinkage, and changing the porosity and pore size distribution [29][30][31][32]. As the anti-washout resistance of concrete depends on the fine fraction of its binder material, various very fine mineral additives have been suggested to control the anti-washout properties of UWC [17]. Fly ash, silica fume, and ground granulated blast furnace slag have generally been used as mineral additives in UWC mixes [33,34]. Mineral admixtures are therefore widely incorporated around the world to produce high-quality non-dispersible UWC. Although much attention has been given to fly ash, silica fume, and granulated blast furnace slag, there are few data available on the effects of NS and MgO on the anti-washout resistance of UWC.
In summary, this study investigated the effects of NS and MgO as mineral admixtures on the workability, washout properties, strength development, and hydration process of UWC. A slump flow test and a viscosity test were conducted to analyze these effects on the workability of UWC; pH and turbidity measurements tested the anti-washout resistance of UWC; the setting time and isothermal conduction calorimetry were used to determine the hydration characteristics of UWC; and a compressive strength test measured the mechanical properties of UWC.
Materials
Ordinary Portland Cement (OPC) obtained from A company (Seoul, Korea) that complies with ASTM C-150 [35] was used as the primary binding material. Fine and coarse aggregates with maximum sizes of 5 mm and 25 mm, respectively, were used. The fine aggregate had a fineness modulus (F.M.) of 2.7 and a water absorption capacity of 1.05%, and the coarse aggregate had a specific gravity of 2.63, an F.M. of 7.01, an abrasion of 45%, and a water absorption capacity of 1.02%. To achieve the needed fluidity of UWC, a poly-carboxylate-based superplasticizer (SP) from E company in Seoul was used. Commercially available hydroxypropyl-methylcellulose (HPMC)-based anti-washout admixture (AWA) from a local E company in Seoul was used to produce non-dispersible UWC mixes. Magnesium oxide (MgO) with a purity of 80.1%, specifically light-burned magnesia (LBM), sourced from R company in Gwangyang, Korea, was used as the binder replacement material. The gradation curves for the OPC and MgO used are shown in Figure 1. The gradation curves indicate that the MgO particles have a smaller size compared to OPC. Nano-SiO 2 (NS) powder with a purity of 99.5% and average particle sizes of approximately 20-40 nm was used as an additional binder material.
Mix Proportions and Fabrication of Specimens
Five mix proportions of non-dispersible UWC and control concrete mixtures were prepared. Table 1 shows the mix proportions of all the concrete mixes produced in this study, which were classified into two types. The NS series denotes the samples with NS powder, and the M series represents the samples with MgO. The numbers after NS and M indicate the dosage: 0%, 5%, and 10% of the weight of OPC was sequentially replaced with MgO, and NS powder was sequentially added at 0%, 1%, and 2% of the OPC weight. A water-binder (sum of OPC and MgO) ratio of 0.45, a superplasticizer content of 2% of the weight of the binder, and an AWA content of 0.5% of the weight of the binder were used, based on a standard 25 MPa target-strength UWC mix obtained from a local concrete company. Concrete cylinders with dimensions of Φ 100 mm × 200 mm were prepared following ASTM C 31 [36]. The casting method of the anti-washout UWC was adopted from a previous study [37]. The UWC specimens were demolded after they were cured for 24 h and placed in a water curing condition until the set curing age was reached.
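The batch quantities implied by the dosing rules above can be reproduced with a short helper. The sketch below assumes a hypothetical binder content of 400 kg/m³ (this excerpt does not report absolute batch quantities); MgO replaces part of the OPC, NS is dosed as a percentage of the OPC weight, and water, SP, and AWA are dosed against the total binder:

```python
def mix_masses(binder_kg=400.0, mgo_frac=0.0, ns_frac=0.0,
               wb=0.45, sp_frac=0.02, awa_frac=0.005):
    """Batch masses for one UWC mix.

    binder = OPC + MgO, as defined in the paper; mgo_frac is the OPC
    replacement level (0, 0.05, 0.10) and ns_frac the NS addition
    expressed as a fraction of the OPC weight (0, 0.01, 0.02).
    """
    mgo = mgo_frac * binder_kg
    opc = binder_kg - mgo
    return {
        "OPC": opc,
        "MgO": mgo,
        "NS": ns_frac * opc,         # added on top of the binder
        "water": wb * binder_kg,     # w/b = 0.45
        "SP": sp_frac * binder_kg,   # 2% of binder weight
        "AWA": awa_frac * binder_kg, # 0.5% of binder weight
    }

m10 = mix_masses(mgo_frac=0.10)  # the M10 mix at the assumed binder content
```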
Experiment Methods
The concrete mixes were prepared with a laboratory-scale concrete mixer (MR500, Inter, Korea). The fresh properties of the UWC mixes were determined by testing their workability. The slump test was conducted according to ASTM C 143 [38]. The viscosity values of the UWC mixes with NS and MgO were assessed with a rheometer (Rheometer R/S Plus, Brookfield, WI, USA). To observe the setting time of the fresh UWC mixes, paste samples were obtained after mixing the UWC concrete, and the initial and final setting times were investigated using Vicat needles [39,40] according to ASTM C 191 [41].
To measure the anti-washout resistance of the UWC mixes, pH and turbidity were measured following a method used in a previous study [42]. First, 500 g of UWC was slowly poured into a beaker. After 3 min, 100 mL of water from the top surface of the beaker was extracted using a pipette, its pH was immediately evaluated with a pH meter, and its turbidity was measured in accordance with EN ISO 7027 [43] using a turbidity meter (TB 300IR, Lovibond, Amesbury, UK).
To evaluate the mechanical properties of UWC concrete mixes, the compressive strength of the concrete cylinders with dimensions of Φ 100 mm × 200 mm at 7 and 28 days of curing was measured using a universal testing machine (UTM, Shimadzu, CCM-200A; Shimadzu Corporation, Kyoto, Japan) and following ASTM C39 [44]. Three replicates were used for each mix.
Isothermal calorimetry is a convenient means of examining the hydration characteristics of cementitious materials at early ages. The heat evolution was investigated using a semi-adiabatic calorimeter, as in a previous study [45]. The specimens were placed in the calorimeter 2 min after the water and binder were mixed. The temperature change was measured every 3 min for 48 h.
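The peak time and peak temperature variation reported later (Table 2) can be read off such a record by locating the maximum of the temperature history. A minimal sketch with a synthetic, illustrative curve (one reading every 3 min for 48 h as described above; the Gaussian shape and values are assumptions, not measured data):

```python
import numpy as np

def hydration_peak(time_min, temp_c):
    """Return (peak time, peak temperature) of a calorimetric record."""
    i = int(np.argmax(temp_c))
    return time_min[i], temp_c[i]

# synthetic record: one reading every 3 min for 48 h, peak near 940 min
t = np.arange(0.0, 48 * 60 + 1, 3.0)
temp = 20.0 + 2.4 * np.exp(-(((t - 940.0) / 300.0) ** 2))
peak_time, peak_temp = hydration_peak(t, temp)
```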
The hydration products of the UWC concrete specimens at later ages were measured via thermogravimetric analysis (TGA). The powder samples were obtained from pieces of the concrete specimens used for the compressive strength test. A thermogravimetric analyzer (TGA7 PERKIN ELMER, TA Instruments, New Castle, DE, USA) was used for TGA analysis in an N 2 environment at a heating rate of 10 K/min within a temperature range of 2 to 1000 °C.
Slump Flow Test
To evaluate the effects of NS and MgO on the workability of UWC concrete mixes, a slump flow test was conducted, as shown in Figure 2. The slump flow decreased with the addition of NS and its replacement with MgO. For example, slump values of 610 mm, 601 mm and 574 mm were measured for the control, NS1, and NS2 concrete mixes, respectively, mainly due to the large specific surface area of the NS powder. The fine particles of NS in the cement matrix had the strongest effect on the total water demand of the concrete mixes [46]. Therefore, the finer the particle powder added, the less workable the UWC. In addition, the incorporation of MgO in the UWC concrete mixes reduced the slump flow value. For instance, slump flow values of 610 mm, 598 mm, and 562 mm were measured for the control, M5, and M10 specimens, respectively. The decrease in fluidity with MgO was similar to the results of a previous study [47]. Finer MgO particles enhanced the cohesiveness of the cement paste, which reduced the fluidity. Furthermore, Zhang et al. [47] reported that the incorporation of reactive MgO decreased the workability of a cement composite, which increased its water demand to achieve a similar flow value. Figure 3 shows the rheological properties of the UWC mixes with different amounts of NS and MgO. Figure 3a shows the relationship between viscosity and shear rate, and Figure 3b shows the shear stress versus shear rate flow curves for the UWC mixes. Figure 3a illustrates that the enhancement of the shear rate decreased the viscosity. The addition of NS and replacement OPC with MgO increased the viscosity compared to that of the control specimens. Among the five UWC mixes, that to which 2% NS was added showed significant viscosity enhancement, followed by the NS1, M5, M10, and control specimens. Figure 3b shows that the shear stress increased when shear rate increased. 
In the region with a low shear rate of approximately 10 s⁻¹, the shear stress was significantly reduced, probably because of the destruction of flocculated structures due to rotor rotation [48]. The shear stress results are consistent with the viscosity values. As the amounts of NS (1% to 2%) and MgO (5% to 10%) increased, the shear stress rose compared to that of the control specimen. However, NS enhanced the viscosity more significantly than MgO. The higher UWC viscosity values could be attributed to the finer particle sizes with higher specific surface areas of NS and MgO, which required more water to achieve flow [49]. As NS has a much smaller particle size than MgO, its viscosity enhancement effect was greater.
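Flow curves like those in Figure 3b are commonly reduced to a Bingham model, τ = τ₀ + μ_p·γ̇, whose yield stress τ₀ and plastic viscosity μ_p follow from a linear fit. A minimal sketch with synthetic, illustrative data (not the measured curves from this study):

```python
import numpy as np

def bingham_fit(shear_rate, shear_stress):
    """Least-squares fit of tau = tau0 + mu_p * gamma_dot.

    Returns (yield stress tau0, plastic viscosity mu_p).
    """
    mu_p, tau0 = np.polyfit(shear_rate, shear_stress, 1)
    return tau0, mu_p

# synthetic flow curve: tau0 = 35 Pa, mu_p = 2.5 Pa.s (illustrative values)
gamma_dot = np.linspace(10.0, 100.0, 10)  # shear rate, 1/s
tau = 35.0 + 2.5 * gamma_dot              # shear stress, Pa
tau0, mu_p = bingham_fit(gamma_dot, tau)
```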
Setting Time
The initial setting time, the final setting time, and the setting interval (the duration between the initial and final set) of the UWC pastes with different NS and MgO contents are illustrated in Figure 4. Both NS and MgO incorporation into the UWC mixes reduced the initial and final setting times compared to the control specimen. With 1% and 2% NS and with 5% and 10% MgO, the initial setting time decreased by 60.1%, 79%, 38.46%, and 70.63%, respectively, and the final setting time decreased by 42.42%, 58.1%, 12.1%, and 33.84% compared to the control specimen. The results showed that the setting time decreased as the NS and MgO contents increased; however, NS reduced the setting time more significantly than did MgO. The reduction in the setting times of the UWC mixes that contained NS and MgO might be related to the finer particle sizes and higher surface areas of NS and MgO. Zhang et al. [50] found that NS decreased the setting time by reducing the dormant period and enhancing cement hydration. Li et al. [51] reported that the addition of NS promoted the gelation of C-S-H gels, which reduced their setting time. For MgO, however, while the initial setting time decreased, the setting interval lengthened compared to that of the control specimen. This was due to the fine particle size and hydration properties of MgO. The finer particle size of MgO compared to that of OPC gives it a larger surface area and a higher water demand, which shortens the initial setting time. The delay in the final set could have been due to Mg(OH) 2 formation during MgO hydration. Polat et al. [52] found that Mg(OH) 2 formation slowed the hydration process and delayed the setting time by encircling the cement particles.
Anti-Washout Resistance
To evaluate the anti-washout resistance of the UWC concrete mixes, pH measurements and a turbidity test were conducted. Figure 5 shows the pH and turbidity values of each of the mixes. UWC generally loses quality during on-site construction because cement materials are washed off and diluted by water flow [37]. If segregation and washout occur while the concrete mixes are poured, the cement particles, which have a high pH value, raise the overall pH and turbidity of the surrounding water. Lower pH and turbidity values therefore signify better anti-washout resistance. As shown in Figure 5, all the mixes showed similar tendencies in terms of pH and turbidity values. The results indicate that the addition of NS powder reduces the pH and turbidity values of UWC mixes. For example, pH values of 11.9, 11.22, and 11.07, and turbidity values of 307.5 NTU, 192 NTU, and 120 NTU, were reported for the control, NS1, and NS2 specimens, respectively. This can be attributed to the reduced segregation of the UWC concrete mixes by the fine particles of NS because of their cohesive effect, as mentioned above. Senff et al. [53] reported that the incorporation of NS can reduce the diameter of the spread on the flow table due to increased cohesiveness. For MgO replacement, pH values of 11.9, 11.63, and 11.31 and turbidity values of 307.5 NTU, 256.5 NTU, and 194.5 NTU were measured for the control, M5, and M10 specimens, respectively. The results showed that pH and turbidity decreased with increased MgO replacement. The increase in the anti-washout resistance of the UWC that contained MgO presumably occurred because the fine particles of MgO increased the cohesion of the concrete mixes. Both NS and MgO increased the cohesion of the UWC mixes due to their finer particle sizes compared to that of OPC, which enhanced the anti-washout resistance of the UWC.
Compressive Strength Test
To investigate the effects of NS and MgO on the mechanical properties of the UWC concrete, a compressive strength test was conducted after 7 days and 28 days of curing. The results, shown in Figure 6, indicate that the compressive strength increased with curing age in all mixes because the continued hydration of cementitious materials increases the compressive strength. The 28-day compressive strengths attained for C, NS1, NS2, M5, and M10 were 36.73 MPa, 40.9 MPa, 41.27 MPa, 34.11 MPa, and 32.88 MPa, respectively. After 7 days and 28 days, the compressive strength of the concrete that contained NS was slightly higher than that of the concrete without NS. This strength development with NS was due to the fine particles and high pozzolanic activity of NS, which provide extra nucleation sites for cement particles, accelerate their hydration, and generate additional hydration products. Yu et al. [25] found that the application of nanoparticles can increase the physical and mechanical properties of concrete by refining its microstructure. In addition, Scrivener et al. [54] and Xu et al. [55] mentioned that nanoparticles can accelerate the hydration process, densify the microstructure, and improve the Interfacial Transition Zone (ITZ) of concrete, which decreases the porosity and enhances the compressive strength of a cement composite. However, the incorporation of MgO into UWC reduced the compressive strength after 7 days and 28 days compared to that of the control specimen. As the MgO proportion increased from 0% to 10%, the compressive strength of M5 and M10 decreased by 7.13% and 10.48%, respectively, after 28 days of curing, compared to the control specimen. This is because the amorphous active silica in MgO can react with MgO or Mg(OH) 2 and water during the hydration process [56,57]. Unluer et al. [58] reported that MgO reduced the compressive strength of cement due to the formation of brucite, which is weaker than the C-S-H formed in normal cement hydration.
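The quoted strength reductions follow directly from the reported 28-day means. The sketch below reproduces the 7.13% and 10.48% figures from the strengths stated above (the helper name is ours, not from the paper):

```python
def strength_loss_pct(control_mpa, mix_mpa):
    """Percent decrease in compressive strength relative to the control."""
    return 100.0 * (control_mpa - mix_mpa) / control_mpa

# 28-day strengths reported in the paper (MPa)
loss_m5 = strength_loss_pct(36.73, 34.11)   # ~7.13 %
loss_m10 = strength_loss_pct(36.73, 32.88)  # ~10.48 %
```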
Hydration Heat
The isothermal calorimetry method is used to investigate the effects of temperature on the reaction kinetics of cementitious materials [40]. Figure 7 shows the calorimetric curves of heat release from the UWC specimens with (a) NS and (b) MgO. Detailed information on peak time and peak temperature variation is listed in Table 2. The first sharp peak was produced from the dissolution of the dry mixtures, and occurred immediately after the mixtures were mixed [59]. The second peak was generated between 500 and 2000 min and was related to the polymerization degree of the binder materials [60]. Figure 7a shows that the incorporation of NS into the UWC specimens accelerated the peak time and increased the peak temperature variation. Peak times of 940 min, 784 min, and 696 min were reported for the control, NS1, and NS2 specimens, respectively. In addition, the control, NS1, and NS2 specimens showed temperature variations of 2.4 • C, 3.73 • C, and 3.38 • C, respectively. This effect was mainly due to the seeding effect of nano-materials. Moreover, the acceleration effect of NS particles on cement hydration was due to the additional nucleation of C-S-H caused by the increased surface area of the nano-particles [61,62]. However, NS1 showed higher temperature variation compared to NS2. This phenomenon might be due to the dispersion of NS particles in the cement composite. The usage of more than adequate amounts of NS is considered to reduce efficiency due to the dispersion problem [63]. Figure 7b shows the effect of MgO content on the heat release of UWC specimens. When the amount of MgO increased, the peak time tended to be delayed, and the peak temperature was lower than that of the specimens without MgO. Although MgO has a smaller particle size than OPC, the UWC specimens that contained MgO showed lower hydration rates and heat release due to the lower reactivity of MgO. Mehta et al. 
[64] reported that the MgO within cement slightly delayed hydration by producing magnesium hydroxide, which is insoluble. Figure 8 shows the Derivative Thermo Gravimetry (DTG) curves of the UWC mixes with different amounts of NS and MgO after 28 days of curing. All the UWC mixes showed mass loss peaks in four temperature regions. The mass loss below 105 °C is attributed to the evaporation of free water from the pore structure [65,66]. Lothenbach et al. [67] found that mass losses between 50 and 200 °C were caused by the dehydration of C-S-H, and the secondary peaks at around 146 °C were associated with the decomposition of ettringite. Bernal et al. [68] and Rozov et al. [69] reported that the mass losses between 250 and 400 °C were related to the thermal decomposition of hydrotalcite. In the present study, no carbonate source was used, so the mass loss in this region was mainly due to the dehydroxylation of hydrotalcite [70]. In addition, the peaks at around 420 °C and between 660 and 700 °C were associated with the decarbonation of CaCO₃; however, this carbonate region can be generated through powder manufacturing [71]. The results showed that increasing the NS content increased the mass loss below 200 °C, as seen on the DTG curve of the UWC mixes. For specimens with and without NS, the type of hydration products remained unchanged. Only M5 and M10, which contained MgO, showed significant mass losses between 250 and 400 °C, which indicated the presence of hydrotalcite in the UWC with MgO. Figure 9 shows the mass loss fractions contributed by the C-S-H (50–200 °C) and the hydrotalcite (250–400 °C) regions of the UWC mixtures as functions of NS and MgO content. In the 50 to 200 °C region, the specimens that contained NS had higher mass loss than the control specimens. For example, mass loss fractions of 8.4%, 8.9%, and 10.3% were noted in the control, NS1, and NS2 specimens, respectively.
The increase in the C-S-H gel is attributed to the promotion of cement hydration by NS due to its pozzolanic activity and nucleation effect [72]. In addition, in the 250 to 400 • C region, the specimens that contained MgO showed enhanced mass loss fractions, which led to higher amounts of hydrotalcite. A similar phenomenon was reported in a previous study. Yoon et al. [73] found that, when MgO was incorporated in a cement composite, the MgO combined with Al and promoted the generation of the hydrotalcite phase in the cement composite.
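The region-wise mass-loss bookkeeping behind such figures can be sketched as below; the DTG curve here is synthetic (two illustrative Gaussian peaks standing in for the C-S-H and hydrotalcite signals), not measured data:

```python
import numpy as np

# Integrate a (synthetic, illustrative) DTG curve over the temperature windows
# named in the text to obtain per-region mass-loss contributions.
T = np.linspace(25.0, 800.0, 2000)                   # temperature axis, °C
dtg = (0.05 * np.exp(-((T - 120.0) / 30.0) ** 2)     # stand-in C-S-H peak
       + 0.02 * np.exp(-((T - 330.0) / 25.0) ** 2))  # stand-in hydrotalcite peak

def mass_loss(t_lo, t_hi):
    """Trapezoidal integral of the DTG signal over [t_lo, t_hi]."""
    m = (T >= t_lo) & (T <= t_hi)
    return float(np.sum(0.5 * (dtg[m][1:] + dtg[m][:-1]) * np.diff(T[m])))

csh = mass_loss(50.0, 200.0)    # C-S-H dehydration window
htc = mass_loss(250.0, 400.0)   # hydrotalcite window
print(csh > htc)                # the larger C-S-H peak dominates here
```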
Gaia Data Release 3: Exploring and mapping the diffuse interstellar band at 862 nm
Context. Diffuse interstellar bands (DIBs) are common interstellar absorption features in spectroscopic observations but their origins remain unclear. DIBs play an important role in the life cycle of the interstellar medium (ISM) and can also be used to trace Galactic structure. Aims. Here, we demonstrate the capacity of the Gaia Radial Velocity Spectrometer (RVS) in Gaia DR3 to reveal the spatial distribution of the unknown molecular species responsible for the most prominent DIB at 862 nm in the RVS passband, exploring the Galactic ISM within a few kiloparsecs from the Sun. Methods. The DIBs are measured within the GSP-Spec module using a Gaussian profile fit for cool stars and a Gaussian process for hot stars. In addition to the equivalent widths and their uncertainties, Gaia DR3 provides their characteristic central wavelength, width, and quality flags. Results. We present an extensive sample of 476 117 individual DIB measurements obtained in a homogeneous way covering the entire sky. We compare spatial distributions of the DIB carrier with interstellar reddening and find evidence that DIB carriers are present in a local bubble around the Sun which contains nearly no dust. We characterised the DIB equivalent width with a local density of 0.19 ± 0.04 Å/kpc and a scale height of 98.60 (+11.10/−8.46) pc. The latter is smaller than the dust scale height, indicating that DIBs are more concentrated towards the Galactic plane. We determine the rest-frame wavelength with unprecedented precision (λ0 = 8620.86 ± 0.019 Å in air) and reveal a remarkable correspondence between the DIB velocities and the CO gas velocities, suggesting that the 862 nm DIB carrier is related to macro-molecules. Conclusions. We demonstrate the unique capacity of Gaia to trace the spatial structure of the Galactic ISM using the 862 nm DIB.
Introduction
Diffuse interstellar bands (DIBs) are interstellar absorption features that primarily exist in the optical and near-infrared (NIR) wavelength range, the physical origin of which is still debated. The name was formally given by Merrill (1930), where 'diffuse' refers to the fact that their profiles are broader than those of interstellar atomic lines (e.g. NaI lines). DIBs presumably originate from molecular absorption, which is supported by the fact that their central wavelength does not match any known atomic transition lines. The fine structure observed in some DIBs also suggests that the molecular carriers are probably in the gas phase.
Nowadays, molecules are strongly suggested to be associated with the DIB carrier, because DIB profiles are usually much broader than atomic lines and contain substructures even through single-cloud sight lines (e.g. Sarre et al. 1995, Cami et al. 1997, Kerr et al. 1998, Galazutdinov et al. 2008). Carbon-bearing molecules are the most favoured species in this respect as carbon can form many stable compounds and is relatively abundant in the Universe (Puget & Leger 1989). The DIB at 862 nm (hereafter referred to as DIB λ862) is a strong band, but was not identified until 1975 (Geary 1975), more than 50 years after the discovery of the first DIBs, because the wavelength range beyond 8600 Å was not covered by earlier work. The DIB λ862 was confirmed by Sanner et al. (1978), who further reported λ0 = 8620.7 ± 0.3 Å and a tight linear correlation between the DIB equivalent width (EW862) and the colour excess, that is E(B − V) = (2.85 ± 0.11) × EW862 (coefficient calculated by Kos et al. 2013). Munari (1999) and Munari (2000) made preliminary studies of the relation between the EW862 of DIB λ862 and interstellar extinction. This author found a surprisingly tight correlation with E(B − V)/EW862 = 2.63 (Munari 1999) and 2.69 ± 0.03 (Munari 2000), respectively. Therefore, the DIB λ862 was suggested to be a tracer of Galactic extinction in the context of the Gaia mission, while Krełowski (2018) and Krełowski et al. (2019) argued that E(B − V)/EW862 can vary depending on the line of sight. Munari et al. (2008) measured the DIB λ862 in the spectra of 68 early-type stars observed by the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006) and derived a very good correlation between EW862 and E(B − V) with E(B − V)/EW862 = 2.72 ± 0.03. These results, as well as those of Munari (1999) and Munari (2000), were all consistent with each other, but none agreed with those of Wallerstein et al.
(2007), who derived a much higher ratio of E(B − V)/EW862. Munari et al. (2008) determined the rest-frame wavelength of DIB λ862 as λ0 = 8620.4 ± 0.1 Å based on the assumption that the average velocity of their carriers towards the Galactic center is approximately zero, as derived from the interstellar-medium (ISM) radial-velocity map of Brand & Blitz (1993).
To make use of the vast number of cool-star (3500 ≲ Teff ≲ 7000 K) spectra in RAVE, Kos et al. (2013) implemented a data-driven method to derive the EW862 of interstellar spectra using real spectra at high Galactic latitudes (b < −65°) and furthermore stacked spectra in small spatial volumes to increase the final signal-to-noise ratio (S/N) and measure EW862 with high precision. In this way, they confirmed the linear EW862−E(B − V) correlation in a statistical way.
Based on measurements with a large number of RAVE spectra, Kos et al. (2014) built the first projected DIB λ862 intensity map, mainly within 3 kpc from the Sun, where for the first time the large-scale structure of the distribution of the DIB λ862 carrier was shown. The findings of these authors further suggested an exponential distribution of EW862 in the direction perpendicular to the Galactic plane with a scale height of 209.0 ± 11.9 pc, larger than the scale height of 117.7 ± 4.7 pc for the dust derived by their AV map. Puspitarini et al. (2015) measured the DIB λ862 in the spectra of 64 late-type stars from the Gaia−ESO (GES) survey (Gilmore et al. 2012) towards a Galactic anticentre region at (ℓ, b) = (212.9°, −2.0°). Puspitarini et al. (2015) fitted the observed spectra with synthetic spectra containing stellar components, telluric transmissions, and a DIB empirical profile. For DIB λ862, they obtained the empirical model by averaging the profiles detected in several spectra based on the data analysis reported by Chen et al. (2013).
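The scale-height comparison above amounts to an exponential vertical profile, ρ(z) = ρ0 exp(−|z|/h). A minimal sketch with the Kos et al. (2014) values quoted here (DIB: 209.0 pc, dust: 117.7 pc) shows the slower vertical fall-off of the DIB carrier in that comparison:

```python
import math

# Exponential vertical profile rho(z)/rho0 = exp(-|z| / h); scale heights (pc)
# taken from the Kos et al. (2014) values quoted in the text.
def vertical_profile(z_pc, h_pc):
    return math.exp(-abs(z_pc) / h_pc)

rel_dib = vertical_profile(300.0, 209.0)    # DIB carrier at |z| = 300 pc
rel_dust = vertical_profile(300.0, 117.7)   # dust at |z| = 300 pc
print(rel_dib > rel_dust)  # the DIB layer is thicker in this comparison
```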
Similar to Puspitarini et al. (2015), Krełowski et al. (2019) also argued that a simple Gaussian fit was not enough to describe the irregular profile of the DIB λ862. They therefore used the observation towards BD+40 4220, a heavily reddened and rapidly rotating star, as a template for the profile of λ862. Measurements of other targets were obtained by rescaling the depth of the template to match the observed band profiles.
Using this method, Krełowski et al. (2019) measured 56 high-resolution spectra (R > 30 000) and derived a ratio of E(B − V)/EW862 = 2.03 ± 0.15 with an offset of 0.22, which was close to the result of Puspitarini et al. (2015). Maíz Apellániz (2015) showed a linear relation between EW862 and the colour excess E(4405 − 5495) up to AV ∼ 6 mag with a Pearson coefficient of rp = 0.878. All previous studies suggested a linear relation between EW862 and extinction except Damineli et al. (2016), who reported a quadratic relation based on the observations of 12 bright field stars and 11 members of the Westerlund 1 cluster. Their relation is in good agreement with those found by Wallerstein et al. (2007) and Munari et al. (2008) for EW862 < 0.8 Å.
In this paper, we discuss the DIB λ862 measurements of nearly half a million DIBs measured by the RVS spectrometer. This is, by one order of magnitude, the largest sample of individual DIB measurements with full sky coverage to be obtained so far.
In Sect. 2, we discuss the DIB λ862 sample. In Sect. 3 we define our high-quality sample and in Sect. 4 we validate the DIB λ862 measurements in the HR diagram. In Sect. 5 we show the correlation with the dust extinction and in Sect. 6 we present our analysis of the spatial distribution of the DIBs λ862. In Sect. 7 we describe how we determined the rest-frame wavelength of DIB λ862, and in Sect. 8 we look briefly at an application to kinematic studies. We conclude in Sect. 9.
Description of the sample of diffuse interstellar bands
This work makes use of the DIB λ862 parameterisation derived from the Gaia RVS spectra using the General Stellar Parameteriser spectroscopy (GSP-Spec, Recio-Blanco et al. 2022) module and made available through the astrophysical_parameters table of the Gaia third data release (DR3). We note that the RVS wavelength range is [845, 870] nm (Sartoretti et al. 2018), and its medium resolving power is R = λ/∆λ ∼ 11 500 (Cropper et al. 2018). In addition to the DIB λ862 parameterisation, GSP-Spec estimates the main atmospheric parameters and the individual abundances of 12 different chemical elements from Gaia RVS spectra of single stars. When necessary (e.g. stars with Teff < 7000 K), the DIB λ862 spectral parameterisation is based on the MatisseGauguin GSP-Spec workflow. More details on the DIB λ862 measurement algorithms can be found in Zhao et al. (2021a). A GSP-Spec catalogue flag was implemented (Recio-Blanco et al. 2022) during the post-processing with a chain of 41 digits including all the adopted failure criteria and uncertainty sources considered during the post-processing. In this chain, value '0' is the best, and '9' is the worst, generally implying the parameter masking. For our purposes, we use only the first 13 characters (see Sect. 3, Table 2). We performed a local renormalisation of the spectrum around the DIB λ862 feature (35 Å wide around its central wavelength) for each Gaia-RVS spectrum. We carried out a preliminary fit using a preliminary detection of the DIB λ862 profile, and sources where noise is at the level of or exceeds the depth of the DIB λ862 feature were eliminated. Only detections above the 3σ level are considered as true detections. In order to perform the main fitting process of the DIB λ862, our sample is separated into cool (3500 < Teff ≤ 7000 K) and hot (Teff > 7000 K) stars. For cool stars, we divided the observed spectrum by the best matching synthetic spectrum from GSP-Spec (corresponding to the derived atmospheric parameters), and fitted the
DIB λ862 profile with a Gaussian function and a constant that accounts for the continuum:

f(λ) = C − p0 exp[−(λ − p1)² / (2 p2²)],   (1)

where p0 and p2 are the depth and width of the DIB profile, p1 is the measured central wavelength, C is the constant continuum, and λ is the spectral wavelength.
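A minimal sketch of this cool-star fit (a Gaussian absorption plus a constant continuum) on synthetic data; the wavelength window, parameter values, and noise level below are illustrative, not Gaia data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Model: f(lam) = C - p0 * exp(-(lam - p1)^2 / (2 p2^2))
def dib_model(lam, p0, p1, p2, C):
    return C - p0 * np.exp(-((lam - p1) ** 2) / (2.0 * p2 ** 2))

lam = np.linspace(8605.0, 8640.0, 350)        # Angstrom, local window
truth = (0.05, 8623.1, 1.8, 1.0)              # depth, centre, width, continuum
rng = np.random.default_rng(0)
flux = dib_model(lam, *truth) + rng.normal(0.0, 1e-3, lam.size)

popt, _ = curve_fit(dib_model, lam, flux, p0=(0.03, 8622.0, 2.0, 1.0))
p0_fit, p1_fit, p2_fit, C_fit = popt
print(round(p1_fit, 1))  # recovered central wavelength, close to 8623.1
```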
For hot stars, we applied a Gaussian process similar to Kos (2017) in which the DIB λ862 profile is fitted by a Gaussian process regression (Gershman & Blei 2012). In order to extract the information of the DIB feature, we applied a Gaussian mean function (Eq. 1) with C ≡ 1. For the kernels, we followed the strategy of Kos (2017) and used exponential-squared kernel models for the stellar absorption lines:

k(λi, λj) = a² exp[−(λi − λj)² / (2 l²)],

and a Matérn 3/2 kernel model for the correlated noise:

k(λi, λj) = a² (1 + √3 |λi − λj| / l) exp(−√3 |λi − λj| / l),

where a scales the kernels, and l is the characteristic width of each kernel. We refer to Zhao et al. (2021a) for a more detailed description of this process.
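The two kernels named above have standard closed forms; the functions below are a sketch of those textbook expressions (with a the scale and l the characteristic width), not code from the Gaia pipeline:

```python
import numpy as np

def exp_squared_kernel(x1, x2, a, l):
    """Exponential-squared (RBF) kernel: a^2 * exp(-(x1-x2)^2 / (2 l^2))."""
    r = np.abs(x1 - x2)
    return a ** 2 * np.exp(-(r ** 2) / (2.0 * l ** 2))

def matern32_kernel(x1, x2, a, l):
    """Matern 3/2 kernel: a^2 * (1 + sqrt(3) r / l) * exp(-sqrt(3) r / l)."""
    s = np.sqrt(3.0) * np.abs(x1 - x2) / l
    return a ** 2 * (1.0 + s) * np.exp(-s)

# Both equal a^2 at zero separation and decay with distance.
print(exp_squared_kernel(0.0, 0.0, 0.5, 1.0))  # 0.25
print(matern32_kernel(0.0, 0.0, 0.5, 1.0))     # 0.25
```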
For each of the sources, the EW862, depth (p0), central wavelength (p1), and width (p2), together with their uncertainties, are determined with

EW862 = ∫ [C − f(λ)] / C dλ = √(2π) p0 p2 / C,

where C is the continuum level and p2 = FWHM/(2√(2 ln 2)), where FWHM is the full width at half maximum of the DIB λ862 profile.
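Assuming the Gaussian model for the DIB profile, the equivalent-width integral evaluates analytically to √(2π) p0 p2 / C. A sketch checking this against a direct numerical integration (parameter values are illustrative):

```python
import numpy as np

p0, p1, p2, C = 0.05, 8623.1, 1.8, 1.0   # depth, centre, width (sigma), continuum

lam = np.linspace(p1 - 30.0, p1 + 30.0, 20001)
flux = C - p0 * np.exp(-((lam - p1) ** 2) / (2.0 * p2 ** 2))

# EW = integral of (C - f(lam)) / C over wavelength (trapezoidal rule)
depth = (C - flux) / C
ew_numeric = float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(lam)))
ew_analytic = np.sqrt(2.0 * np.pi) * p0 * p2 / C

print(round(ew_numeric, 4), round(ew_analytic, 4))  # both ~0.2257 Angstrom
```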
We consider two main uncertainties on the derived EW: the random noise error (σ_noise), which is related to the signal-to-noise ratio (S/N) of the spectrum, and the mismatch between the observed spectrum and the synthetic one (σ_spect). σ_noise was estimated for different DIB profiles using a random-noise simulation (see Sect. 2.6 in Zhao et al. 2021a for more details). The total uncertainty of the EW is considered to be σ²_EW = σ²_noise + σ²_spect. We refer to Zhao et al. (2021a) for a more detailed description of the derived uncertainties.
Quality flags (QFs) ranging from QF = 0 (highest quality) to QF = 5 (lowest quality) are generated. The defined values of the QF depend on the parameters p0, p1, and p2, but also on the global noise level RA, defined by the standard deviation of the data-model residuals between 8605 and 8640 Å, as well as the local noise level RB within the DIB λ862 profile. Table 1 shows the definition of the QF values. For a more detailed description of the QF, we refer to Zhao et al. (2021a) and Recio-Blanco et al. (2022). In this paper, we concentrate on a high-quality sample (QF ≤ 2, see Sect. 3) but we stress that the full DIB λ862 sample should be scientifically exploited; for example, weak DIBs λ862 in low-extinction areas.
The full GSP-Spec sample contains 5 591 594 sources. Of these, 476 117 have a valid DIB λ862 measurement (∼8.5%). The number of sources for each QF is specified in Tab. 1.
Figure 1 shows the distribution on the sky of the DIB λ862 measurements at a resolution of 1.8° (HEALPix map with level 5). As expected, the DIBs λ862 are concentrated towards the Galactic plane, which is even more pronounced for the high-quality DIBs λ862 (right panel).
Table 1. Definition of the Quality flags. RA is the fitting residual between the observed and synthetic spectrum for the global RVS spectrum, and RB is the region close to the DIB λ862 feature. Targets with a central wavelength beyond 8616.6–8628.1 Å in vacuum are all labelled as QF = 5. The last column gives the number of sources for each QF.
Figure 2 displays the relation between EW862 and the E(BP − RP) interstellar reddening measure from GSP-Phot (Andrae 2022). We see that DIBs λ862 with low QFs (QF > 2) show very small EW862 but a large range of E(BP − RP), which is not the case for the high-quality (HQ) DIB λ862 measurements (QF ≤ 2, see Sect. 3).
Definition of the high-quality sample
Figure 3 displays the GSP-Spec Kiel diagram of a subsample with QF < 5 as a function of the fractional uncertainty of the EW862. The vast majority of our sources show typical uncertainties below 20%. However, on the red giant branch (RGB) sequence, the cooler stars (which are in general more metal-rich) show larger uncertainties compared to the hotter ones. This can be explained by the fact that for cooler metal-rich stars, in general, we see a poorer agreement between the observed and the synthetic spectra due to the presence of molecular bands. This is also revealed by the larger log χ² values from GSP-Spec.
We also notice higher uncertainties for hot dwarf stars in the range 7000 < Teff < 8000 K. The majority of those stars are classified as very metal-poor, with [M/H] < −3 dex, by GSP-Spec. They further exhibit very large vsini values from ESP-HS (Extended Stellar Parametrizer for Hot Stars; see Sect. 5.3). In addition to the parameter degeneracy between Teff and [M/H] for high-temperature stars, these objects present large vsini values, which are not taken into account in the present GSP-Spec parameterisation, inducing parameter biases (cf. Recio-Blanco et al. 2022). Applying the specifically defined GSP-Spec flags (see Tab. 2) removes the majority of these stars.
Figure 4 shows the distribution of the fractional uncertainties (err(EW862)/EW862) with QF < 5. A clear bimodal distribution is apparent that is related to cool stars (Teff < 4500 K) with relatively weak DIBs λ862 (<0.2 Å) and a mismatch between the observed and the synthetic spectrum. We decided to reject sources with uncertainties larger than 35%. In addition, we decided to neglect DIB λ862 measurements outside the wavelength interval 8620 < Cobs < 8626 Å, where Cobs is the measured central wavelength in the heliocentric frame, with Cobs = p1 + vrad × p1/c, where vrad is the stellar radial velocity and c the velocity of light, because the majority of those are weak DIBs λ862, where the determination of the p1 parameter could be corrupted and lead to high, unrealistic velocities. We stress that p1 and Cobs are reported in the vacuum.
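The heliocentric correction quoted above, Cobs = p1 + vrad × p1/c, is a one-liner; a sketch with illustrative numbers:

```python
C_KMS = 299792.458  # speed of light, km/s

def heliocentric_centre(p1_angstrom, v_rad_kms):
    """C_obs = p1 + v_rad * p1 / c, wavelength in Angstrom, v_rad in km/s."""
    return p1_angstrom + v_rad_kms * p1_angstrom / C_KMS

# A star at v_rad = +30 km/s shifts a centre fitted at 8623.14 A redward:
shift = heliocentric_centre(8623.14, 30.0) - 8623.14
print(round(shift, 2))  # 0.86 Angstrom
```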
Our HQ sample is defined based on the criteria specified in Tab. 2, which comprises 141 103 objects. For a detailed explanation of the GSP-Spec flag we refer here to Recio-Blanco et al. (2022).
The Kiel diagram
Figure 5 shows the Kiel diagram colour-coded as a function of the EW862 (left panel), the corresponding Gaia distances from Gaia EDR3 (middle panel, Bailer-Jones et al. 2021), and the DIB λ862 width (p2). The very similar trend in these diagrams is striking, and indicates a clear relation between the EW862 of the DIB λ862 carrier and its distance, that is, stars with larger distances show larger EW862. This is to be expected: as an interstellar feature, the DIB λ862 profile measured in the spectrum of a background star is the result of an integration of the DIB λ862 carrier between the observer and the star. DIB λ862 strength and dust extinction increase along the line of sight, and so both of them correlate with the distance and therefore also with each other. Also, we note that the distance of the background star is only an upper limit to the true distance of the DIB λ862 carrier clouds along the line of sight (Zasowski et al. 2015). As shown by Zhao et al. (2021b), direct measurements of the DIB λ862 carrier clouds can be obtained using kinematic distances. This method will be further investigated in another paper.

Table 2. Definition of our high-quality sample.
The right panel of Fig. 5 shows how the measured width of the DIB λ862 (the value of the parameter p2) increases with decreasing surface gravity; that is, we see that widths in giants are generally larger than in dwarfs. One may also conclude that the widths of DIB λ862 absorptions increase with distance, and explain this as a consequence of a superposition of an increasing number of clouds at slightly different radial velocities which accumulate along the line of sight. However, we also see DIBs λ862 with large widths for close-by stars with Teff < 5000 K and log g > 3. This could be a consequence of spectral mismatches between observed spectra and the templates we use. These systematic trends will be investigated in a future work, but for now we stress that the measured widths of the DIB λ862 should be interpreted with caution.
From Fig. 5 we see that stars with 5000 < Teff < 7000 K and log g < 2.5 have strong DIBs λ862. These massive stars lie at distances of between 2 and 4 kpc and most of them are located in the closest spiral arms (e.g. the Sagittarius/Carina, Local, and Perseus arms). This is in perfect agreement with the findings of Recio-Blanco et al. (2022), who clearly identified those objects in their GSP-Spec Kiel diagram as massive stars that are tracers of the spiral arm structure, in agreement with the spatial maps derived from Poggio et al. (2021). The DIB λ862 measurements can therefore be considered as an excellent tracer of spiral arm structures.
In contrast, our HQ sample lacks hot dwarf stars in the temperature range 7000 < Teff < 8000 K and 4.0 < log g < 4.5 because their EW862 uncertainties are too high due to their high vsini and therefore large uncertainties in their stellar parameters (see Sect. 3). A specific treatment of those stars is necessary but is beyond the scope of this work.
Correlation with dust extinction
As mentioned in Sect. 1, the DIB λ862 shows a strong correlation with measurements of interstellar reddening such as E(B − V) (e.g. Munari et al. 2008, Wallerstein et al. 2007, Kos et al. 2013). Here, we use the interstellar reddening E(BP − RP) derived from GSP-Phot as our main dust extinction tracer for individual objects. GSP-Phot provides a detailed characterisation of single stars based on their BP/RP spectra, including stellar parameters (Teff, log g, [M/H]) and extinction A0. We refer to Andrae (2022) for a detailed description of the GSP-Phot module. Due to the extensive filtering in GSP-Phot, only 66 144 stars in our sample have E(BP − RP) measurements from GSP-Phot. Figure 6 compares the distribution on the sky of the median EW862 of the DIB λ862 with the median E(BP − RP). Overall, we see similarities between these two maps, with both showing larger values in the Galactic plane. Nevertheless, we also see some differences: (i) The DIBs λ862 seem to be generally more concentrated towards the Galactic plane compared to the interstellar dust (see also Sect. 6.2). (ii) In the inner Galaxy (|ℓ| < 30°), (iv) In the Galactic anticentre region (|ℓ| > 160°), some specific asymmetric tails of the DIB λ862 carrier are visible (see third panel of Fig. 6), reaching large Galactic latitudes (b < −30°), which, interestingly, are absent in the northern hemisphere. A detailed comparison between the correlation of interstellar dust and the DIB λ862 carrier along certain lines of sight is now possible thanks to the full sky coverage of Gaia together with the distances; this should be further investigated.
EW862 versus E(BP − RP)
Figure 7 shows the correlation between E(BP − RP) and EW862.
We see the expected trend between EW862 and E(BP − RP), with a Pearson correlation coefficient (PCC) of 0.68 (the red circles with the uncertainty bars show the corresponding median values and their standard deviation). A linear fit through the median points (indicated by the red line in Fig. 7) is given by

E(BP − RP) = 4.507(±0.137) × EW862 − 0.026(±0.047).   (4)

However, stars that were classified as hot stars by GSP-Phot but as cool stars by GSP-Spec deviate from this relation (as indicated by the black open circles in Fig. 7), in the sense that E(BP − RP) is too high compared to the measured DIB EW862. Due to the degeneracy between temperature and extinction (see Andrae 2022), the temperatures of those stars are overestimated by GSP-Phot, leading to overestimation of E(BP − RP). The DIB EW862 can therefore be used to find outliers of E(BP − RP) measurements.
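The fit and correlation statistics above can be reproduced in a few lines; the data below are synthetic points drawn from the quoted median relation (slope 4.507, intercept −0.026) with added scatter, not Gaia measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
ew = rng.uniform(0.05, 0.6, 500)                      # synthetic EW862 (Angstrom)
ebprp = 4.507 * ew - 0.026 + rng.normal(0.0, 0.3, ew.size)

# Pearson correlation coefficient and a least-squares straight-line fit
r = np.corrcoef(ew, ebprp)[0, 1]
slope, intercept = np.polyfit(ew, ebprp, 1)

print(r > 0.5)                   # strongly correlated, as in the text
print(abs(slope - 4.507) < 1.0)  # slope recovered to within the scatter
```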
For highly extincted regions, the EW862 of the DIB λ862 should become smaller with increasing interstellar reddening and thus depart from a linear relation. Lan et al. (2015) attributed this behaviour to the 'skin effect', noting that the DIB strength per unit reddening depends on cloud opacity. Adamson et al. (1991) suggested that the DIB carriers must concentrate in the surface layers ('skin') of the clouds and that the carrier depletion might be related to the reduction of the radiation field in the cloud interiors. Adamson et al. (1994) observed this effect with the NIR DIB, something that was later confirmed by Elyajouri & Lallement (2019) for the APOGEE DIB in the dense cores of the Taurus, Orion, and Cepheus clouds. We do not see this effect in our sample, which could be due to a selection effect, in the sense that the Gaia RVS selection function does not trace the most extincted regions. Figure 8 shows the correlation between EW862 and E(B − V) as well as their corresponding linear fits. We notice a large variation in the derived E(B − V)/EW862, which is due to the use of different methods for extinction calculation, with a very high value of 4. 2016) imply that extinction measured from infrared emission is not only overestimated in some regions but presents systematic differences (larger values) compared to the values calculated using other methods.
Hot stars
In addition to the results obtained by GSP-Phot and GSP-Spec, the Apsis pipeline also contains the ESP-HS (Extended Stellar Parametrizer for Hot Stars) module, which specifically processes the BP/RP and RVS data for stars hotter than 7500 K (Gaia Collaboration 2022). The module provides the astrophysical parameters of O-, B-, and A-type stars, including an estimate of the interstellar extinction (A0, AG) and reddening E(BP − RP). The target overlap between GSP-Phot, GSP-Spec, and ESP-HS is small due to the post-processing filtering and quality assessment of the module, their Teff validity domain (e.g. the main valid AP domain of GSP-Spec is Teff < 8000 K), and/or parameter degeneracy. Keeping this in mind, there are 2929 ESP-HS hot stars with an estimate of the DIB EW862, and only 1142 that belong to the HQ sample. In the upper panel of Fig. 9, we plot the interstellar reddening against the DIB EW862 for the latter sample, which provides a Pearson correlation coefficient (PCC) of +0.69. Eight outliers were identified. A brief description of these is provided in Table A.1 (8 upper rows).
The hottest stars (labelled 1–3 in Fig. 9) are targets cooler than 7500 K (according to GSP-Spec), and those that were treated with non-adapted synthetic spectra by ESP-HS. Outlier '7' is known from Simbad (Wenger et al. 2000a) to exhibit emission. On the other hand, the Hα pseudo-EW provided by the ESP-ELS module is positive (i.e. no significant emission is found in Hα from the BP/RP spectrum), and its RVS spectrum appears normal. It therefore remains unclear as to why the derived APs (which include the extinction) do not provide a correct fit to the data. Outlier '6' has a very peculiar RVS spectrum belonging to an extreme He star (FQ Aqr). Outliers '4', '5', and '8' show good agreement between observed and RVS fitted spectra.
A similar trend is observed in the GSP-Phot vs. GSP-Spec data, and plotted in the two lower panels of Fig. 9. In the middle panel, the selection is solely based on the effective temperature provided by GSP-Spec. Targets with a DIB EW862 of greater than 0.5 Å are identified and numbered (Table A.1). With the exception of the star labelled '6', which shows an RVS spectrum typical for an early-B or late-O star, all the stars have spectral features usually seen in M- or late-K-type stars (which is confirmed by Simbad in two cases; in the other ones no additional information was found). Therefore, these are confirmed outliers, and to consistently (e.g. between the two GSP modules) remove those points, we performed a second selection based on the Teff derived by both modules (Teff > 7000 K). This last selection is plotted in the lower panel of Fig. 9, and provides a PCC = +0.77.
The first selection attempt (middle panel) provides a median E(BP − RP) versus EW 862 that is slightly lower than the relation obtained for the cooler stars (represented by the broken blue line), while the first and third ones are in fair agreement with this latter.The sample combination of the ESP-HS and GSP-Phot/GSP-Spec (Fig. 9, lower panel) selections provides 1 804 hot stars.
Comparison with the TGE dust map
The total galactic extinction (TGE) map is a full-sky 2D representation of the foreground extinction from the Milky Way towards extragalactic sources, which is constructed from selected sources at large distances beyond the Galactic disk.To derive this map, distant giants were selected in order to obtain a set of stars situated beyond the dust layer of the disk of the Galaxy.The median of extinctions derived by GSP-Phot was then used to assign an extinction value for each HEALPix at different levels.
For further details on the TGE maps, see Delchambre (2022).
In the following, we use the HQ DIB sample as defined in Sect. 3. In order to compare the EW862 of the DIB λ862 to the TGE map, it is first necessary to construct a HEALPix map of the EW862 in the same way as for the TGE map. We selected the DIB λ862 EW862 measurements based on their Galactic altitude (|z| > 300 pc) and then calculated the median EW862 in each HEALPix. Only HEALPixels with more than one DIB λ862 measurement were retained.
The resulting DIB EW862 HEALPix map is shown at level 5 in Fig. 10 (top left panel). We note that, due to our selection of DIB λ862 sources, this figure is not the same as the top panel of Figure 6. Also shown in the top right panel of Fig. 10 is the TGE map at level 5, where the value of a level-5 superpixel is the mean of the four level-6 pixels. Any level-5 HEALPix containing at least one level-6 HEALPix with insufficient tracers (less than three) is flagged as having no data. The lower left panel of Fig. 10 shows the resulting skymap of the EW862/A0 ratio, and the lower right panel shows a scatter plot of EW862 as a function of TGE A0. Although the DIB λ862 map does not cover the entire sky (due to a lack of sufficient tracers), the two maps trace the same large-scale structures across the sky. The ratio of the two values is fairly constant from low to mid Galactic latitudes, but large fluctuations are seen at higher latitudes, where the number of tracers drops considerably. The scatter plot shows good correlation between the two values up to an A0 of 1.5 mag, after which the EW862 rises more slowly than the TGE A0. This is a consequence of the fact that A0 traces asymptotic values of extinction, which (in the highly extinct regions) may occur beyond the distance of stars observed in DIB λ862 measurements. A straight-line fit to the scatter plot (broken line) below 1.5 mag results in a slope of 0.07 and an intercept of 0.03.
Article number, page 10 of 25
Spatial distribution of the DIB λ862
Figure 11 shows maps of the median values of the integrated EW 862 of the DIB λ862 for the whole HQ sample, taken from 0.1 kpc × 0.1 kpc bins in the XY, XZ, and YZ planes, respectively. Stellar photogeometric distances are those from Bailer-Jones et al. (2021). The overall distribution is similar to the pseudo-3D map (Kos et al. 2014) from RAVE data (Steinmetz et al. 2020), although the larger number of sight lines and the coverage of the whole sky with Gaia DR3 allow us to draw more specific conclusions.
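The mapping behind Fig. 11 amounts to converting (ℓ, b, d) to heliocentric Cartesian coordinates and taking the median EW in regular planar bins. A minimal sketch follows; the sign convention that puts the Galactic centre at (−8, 0, 0) kpc is one common choice and an assumption here, and `galactic_xyz`/`binned_median` are illustrative names:

```python
import numpy as np

def galactic_xyz(l_deg, b_deg, d_kpc):
    """Heliocentric Cartesian coordinates; the signs put the Galactic
    centre at (-8, 0, 0) kpc as in Fig. 11 (sign choices vary between works)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (-d_kpc * np.cos(b) * np.cos(l),
            d_kpc * np.cos(b) * np.sin(l),
            d_kpc * np.sin(b))

def binned_median(x, y, v, step=0.1, extent=4.0):
    """Median of v in step x step bins of the XY plane (0.1 kpc, as in Fig. 11)."""
    n = int(round(2 * extent / step))
    edges = np.linspace(-extent, extent, n + 1)
    ix = np.digitize(x, edges) - 1
    iy = np.digitize(y, edges) - 1
    grid = np.full((n, n), np.nan)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)   # drop out-of-range points
    for i, j in set(zip(ix[ok].tolist(), iy[ok].tolist())):
        grid[i, j] = np.median(v[ok & (ix == i) & (iy == j)])
    return grid
```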
First, we note that EW 862 increases with distance. This is expected, but it is a nice validation of our results, as this increase was not assumed when the measurements of the DIB λ862 were made. The two cross-sections perpendicular to the Galactic plane in Fig. 11 show that the DIB λ862 carriers are largely confined to the Galactic plane, as expected. We note that the regions with strong DIBs λ862 in two directions away from the plane (seen in the YZ cross-section) start locally and do not increase in intensity with distance. They therefore originate in clouds of DIB λ862 carriers which reside close to the Sun and cause DIB λ862 absorption in the spectra of all stars located behind them.
The XY panel of Fig. 11 suggests that stars within spiral arms generally show stronger EW 862 of the DIB λ862 carriers. This is true for the Scutum-Centaurus arm and for the Perseus arm. Our map lacks the reach needed to claim the same for the Outer arm, though an increase of the DIB λ862 intensity at a distance of ∼4 kpc in the Galactic anticentre direction agrees with this conjecture. The situation for the Local arm and the Sagittarius-Carina arm is more complicated: a region with strong DIBs λ862 at ℓ = 60° coincides with the spur between these two arms (indicated by the blue line in Fig. 11). However, there is also an indication of a region of strong DIBs λ862 in the opposite direction, at ℓ = 270°. This may indicate that DIBs λ862 fill the region between the Sagittarius-Carina and Local arms, with the exception of a large void around the Solar position. However, we note that we do not claim that the DIB carrier clouds reside within the spiral arms, as the presence of the Local Bubble around the Sun amplifies a general rise of EW with distance in any direction along the Galactic plane. A detailed investigation of the spatial distribution of DIB carriers is beyond the scope of this paper and will be discussed in Zhao et al. (in preparation).
Figure 12 compares the spatial distributions of the DIB λ862 and dust absorptions. We note that only 40% of the DIB λ862 sample has valid E(BP − RP) measurements, due to strong quality filtering in GSP-Phot. The comparison therefore only refers to the 55 080 sources in common and not to the whole DIB λ862 HQ sample shown in Fig. 11. The top panels show the distribution of the colour excess, and the bottom panels show the ratio between EW 862 and E(BP − RP), with the linear fit from Fig. 7 subtracted.
Two important results of Figs. 11 and 12 are that the spatial distributions of the DIB λ862 carriers and dust are qualitatively similar, but their ratio shows a pronounced lack of dust absorption for nearby sight lines. The red regions in the bottom panels of Fig. 12 demonstrate that the Local Bubble around the Sun, which contains very little dust, does not have a similarly low density of DIB λ862 carriers. This is confirmed by a median EW 862 ∼ 0.1 Å within the inner 150 pc from the Sun. To investigate the situation further, Fig. 13 shows a zoom into the 4 × 4 × 0.6 kpc rectangular box centred on the Sun for stars that have valid EW 862 and E(BP − RP) measurements. In addition, the positions of the nearby molecular clouds from Zucker et al. (2020) are indicated by dots: black for clouds within 100 pc of the plane and red for those at heights between 100 and 300 pc. It is encouraging to see that the molecular clouds at low Galactic heights are indeed at the head of strong DIB λ862 directions and dust absorptions in the XY plane. This suggests that the light from stars behind them passes through these clouds of simple molecules, dust, and DIB λ862 carriers, and so their volume-filling factor is large enough for this to happen. Similarly, molecular clouds at larger distances from the Galactic plane (red dots) seem to correspond to directions of enhanced dust absorption and DIB λ862 presence away from the plane.
We note that Figs. 11 and 12 are based on the assumption of a Gaussian profile for the DIB carrier. The profile of the DIB may be more complicated or may vary in shape; in some cases one may expect a superposition of absorptions originating in multiple clouds along the line of sight, but the EW 862 values we derive are not affected significantly, as long as the radial velocities of the DIB carriers and the profile variations are small compared to the width of the profile in our spectra with a moderate resolving power. The EWs we derive are always small, and so we are in a linear regime where the total value is a simple sum of the individual absorptions. In addition, the departures from the Gaussian profile caused by the superposition effect have been shown to be insignificant for the DIB λ862 by comparing the fitted EW with the integrated EW (Kos et al. 2013) and with the EW calculated from an asymmetric Gaussian function (Zhao et al. 2021a).
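The linear-regime argument above can be checked numerically: for two weak, velocity-shifted Gaussian absorptions the EW of the blend equals the sum of the individual EWs to better than a percent. The depths, widths, and shift below are illustrative numbers, not fits to real spectra:

```python
import numpy as np

# Two weak Gaussian optical-depth profiles; the flux of the blend is
# exp(-(tau1 + tau2)).  In the weak-line regime the blend EW should equal
# the sum of the individual EWs to within ~1%.
wl = np.linspace(8600.0, 8640.0, 4001)          # wavelength grid [A], 0.01 A step

def gauss_tau(tau0, mu, sigma):
    return tau0 * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

tau1 = gauss_tau(0.02, 8620.0, 1.8)             # cloud 1 (illustrative depth)
tau2 = gauss_tau(0.03, 8621.5, 1.8)             # cloud 2, shifted by 1.5 A

def ew(flux):                                   # equivalent width [A]
    return np.sum(1.0 - flux) * (wl[1] - wl[0])

ew_blend = ew(np.exp(-(tau1 + tau2)))
ew_sum = ew(np.exp(-tau1)) + ew(np.exp(-tau2))
```

The residual difference is of order the product of the two depths, so it grows quadratically once the lines stop being weak.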
Thanks to the large catalogue of DIB λ862 measurements and the better sampling of different sightlines, we can trace the spatial variation of EW 862 /E(BP − RP) (bottom panel in Fig. 12), which can be used as a tracer to reveal the local physical conditions, as in the work of Vos et al. (2011) for the Scorpius OB2 association. The ultimate goal would be to compare the densities of dust and the DIB λ862 carrier derived from extinction and EW, respectively. A series of works has carried out such a comparison for the dust (e.g. Capitanio et al. 2017; Rezaei Kh. et al. 2018, 2020; Lallement et al. 2014, 2019). No such attempt has been made so far for the DIB λ862.
A detailed analysis of the spatial co-location of molecular clouds and clouds of DIB λ862 carriers and interstellar dust, together with a study of their spatial filling factors, is beyond the scope of this paper and will be explored in the future.
The Local Bubble
Farhang et al. (2019) studied the low-density cavity known as the Local Bubble and found the presence of the DIB carriers at λ5797 and λ5780 in the bubble. Other detailed studies of the local ISM were obtained by Vergely et al. (2001, 2010) and Welsh et al. (2010). Figure 14 shows the distribution of the DIB λ862 carrier in the inner 300 pc volume with respect to the Sun, within 100 pc from the Galactic plane. In the left-hand panel, a clear asymmetry can be seen in the distribution of the DIB λ862, which is also seen in other DIB maps of the Local Bubble (see e.g. Farhang et al. 2019; Bailey et al. 2016), while in the inner 100 pc we see a homogeneous distribution of weak DIBs (EW < 0.05 Å).
The right-hand panel of Fig. 14 shows the correlation of the DIB λ862 of our sample with the dust extinction derived from E(BP − RP). Here, we see a clear linear relation in this extreme low-extinction region, even for very small EW (< 0.05 Å). However, a more detailed discussion of the behaviour of the DIB λ862 in the Local Bubble is beyond the scope of this paper.
Scale height
To characterise the vertical distribution of the carrier of the DIB λ862, we assume an exponential model and follow the straightforward method used in Kos et al. (2014). Following this approach, the DIB strength EW 862 at stellar distance d in a narrow latitude slab can be written as

EW 862 (d) = A [1 − exp(−d/d 0 )] + B,   (5)

where z 0 is the scale height, b is the Galactic latitude, d is the heliocentric distance, d 0 = z 0 /sin(|b|), A = ρ 0 z 0 /sin(|b|), and B is a small offset of our EW 862 values due to the fact that only sufficiently strong DIBs λ862 pass the selection criteria for the HQ sample. So that we can compare the data points at different latitudes, we follow Kos et al. (2014) and first normalise the curves in different latitude bins by the fitted parameters, (EW 862 − B)/A.
This normalised EW 862 is then fitted again by Eq. 5 in order to get the scale height z 0 . We refer to Kos et al. (2014) for more details (especially their Fig. 15). The normalised EW 862 with z > 0.4 kpc show an apparent offset due to the low quality of the fitting at large distances from the Galactic plane. Therefore, we only fit the data points with |z| ≲ 0.4 kpc by Eq. 5 and get z 0 = 133.15 +4.71 −4.32 pc, which is a smaller value than that derived by Kos et al. (2014). We note that we do not survey the same sample here and that Kos et al. (2014) had to resort to averaging DIB λ862 measurements from different stars, meaning that their sample may be influenced by systematic errors in the distance measurements available in the pre-Gaia era.
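The slab-model fit of Eq. 5 can be sketched with scipy on synthetic, noiseless sightlines. The latitude, ρ 0 , and B values below are illustrative assumptions; only the scale height matches the paper's best-fit value, and the check is simply that the fit recovers it:

```python
import numpy as np
from scipy.optimize import curve_fit

B_LAT = np.radians(8.0)          # a single latitude slab, |b| = 8 deg (illustrative)

def slab_ew(d, z0, rho0, B):
    """Exponential slab model of Eq. 5: EW = A (1 - exp(-d/d0)) + B, with
    d0 = z0/sin|b| and A = rho0*z0/sin|b| (symbols as in the text)."""
    s = np.sin(B_LAT)
    return rho0 * z0 / s * (1.0 - np.exp(-d * s / z0)) + B

# Noiseless synthetic sightlines with the paper's scale height (~133 pc);
# rho0 and B are illustrative values, not fit results from the paper.
d = np.linspace(0.1, 3.0, 30)                        # distances [kpc]
ew_synth = slab_ew(d, 0.133, 1.0, 0.05)
(z0_fit, rho0_fit, B_fit), _ = curve_fit(slab_ew, d, ew_synth, p0=(0.1, 1.0, 0.0))
```

With real data one would first normalise to (EW − B)/A per latitude slab, as described above, before the second fit for z 0 .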
Gaia makes an all-sky survey of DIBs λ862, which is not restricted to the Southern hemisphere and equatorial region, as is the case for RAVE. Using all available lines of sight (lower panel in Fig. 15), the fitted z 0 decreases to 98.69 +10.81 −8.35 pc. The uncertainties are small, so the difference may indicate a variation of the DIB λ862 scale height depending on the line of sight. This is consistent with the spatial distribution of the DIBs λ862 (see Fig. 6), where we notice, for example, a larger z 0 for the inner disc (|ℓ| < 30°). Our derived z 0 of the DIB λ862 carrier towards all available lines of sight with 4° ≲ |b| ≲ 12° is close to the scale height of the carrier of the DIB at 1.527 µm derived by Zasowski et al. (2015), z 0 = 108 ± 8 pc, but is slightly smaller than the scale height of the dust grains as measured by various authors, such as 134.4 ± 8.5 pc by Drimmel & Spergel (2001). The fit also yields B = 0.05 ± 0.01 Å. This allows the reader to use Eq. 5 as an estimate of the expected DIB λ862 carrier strength towards any star in the solar neighbourhood with 4° ≲ |b| ≲ 12°. The ratio of the measured EW 862 over the expected EW 862 has 16 and 84 percentile values of 0.66 and 1.30. A detailed characterisation of the DIB λ862 carrier extending beyond the symmetric models is needed to study local substructures in and out of the Galactic plane.
Rest-frame wavelength
One of the most important observational properties of the DIB λ862 is its central rest-frame wavelength (λ 0 ), which is necessary to identify the DIB λ862 carrier through comparison to laboratory measurements. A frequently used method is to use well-identified interstellar atomic or molecular lines to shift the whole spectrum to the rest velocity frame, assuming a tight correlation between the DIB λ862 and the interstellar lines (e.g. Jenniskens & Desert 1994; Galazutdinov et al. 2000b). Without the interstellar counterpart, λ 0 can also be statistically determined with the empirical assumption that the radial velocity in the Local Standard of Rest (LSR) towards the Galactic centre (GC) or the Galactic anticentre (GAC) is almost null (e.g. Munari et al. 2008; Zasowski et al. 2015; Zhao et al. 2021b).
We apply this statistical method for both the GC and the GAC by selecting targets with ∆ℓ ≲ 10°, |b| ≲ 2°, d ≲ 4 kpc, QF = 0, err(λ C ) < 1.0 Å, and valid stellar radial velocities. This provides 1405 stars for the GC and 1106 for the GAC. Figure 16 shows their measured central wavelengths in the heliocentric frame (C obs ) as a function of the angular distance from the GC and GAC, respectively. By a linear fit to the median values in each ∆ℓ = 1° bin, we get C obs = 8623.10 ± 0.018 Å at ℓ = 0° and C obs = 8623.54 ± 0.019 Å at ℓ = 180°. We stress that these are vacuum wavelengths, which means they are appropriate for Gaia observations. For the GAC, C obs increases with Galactic longitude, with a slope of 23 ± 3.4 mÅ deg −1 , while the longitude trend is flatter toward the GC, with a slope of 1.2 ± 3.1 mÅ deg −1 . Fitting with a more constrained longitude region, such as ∆ℓ ≲ 2°, yields very similar intercepts, that is, C obs = 8623.10 ± 0.016 Å at ℓ = 0° and C obs = 8623.52 ± 0.023 Å at ℓ = 180°. Nevertheless, both of the slopes toward the GC and GAC become larger and much closer to each other: 47 ± 14 mÅ deg −1 for the GC and 45 ± 20 mÅ deg −1 for the GAC. These slopes are also consistent with the value of 57 ± 8 mÅ deg −1 derived by Zasowski et al. (2015) for the DIB at 1.5273 µm, 47 mÅ deg −1 derived from the CO rotation curve (Clemens 1985), and 40 mÅ deg −1 derived from the stellar rotation curve (Bovy et al. 2012).
Considering the effect of solar motion, λ 0 in vacuum is derived as c/(c − U ) × C obs = 8623.41 Å for the GC, and c/(c + U ) × C obs = 8623.23 Å for the GAC, where c is the speed of light and U = 10.6 km s −1 (Reid et al. 2019) is the radial solar motion. The difference between them may be caused by a non-circular motion of the DIB λ862 carrier about the Galactic centre, which makes the LSR velocity non-zero. We believe this systematic effect is less pronounced in the direction of the GAC, and so we use this value to derive its counterpart wavelength in air, 8620.86 Å. This number agrees well with our previous result from the Giraffe Inner Bulge Survey (Zoccali et al. 2014) towards the GC (8620.83 Å; Zhao et al. 2021b). The value obtained in this work is slightly larger than the values of 8620.70 ± 0.3 Å (Sanner et al. 1978), 8620.75 Å (Herbig & Leka 1991), and 8620.79 Å (Galazutdinov et al. 2000a). The result of Jenniskens & Desert (1994), namely 8621.11 ± 0.34 Å, is very close to our result towards the GC (8621.03 Å in air). Based on 68 hot stars from RAVE, Munari et al. (2008) measured a mean C obs toward the GC of 8620.4 ± 0.1 Å, corresponding to λ 0 = 8620.70 Å after the solar-motion correction, which is also smaller than our result. Fan et al. (2019) obtained a much smaller λ 0 = 8620.18 ± 0.25 Å, an average over 17 of their programme spectra, measured in averaged optical-depth profiles and corrected by the interstellar K I line at 7699 Å. The lower quality of their spectra at longer wavelengths and the complex velocity structure of the atomic species could be the cause of the large difference between their result and others (Haoyu Fan, priv. communication).
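The two corrections above are short enough to reproduce numerically. The vacuum-to-air conversion below uses an Edlén-style standard-air dispersion formula (one common choice, e.g. as used by VALD; the paper does not state which formula it adopts), and the GAC intercept is the value quoted in the text:

```python
C = 299_792.458          # speed of light [km/s]
U_SUN = 10.6             # radial solar motion [km/s], Reid et al. (2019)

def vac_to_air(wl_vac):
    """Standard-air refractive index (Edlen-style dispersion formula);
    wl_vac in Angstroms."""
    s2 = (1.0e4 / wl_vac) ** 2
    n = 1.0 + 0.0000834254 + 0.02406147 / (130.0 - s2) + 0.00015998 / (38.9 - s2)
    return wl_vac / n

c_obs_gac = 8623.54                       # vacuum intercept at l = 180 deg (text)
lam0_vac = C / (C + U_SUN) * c_obs_gac    # solar-motion correction, GAC sign
lam0_air = vac_to_air(lam0_vac)           # ~8620.86 A, as quoted above
```

The GC case uses the opposite sign, c/(c − U), since the solar motion there shifts the observed wavelength the other way.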
Kinematics of the DIB carrier
Although most of the DIB carriers are unknown, they have proven to be a powerful tool for ISM tomography and can consequently probe the Galactic structure and interstellar environments. The most comprehensive kinematic study to date was performed by Zasowski et al. (2015) using APOGEE (SDSS-III) data, and allowed the authors to reveal the average Galactic rotation curve of the λ1527 DIB carrier spanning several kiloparsecs (kpc) from the Sun. They probed the DIB λ1527 carrier distribution in 3D and showed that DIBs λ1527 can be used to trace large-scale Galactic structures, such as the Galactic long bar and the warp of the outer disk. Zhao et al. (2021b) studied the kinematics of the DIB λ862 in the Galactic bulge using Gaia-ESO (Gilmore et al. 2012) and GIBS data (Zoccali et al. 2014). These authors concluded that the DIB λ862 carrier is located in the inner few kpc of the Galactic disk based on its rotation velocities and radial velocity dispersion. However, these studies are based on specific pencil beams with a limited number of objects. Figure 17 compares the DIB λ862 velocities with rotation curves computed by Model A5 in Reid et al. (2019) for different galactocentric radii (R GC ). For sightlines with ℓ ≳ 150°, the DIB λ862 velocities are consistent with the model rotation curves for R GC ∼ 9 kpc. On the other hand, for the inner disc with ℓ ≲ 30°, the DIB λ862 carrier is best represented by R GC ∼ 7.5 kpc, and is thus closer to the Sun. This is different from the findings of Zasowski et al. (2015), namely that the DIB λ1527 carrier in the inner Galaxy is farther from the Sun. Indeed, the inner-disc sample of these latter authors shows velocities higher than those of our sample by a factor of almost two. This is most likely due to the fact that APOGEE observes in the infrared and so probes the DIB λ1527 in the inner Galaxy up to larger distances compared to Gaia. The majority of stars in APOGEE are within ∼6 kpc from the Sun, while our sample is mostly confined to ∼2-3 kpc.
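Placing the DIB velocities on a longitude-velocity diagram such as Fig. 17 requires converting heliocentric radial velocities to the LSR by projecting the solar peculiar motion onto each line of sight. A minimal sketch follows; the solar-motion values are the commonly used Schönrich et al. (2010) ones, which are an assumption here, not necessarily the convention adopted in this paper:

```python
import numpy as np

# Solar peculiar motion [km/s], Schonrich et al. (2010) -- an assumed choice.
U0, V0, W0 = 11.1, 12.24, 7.25

def v_lsr(v_helio, l_deg, b_deg):
    """Project the solar peculiar motion onto the line of sight to convert a
    heliocentric radial velocity to the Local Standard of Rest."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return v_helio + (U0 * np.cos(l) + V0 * np.sin(l)) * np.cos(b) + W0 * np.sin(b)
```

For example, a cloud at ℓ = 90°, b = 0° with zero heliocentric velocity has V LSR ≈ +12.2 km s −1 , purely from the Sun's motion in the direction of rotation.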
Assuming a Galactic rotation model, Zhao et al. (2021b) demonstrated that kinematic distances of the DIB λ862 can be obtained, allowing the real 3D distribution of the DIB carrier to be traced. We plan to present this in a forthcoming paper.
Correlations between the DIB λ862 carrier and gas kinematics using different tracers, such as CO and HI, can provide additional clues as to the origin of the DIB λ862 carrier. Figure 17 shows one example, with a comparison to the 12 CO data from Dame et al. (2001). In the present study we use the momentum-masked cube restricted to the latitude range ±5°. We see that, in general, the DIB λ862 closely follows the CO gas pattern, especially in the Galactic anticentre region, while higher velocities are seen in CO for |ℓ| < 50°. This close relation between the DIB λ862 and the gas reinforces the suggestion that the DIB λ862 carrier could be related to macromolecules. We want to stress again that Gaia data allow us to discuss such a large-scale picture for the first time.
Conclusions
We present the largest sample of individual DIBs at 862 nm published to date, as obtained by the Gaia RVS spectrometer. This is the first homogeneous and all-sky survey of the DIB λ862, and it allows us to study the global properties of this DIB λ862 carrier in detail. Defining a high-quality sample, we demonstrate that DIBs at 862 nm show a tight relation with interstellar reddening such as E(BP − RP) or E(B − V). Despite the use of different algorithms in the measurement of DIBs at 862 nm between hot stars (T eff > 7000 K) and cool stars (T eff ≤ 7000 K), we see very similar relations between EW 862 and E(BP − RP), demonstrating the robustness of the DIB λ862 measurement. While we see similarities in the spatial distributions of the DIB λ862 carrier and the interstellar reddening, we also notice some differences, in particular that the scale height of the DIB λ862 carrier is smaller than that of the dust and that the DIB λ862 carrier is concentrated within the inner kpc from the Sun. A similar conclusion can be drawn from the comparison with the total Galactic extinction map. The main and most striking difference between the DIB λ862 carrier and dust distributions is that DIB λ862 carriers are present in the Local Bubble around the Sun, while this region is known to contain almost no dust. To first order, the spatial distribution of DIB λ862 carriers follows a simple slab model. We derive its local density and scale height, which can be used to predict the expected EW of the DIB λ862 towards any star up to ∼3 kpc from the Sun.
Taking advantage of the full-sky coverage of the DIB λ862, we determined the rest-frame wavelength of the DIB λ862 in the Galactic anticentre, with an estimated λ 0 = 8620.86 ± 0.019 Å in air. This is the most precise determination of λ 0 to date. We note that using a large number of sources diminishes the formal measurement errors and, more importantly, largely negates the systematic errors of unknown radial velocities of clouds of DIB carriers, which may influence any study based on a small number of sources. For the first time, we demonstrate here the Galactic rotation curve traced by the DIB λ862 carrier within 1-2 kpc from the Sun and reveal the remarkable correspondence between the DIB λ862 velocities and the CO gas velocities, reinforcing the suggestion that DIB λ862 carriers could be related to gaseous macromolecules.
5.2. EW 862 versus E(B − V)

E(B − V) is the most frequently used reddening indicator for studying the correlation with DIB strength, especially in early works. To compare our DIB-extinction relation to literature values, we derived the E(B − V)/EW 862 coefficients from three dust extinction maps: Planck Collaboration et al. (2016), Schlegel et al. (1998), and Green et al. (2019). We calculated E(B − V) from the three maps using the Python package dustmaps (Green 2018). Planck Collaboration et al. (2016) produced a full-sky two-dimensional extinction map using a generalised wavelet method to separate Galactic dust emission from cosmic infrared background anisotropies. Such E(B − V) values are asymptotic values and therefore represent overestimations for many of our objects (see Fig. 8 (b)). This also applies to Schlegel et al. (1998) (Fig. 8 (c)). Nonetheless, E(B − V) derived from both of these maps for our objects presents linear relations with EW 862 with very high Pearson coefficients. For both Planck Collaboration et al. (2016) and Schlegel et al. (1998), we limit E(B − V) to values smaller than 2.6 mag and get 121 627 and 123 175 individual measurements, respectively. We make use of 55 252 available E(B − V) values from GSP-Phot with a temperature difference between GSP-Spec and GSP-Phot of smaller than 5000 K. Limited by the sky coverage, only 93 247 objects have E(B − V) from Green et al. (2019), a three-dimensional dust reddening map inferred from 800 million stars with Pan-STARRS1 and 2MASS photometry. Following Schlafly & Finkbeiner (2011), we apply a recalibration factor of 0.884 to E(B − V) from Schlegel et al. (1998). We also use this factor to convert the reddening unit of Green et al. (2019) to E(B − V). We note that the three-dimensional nature of the dust reddening maps from GSP-Phot (Fig. 8 a) and from Green et al. (2019) (Fig. 8 d) negates the problem of overestimated E(B − V) values. Table 3 lists the E(B − V)/EW 862 coefficients and intercepts derived in this work together with values from the literature.
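The fitting procedure behind these coefficients (median E(B − V) in EW bins of 0.05 Å up to 0.5 Å, then a line through the bin medians, as described in the Fig. 8 caption) can be sketched as follows. The function name and the synthetic relation are illustrative, not the paper's values:

```python
import numpy as np

def binned_median_fit(ew, ebv, lo=0.0, hi=0.5, step=0.05):
    """Median E(B-V) in EW bins (0-0.5 A, 0.05 A steps, as in Fig. 8), then a
    least-squares line through the bin medians; empty bins are skipped."""
    edges = np.arange(lo, hi + step, step)
    centres, medians = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (ew >= a) & (ew < b)
        if sel.any():
            centres.append(0.5 * (a + b))
            medians.append(np.median(ebv[sel]))
    slope, intercept = np.polyfit(centres, medians, 1)
    return slope, intercept

# Synthetic check with an illustrative relation E(B-V) = 2.2 EW + 0.03
rng = np.random.default_rng(0)
ew_s = rng.uniform(0.0, 0.5, 5000)
slope, intercept = binned_median_fit(ew_s, 2.2 * ew_s + 0.03)
```

Fitting the bin medians rather than the individual points makes the slope robust against the asymmetric scatter that asymptotic extinction maps introduce.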
128 ± 0.062 from Planck Collaboration et al. (2016) and a low value of 2.198 ± 0.066 from Green et al. (2019). It is not surprising that different works report different values for the ratio of E(B − V)/EW 862 , depending on the sightlines studied and the techniques applied for the DIB and extinction measurements. The high coefficients with E(B − V) from Schlegel et al. (1998) and Planck Collaboration et al. (
Fig. 8. Correlations between EW 862 and E(B − V) derived from different extinction maps: (a) GSP-Phot, (b) Planck Collaboration et al. (2016), (c) Schlegel et al. (1998), and (d) Green et al. (2019). The colours in each panel show the target number per 0.01 Å × 0.02 mag bin. The colour bar is the same as in Fig. 7. The red circles are the median values taken in EW 862 bins from 0 to 0.5 Å with a step of 0.05 Å. The red lines are linear fits to the red dots in each panel. The fitting gradients (α) and their uncertainties are indicated; they are also listed in Table 3. The orange and violet dashed lines in (b) and (c) are the fit results for GSP-Phot and Green et al. (2019), respectively.
Fig. 9. E(BP − RP) vs. EW 862 of the DIB λ862 derived for the HQ sample by GSP-Spec for hot stars. The colour code follows the effective temperature derived by ESP-HS or GSP-Spec. The running median and interquantile range (15 to 85%) are represented by a black step curve and the shaded area, respectively. The relation derived for the cooler stars is shown by the broken blue line. Upper panel: reddening derived using the ESP-HS module for stars hotter than 7500 K; the outliers are identified with black circles and numbers. Middle panel: E(BP − RP) from GSP-Phot for targets hotter than 7000 K according to GSP-Spec only. Lower panel: E(BP − RP) from GSP-Phot for targets hotter than 7000 K according to both GSP-Spec and GSP-Phot. Numbered black circles denote the outliers discussed in the main text, with their parameters listed in Table A.1.
Fig. 10. Top left: EW 862 of the HQ sample for stars beyond the Galactic disk (|z| > 300 pc), averaged in each level-5 HEALPix. Grey pixels indicate no data, where there are fewer than two DIB λ862 measurements in the level-5 HEALPix. Top right: TGE A 0 at HEALPix level 5, again where grey signifies no data (i.e. where there are insufficient extinction tracers). Bottom left: ratio EW 862 /A 0 over the sky. Bottom right: density plot of EW 862 vs. TGE. The median EW 862 in regular TGE bins is shown as red points. The uncertainty bars are derived using the average absolute deviation around the median.
Fig. 11. Face-on and side-on views of the spatial distribution of the DIB λ862 for the whole HQ sample, plotted over the Milky Way sketch created by Robert Hurt and Robert Benjamin (Churchwell et al. 2009). Median EW 862 are taken from 0.1 kpc × 0.1 kpc bins in the XY, XZ, and YZ planes, respectively. The Galactic centre is located at (X, Y, Z) = (−8, 0, 0). The coloured lines represent the Galactic log-periodic spiral arms described by the parameters from Reid et al. (2019): Scutum-Centaurus arm, orange; Sagittarius-Carina arm, purple; Local arm, black; Perseus arm, green; Outer arm, cyan. The spur between the Local and Sagittarius-Carina arms is indicated by the blue line.
Fig. 12. Same as Fig. 11, but for E(BP − RP) from GSP-Phot (upper panel), and the ratio EW 862 /E(BP − RP) (lower panel), subtracting 0.22, the inverse of the linear gradient fitted in Fig. 7. Only 55 080 sources in the HQ sample with E(BP − RP) measurements are used.

Kos et al. (2014) applied this method for 20 latitude slabs from b = −20° to b = 20° with a bin size of 2° and obtained z 0 = 209.0 ± 11.9 pc. We only use eight slabs with moderate latitudes (−12° ≤ b ≤ −4° and 4° ≤ b ≤ 12°) which show exponential saturation, and take the median EW 862 in each 0.25 kpc bin from 0 to d = 3 kpc. To compare with the result of Kos et al. (2014), we first consider measurements with 240° ≤ ℓ ≤ 330° (upper panel in Fig. 15).
Fig. 15. Determination of the scale height of the λ862 carrier from the DIB measurements with 4° ≲ |b| ≲ 12°; upper panel: 240° ≤ ℓ ≤ 330°; lower panel: all available longitude directions. The data points in different latitude slabs are coloured according to the central latitude values (b 0 ). The dashed green line indicates z = 0.4 kpc. The red curve in the upper panel is the fit to data points with z ≤ 0.4 kpc, while in the lower panel the red curve is the fit to all the data points.
Fig. 16. Observed central wavelengths (C obs , in vacuum) of the DIB λ862 in the heliocentric frame as a function of the angular distance from the longitude centre (∆ℓ) for the Galactic centre (left panel) and the Galactic anticentre (right panel). The grey points are the individual measurements with their fitted uncertainties. The red dots are the median values taken in each ∆ℓ = 1° bin, with the standard deviation. The red lines are the linear fits to the red dots.
Fig. 17. Left panel: longitude-velocity diagram for the Gaia HQ DIB λ862 sample. The circles indicate the median V LSR and the standard uncertainty of the mean for each field. Velocity curves calculated by Model A5 in Reid et al. (2019) for different galactocentric distances (R GC ) are overplotted. Right panel: same as the left panel, but superimposed on the 12 CO data from Dame et al. (2001). The colour scale displays the 12 CO brightness temperature on a logarithmic scale, integrated over the velocity range.
Figure 17 demonstrates the enormous potential of Gaia for studying the kinematic behaviour of the DIBs λ862: it shows the Galactic rotation curve of the DIB λ862 carrier for |b| < 5° in bins of 10 degrees of Galactic longitude, with Galactic rotation curves computed by Model A5 in Reid et al. (2019) indicated.
Table 3. Coefficients and intercepts of the linear relations between DIB λ862 and E(B − V) derived in the literature and this work.
Genetic Diversity and Population Structure of Three Strains of Indigenous Tswana Chickens and Commercial Broiler Using Single Nucleotide Polymorphism (SNP) Markers
The Tswana chicken is native to Botswana and comprises strains such as the naked neck, normal, dwarf, frizzled, and rumpless. The origins of the different strains of Tswana chicken remain unknown, and it is not yet clear whether the different strains represent distinct breeds within the large Tswana chicken population. Genetic characterization of different strains of Tswana chickens using SNP arrays can elucidate their genetic relationships and ascertain whether the strains represent distinct breeds of the Tswana chicken population. The aim of this study was therefore to investigate population structure and diversity and to estimate genetic distances/identity between the naked neck, normal, and dwarf strains of Tswana chickens. A total of 96 chickens (normal strain (n = 39), naked neck strain (n = 32), dwarf strain (n = 13), and commercial broiler (n = 12)) were used in the study. SNP genotyping was carried out using the Illumina chicken iSelect SNP 60K BeadChip with the Infinium assay compatible with the Illumina HiScan SQ genotyping platform. The observed heterozygosity (H o ) values were 0.610 ± 0.012, 0.611 ± 0.014, and 0.613 ± 0.0006 for the normal, naked neck, and dwarf strains of Tswana chickens, respectively. Principal component analysis was used to assess the population structure of indigenous Tswana chickens. The first two principal components revealed a set of three clusters. The normal strain of Tswana chicken and the commercial broiler clustered together in one group. The dwarf strain clustered separately in one group, and the naked neck and normal strains clustered together in the last group. The separate clustering of the dwarf strain from the rest of the Tswana chicken strains suggests significant genetic uniqueness of the dwarf strain and very close genetic similarity between the normal and naked neck strains. The clustering pattern was confirmed by less genetic differentiation and smaller genetic distances between the naked neck and normal strains of Tswana chicken than between these two strains and the dwarf strain of Tswana chicken.
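The observed heterozygosity quoted above is, in its simplest form, the fraction of heterozygous SNP calls per individual averaged over a population. A minimal sketch follows; the 0/1/2 genotype coding and the −1 missing-call marker are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np

def observed_heterozygosity(genotypes):
    """Observed heterozygosity H_o: fraction of heterozygous SNP calls per
    individual (rows), averaged over individuals.  Genotypes are assumed
    coded as 0/1/2 copies of the reference allele (1 = heterozygote),
    with -1 marking a missing call."""
    g = np.asarray(genotypes)
    called = g >= 0                       # exclude missing calls per individual
    het = g == 1
    return float((het.sum(axis=1) / called.sum(axis=1)).mean())
```

Real analyses would typically also filter loci on call rate and minor allele frequency before computing H o.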
Introduction
Chickens have many distinct uses and benefits for households in different developing countries [1]. Indigenous Tswana chickens are one of the most important livestock species: they provide much of the protein consumed in the form of eggs and meat and improve the rural economy of subsistence farmers through sales of eggs as well as live birds. Chicken products (meat and eggs) are preferred by many people in rural areas due to their taste, leanness, palatability, and appropriateness for exceptional dishes [2] [3] [4]. Indigenous Tswana chickens contribute to food security in rural areas and also generate emergency cash income for women, since indigenous Tswana chickens are mostly owned by women. The Tswana chickens play a significant role in the sociocultural life of the rural population. Indigenous chickens also have roles in traditional ceremonies and other customs as gift payments [5]. Nonetheless, the growth rate of indigenous Tswana chickens is relatively low compared to the commercial broiler due to poor nutritional support, poor housing, poor health care, and lack of selection for growth potential under the scavenging management system [6].
Generally, indigenous chickens are kept in small flocks (2 to 20 chickens) of varied ages under a traditional scavenging management system with basic supplementary feeding, housing, and healthcare [6]. They possess important positive characteristics such as hardiness and the ability to tolerate harsh environmental conditions and poor husbandry practices (climate, handling, watering, and feeding) without much loss in production [7]. Indigenous chickens grow slowly and normally require up to 12 months to reach slaughter age [8], and age at first lay is approximately 7 months [9]. Desta [10] reported a mating ratio of 1 cock to 2 hens for the indigenous chicken population in Ethiopia, but the recommended mating ratio is 1 cock to 5-10 hens [9].
The dwarf, frizzled and rumpless strains are found at relatively low frequencies within the indigenous Tswana chicken population, and the normal strain is by far the most common strain [13].
Collection of Blood Samples
Blood samples were collected from the medial metatarsal vein, located on the leg of the chicken and well suited for puncture, using a 23-gauge, 1-in needle. The alternative site for blood collection was the brachial vein on the wings; for puncture, feathers in this area were plucked for smooth insertion of the needle into the vein.
DNA Extraction
24 µl of NucleoMag® B-Beads and 360 µl MB2 Buffer were then added to the square-well Block and mixed by pipetting up and down, shaking for 5 minutes at room temperature. Magnetic beads were then separated against the wells by placing the square-well block on the NucleoMag SEP magnetic separator for at least 2 minutes. The supernatant was then removed from the wells and discarded by pipetting. The square-well block was then removed from the NucleoMag SEP magnetic separator and 600 µl of MB3 buffer was added to each of the wells, accompanied by shaking to completely resuspend the beads. Magnetic beads were again separated against the wells by placing the square-well block on the Nuc-leoMag SEP magnetic separator for at least 2 minutes. The supernatant was again removed and discarded by pipetting. The square-well block was removed again from NucleoMag SEP magnetic separator. 600 µl of MB4 buffer was then added to each of the wells and the beads were resuspended by shaking for 5 minutes. Magnetic beads were again separated by placing the square-well block on the NucleoMag SEP magnetic separator for at least 2 minutes and supernatant was removed and discarded by pipetting. 900 µl of MB5 buffer was then added to each of the wells while the beads were still attracted to magnets. After an incubation period of 50 seconds, the supernatant was aspirated and discarded. The square-well block was then removed from the NucleoMag SEP magnetic separator. 50 µl of DNA elution buffer was then added to each of the wells and shaking for 10 minutes at 56˚C to resuspend the beads. Magnetic beads were again separated by placing the square-well block on the NucleoMag SEP magnetic separator for at least 2 minutes. The supernatant containing purified genomic DNA was then transferred to the elution plate for SNP genotyping.
SNP Genotyping and Data Preparation
SNP genotyping was carried out at the Agricultural Research Council-Biotechnology Platform in Pretoria according to previously described protocols.
Population Structure
A complete SNP data set with all four populations was filtered to remove SNPs that were on sex chromosomes or had their positions unmapped. Markers with missing data > 5%, a MAF ≤ 2%, or that were monomorphic were removed from the complete data set. SNPs that were in high linkage disequilibrium at a threshold of LD ≥ 0.2 were also filtered out of the complete data set. Individuals with missing genotypes of more than 5% and those that were closely related, as inferred by a kinship estimator ≥ 0.45, were also excluded from the analysis.
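For illustration, the marker-level filters just described (call rate, minor allele frequency, and monomorphic removal) can be sketched in plain Python. The 0/1/2 genotype coding and the function name are illustrative assumptions; the study itself performed this step with standard tooling rather than custom code:

```python
def snp_qc(genotypes, max_missing=0.05, min_maf=0.02):
    """Return indices of SNPs passing the call-rate and MAF filters.

    genotypes: one list per individual, each entry a per-SNP genotype
    coded 0/1/2 (alternate-allele count) or None for a missing call.
    """
    n_ind = len(genotypes)
    keep = []
    for j in range(len(genotypes[0])):
        calls = [ind[j] for ind in genotypes if ind[j] is not None]
        if not calls or 1 - len(calls) / n_ind > max_missing:
            continue  # missing data > 5%
        p = sum(calls) / (2 * len(calls))   # alternate-allele frequency
        if min(p, 1 - p) <= min_maf:        # MAF <= 2%, incl. monomorphic
            continue
        keep.append(j)
    return keep
```

Individual-level filters (missingness, kinship) and LD pruning would follow the same pattern, applied to the rows and to SNP pairs, respectively.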
A principal component analysis (PCA) was then performed to establish relationships among the different strains of Tswana chickens and the commercial broiler line using the Golden Helix SNP Variation Suite (SVS) version 8.1 [19]. Furthermore, the Admixture 1.23 software [20] was used to estimate the most probable number of ancestral populations based on the SNP genotype data, as described by [16] Khanyile et al. Admixture was run from K = 2 to K = 4, and the optimal number of clusters (K-value) was determined as that which had the lowest cross-validation error (CV error).
Population Differentiation and Genetic Distances
Pairwise identity by state (IBS) distances between all four chicken populations (naked neck, normal and dwarf strains of Tswana chicken and the commercial broiler) were calculated using PLINK v1.9. Genetic distances between the four populations were evaluated based on Nei's (1987) unbiased genetic distance using the R package [21]. To evaluate pairwise genetic differentiation, the fixation index Fst [22] was calculated for all pairs of chicken populations.
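As a minimal illustration of what the fixation index measures, Wright's basic two-population Fst at a single biallelic locus can be computed from allele frequencies. Note that this didactic form is not the sample-size-corrected estimator of [22] used in the study:

```python
def wright_fst(p1, p2):
    """Wright's Fst = (Ht - Hs) / Ht for one biallelic locus, given the
    frequency of one allele in each of two populations."""
    p_bar = (p1 + p2) / 2
    ht = 2 * p_bar * (1 - p_bar)                      # pooled expected het.
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop het.
    return 0.0 if ht == 0 else (ht - hs) / ht
```

Identical allele frequencies give Fst = 0 (no differentiation), while fixation for different alleles gives Fst = 1.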
Linkage Disequilibrium
Complete SNP data for the individual populations were filtered to remove SNPs on sex chromosomes or those that were not mapped, those with MAF ≤ 5%, those that deviated from Hardy-Weinberg equilibrium (HWE) (P ≤ 0.001), and individual chickens with missing genotypes (>5%) or very close kinship (IBD ≥ 0.45), using PLINK (v1.07) [18]. Pairwise r2 estimation was used to measure LD between pairs of SNPs within a chromosome and population using the PLINK (v1.07) program [18] for SNPs on chromosomes 1-28 that had passed the quality control tests detailed above. According to [23] Lu et al., the r2 measure, defined as the squared correlation coefficient of alleles at two loci, was chosen because it is independent of allele frequency. Briefly, its calculation considers two loci, A and B, each locus having two alleles (denoted A1, A2 and B1, B2, respectively) [24]. The frequencies of the haplotypes are denoted F11, F12, F21 and F22 for haplotypes A1B1, A1B2, A2B1 and A2B2, respectively, and FA1, FA2, FB1 and FB2 for the A1, A2, B1 and B2 alleles, respectively. From this, r2 = (F11F22 − F12F21)2 / (FA1FA2FB1FB2). As noted by [16], PLINK by default only reports r2-values above 0.2; to allow reporting of all r2-values observed in the populations, the --ld-window-r2 0 option was used. Additional options, --ld-window 5000 --ld-window-kb 10000, as described by [16] Khanyile et al., allowed estimation of r2 for SNP marker pairs separated by at most 5000 SNPs and within a 10 MB interval.
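The r2 definition above follows directly from the haplotype and allele frequencies; the function below is a didactic sketch (illustrative names, not the PLINK implementation):

```python
def ld_r2(f11, f12, f21, f22):
    """r^2 between two biallelic loci from the haplotype frequencies
    F11, F12, F21, F22 (haplotypes A1B1, A1B2, A2B1, A2B2)."""
    fa1, fa2 = f11 + f12, f21 + f22   # allele frequencies at locus A
    fb1, fb2 = f11 + f21, f12 + f22   # allele frequencies at locus B
    d = f11 - fa1 * fb1               # LD coefficient D = F11*F22 - F12*F21
    denom = fa1 * fa2 * fb1 * fb2
    return 0.0 if denom == 0 else d * d / denom
```

Complete LD (only A1B1 and A2B2 haplotypes present) yields r2 = 1, and linkage equilibrium (haplotype frequencies equal to the products of allele frequencies) yields r2 = 0.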
Effective Population Size
The effective population size trends were estimated using the procedure described by [16] Khanyile et al. Briefly, the relationship between Ne, recombination frequency, and expected LD (r2) was determined using the equation from [25] Corbin et al. shown in Formula (2): E[r2adj] = 1/(α + 4Nec), where α = 1 when assuming no mutation and 2 if mutation was considered, r2adj = r2 − 1/(2n), c was the recombination rate, and n was the chromosomal sample size. The effective population size Ne, at 1/2c generations ago, was estimated from the adjusted r2adj values related to a given genetic distance d in Morgans, assuming c = d [24]. For each pair of SNPs on each chromosome, the recombination rate was estimated by converting the physical marker interval length xi (Mb) to the corresponding genetic length ci using the formula ci = ωixi, where ωi is the average ratio of Morgans per kilobase pair on chromosome i, taken from the physical lengths of the chicken genome v74 [26]. The genetic lengths of chromosomes were adopted from [27]. The r2-values range from 0 to 1, whereby a value of zero indicates uncorrelated SNPs while a value of one reflects SNPs that are perfectly correlated [24]. The trends in effective population size for each of the defined subpopulations were then estimated by setting bins at 10, 20, 40, 60, 100, 200, 500, 1000, 2000 and 5000 kb. The bins were designed to cover the genome in tens, hundreds, thousands and hundreds of thousands of base pairs.

These findings are in agreement with [28] Al-Atiyat and Abudabos, who reported higher gene diversity in indigenous chickens of Jordan than in Ross broiler chickens (He of 0.54 vs 0.09). Higher genetic diversity in indigenous Tswana chickens than in commercial broiler chickens might be due to the inherent traditional breeding practices of natural and random mating of indigenous chickens. Indigenous Tswana chickens are also not subjected to intensive selection for various traits of economic importance, which tends to promote diversity rather than uniformity.
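The Ne estimation described in this section can be sketched by back-solving the Corbin et al. relation E[r2adj] = 1/(α + 4Nec). The finite-sample correction r2adj = r2 − 1/(2n) used below is an assumption reconstructed from this section and should be checked against [25] Corbin et al.:

```python
def effective_population_size(r2, c, n, alpha=1.0):
    """Estimate Ne at roughly 1/(2c) generations ago.

    r2    : mean observed r^2 for marker pairs at recombination rate c
    c     : recombination rate between the marker pairs (Morgans)
    n     : chromosomal sample size (finite-sample correction)
    alpha : 1 assuming no mutation, 2 if mutation is considered
    """
    r2_adj = r2 - 1.0 / (2 * n)          # sample-size-adjusted r^2
    # Solve E[r2_adj] = 1/(alpha + 4*Ne*c) for Ne
    return (1.0 / r2_adj - alpha) / (4.0 * c)
```

For example, a mean r2 of 0.105 at c = 0.01 Morgans with n = 100 chromosomes gives r2adj = 0.1 and Ne = (1/0.1 − 1)/(4 × 0.01) = 225.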
Lower genetic diversity in commercial broiler compared to indigenous Tswana chickens might be due to artificial selection for traits of economic importance such as meat production [28].
Basic Population Genetic Parameters
Intensive selection during the development of commercial broiler chickens reduced diversity and increased uniformity, partially as a result of inbreeding.
The minor allele frequencies (MAF) are also presented in Table 2.
Population Structure
Principal component analysis (PCA) was used to get an insight into the population structure of indigenous Tswana chickens. The first two principal components revealed a set of three clusters.
Admixture Analysis
The graphic results of the clustering analysis for K = 2 to 4 are illustrated in the admixture plots. The lowest cross-validation error was observed at K = 3, which represented the number of ancestral populations in the indigenous Tswana chicken strains and the commercial broiler strain (Figure 3).
Population Differentiation (FST)
Pairwise population differentiation (FST) was calculated from the filtered SNPs to investigate differentiation among the different strains of Tswana chickens. FST values are shown in Table 3. Generally, genetic differentiation and genetic distances were higher between the dwarf strain and the other strains of Tswana chicken than between the naked neck and normal strains.
Linkage Disequilibrium (LD) Estimates and the Effect of Strain
A summary of r2 values for the 28 chicken autosomal chromosomes in the three strains of Tswana chickens and the commercial broiler chicken is shown in Table 4. Consistent with [16] Khanyile et al., the current study also indicates that evolutionary forces affecting LD act differently on different chromosomes and different strains. Commercial broiler chicken had higher LD compared to the three strains of indigenous Tswana chickens, probably because of the effects of artificial selection for higher meat yield. On the other hand, natural selection could be a major evolutionary force in the three strains of Tswana chickens raised under free-running management systems with minimal artificial selection [16]. There was no significant difference in LD between the normal and naked neck strains of Tswana chickens. However, the two strains of Tswana chickens had significantly lower LD than the dwarf strain of Tswana chicken. Of the four chicken strains, the commercial broiler chicken had significantly higher LD compared to the three strains of Tswana chickens. Higher LD in commercial broiler compared to the three strains of Tswana chickens is consistent with [16] Khanyile et al., who found significantly higher LD in conservation flocks compared to village chicken populations kept by smallholder farmers. Differences in LD between the commercial broiler and the three strains of Tswana chickens could be due to their different evolutionary histories under the influence of random genetic drift, selection, and mutations [16]. The dwarf strain of Tswana chicken had higher LD across the 28 autosomal chromosomes compared to the normal and naked neck strains of Tswana chickens. Higher LD in the dwarf strain compared to the naked neck and normal strains is consistent with its smaller effective population size and lower diversity.
Trends in Effective Population Size (Ne)
Trends in effective population size (Ne) were plotted for each population. In comparison with the three strains of indigenous Tswana chickens, the commercial broiler chicken had higher Ne values at all generations than the dwarf strain. The LD patterns are consistent with the effective population size and diversity patterns in the commercial broiler and the three strains of Tswana chickens. Generally, higher LD patterns are associated with lower effective population sizes and lower diversity in the populations.
Conclusion
The naked neck, normal and dwarf strains of Tswana chicken had similar, moderate genetic diversity measures (observed and expected heterozygosity), which were significantly higher than those of the modern commercial broiler chicken.
Modulation of pulmonary blood flow in patients with acute respiratory failure
Background
Impairment of ventilation and perfusion (V/Q) matching is a common mechanism leading to hypoxemia in patients with acute respiratory failure requiring intensive care unit (ICU) admission. While ventilation has been thoroughly investigated, little progress has been made to monitor pulmonary perfusion at the bedside and treat impaired blood distribution. The study aimed to assess real-time changes in regional pulmonary perfusion in response to a therapeutic intervention.
Methods
Single-center prospective study that enrolled adult patients with ARDS caused by SARS-Cov-2 who were sedated, paralyzed, and mechanically ventilated. The distribution of pulmonary perfusion was assessed through electrical impedance tomography (EIT) after the injection of a 10-ml bolus of hypertonic saline. The therapeutic intervention consisted in the administration of inhaled nitric oxide (iNO), as rescue therapy for refractory hypoxemia. Each patient underwent two 15-min steps at 0 and 20 ppm iNO, respectively. At each step, respiratory, gas exchange, and hemodynamic parameters were recorded, and V/Q distribution was measured, with unchanged ventilatory settings.
Results
Ten patients, 65 [56–75] years old, with moderate (40%) and severe (60%) ARDS were studied 10 [4-20] days after intubation. Gas exchange improved at 20 ppm iNO (PaO2/FiO2 from 86 ± 16 to 110 ± 30 mmHg, p = 0.001; venous admixture from 51 ± 8 to 45 ± 7%, p = 0.0045; dead space from 29 ± 8 to 25 ± 6%, p = 0.008). The respiratory system's elastic properties and ventilation distribution were unaltered by iNO. Hemodynamics did not change after gas initiation (cardiac output 7.6 ± 1.9 vs. 7.7 ± 1.9 L/min, p = 0.66). The EIT pixel perfusion maps showed a variety of patterns of changes in pulmonary blood flow, whose increase positively correlated with the PaO2/FiO2 increase (R2 = 0.50, p = 0.049).
Conclusions
The assessment of lung perfusion is feasible at the bedside and blood distribution can be modulated with effects that are visualized in vivo. These findings might lay the foundations for testing new therapies aimed at optimizing the regional perfusion in the lungs.
Introduction
Hypoxemia due to impairment of pulmonary ventilation and perfusion (V/Q) matching is one of the most common causes of respiratory failure requiring intensive care unit (ICU) admission [1]. Over the past decades, the safety of mechanical ventilation increased significantly, leading to increased survival of ventilated ICU patients [2,3]. On the contrary, little progress has been made to monitor pulmonary perfusion and treat impaired pulmonary blood flow [4,5].
The first attempts to describe vascular deformation, microembolism, capillary leakage, and impaired vasoreactivity in patients with respiratory failure date back to the early 1970s [4,6]. Thereafter, also the use of high positive end-expiratory pressure (PEEP), and the consequent regional alveolar overdistension, has been shown to cause a "stress failure of pulmonary capillaries", a condition well described by the physiologist John West [7,8]. Of note, most of those studies were based on lung biopsy samples and post-mortem analysis.
Subsequently, in 1981 Dr. Reginald Green and Dr. Warren Zapol studied pulmonary artery filling defects in patients admitted to the ICU with acute respiratory failure by using balloon occlusion pulmonary angiography (BOPA) at the bedside [9]. The Authors concluded with a question that still, after forty years, remains unanswered: "Could reduced lung damage and improved survival be achieved by using bedside angiographic findings to select patients in acute respiratory failure for early anticoagulant or antithrombotic treatment of vaso-occlusive disease?" [9]. Such findings were based on vascular occlusion and contrast media injection, and blood flow distribution in the lungs was difficult to assess at a regional level [10].
Recently our group implemented the use of electrical impedance tomography (EIT), a non-invasive, radiation-free tool that has been used to assess, through saline injection, blood perfusion distribution in the lungs at the bedside [11][12][13]. In this study, we hypothesized that changes in regional pulmonary blood flow distribution can be quantified as a result of a therapeutic intervention in critically ill patients with acute respiratory distress syndrome (ARDS). We used inhaled nitric oxide (iNO) as a therapeutic intervention to induce a redistribution of blood perfusion in the lungs [14]. Inhaled nitric oxide is a potent, fast-acting, selective pulmonary vasodilator that can improve oxygenation by releasing pulmonary vasoconstriction and thus increasing perfusion in ventilated regions [15]. The role of iNO in the treatment of acute respiratory failure has been limited so far to a "rescue" therapy in patients with pulmonary hypertension, right ventricular dysfunction, and/or refractory hypoxemia [16].
To date, the clinical response to iNO is generally evaluated by measuring gas exchange after the initiation of iNO administration. In the current study performed in critically ill patients with ARDS, we assessed changes in regional pulmonary perfusion in response to iNO and, simultaneously, assessed oxygenation, systemic and pulmonary hemodynamics, venous admixture, and dead space.
Materials and methods
This is a single-center prospective study that enrolled patients admitted to the ICU of the ASST Grande Ospedale Metropolitano Niguarda (Milan, Italy) with ARDS caused by SARS-Cov-2 between March and May 2021. The study was approved by the institutional review board of Milano Area 3 (approval number 56-11022021) and informed consent was obtained according to local regulations. The inclusion criteria were age ≥18 years old, bilateral pneumonia and ARDS defined according to the Berlin criteria [17], and the clinical decision to administer iNO as rescue therapy in the presence of refractory hypoxemia (defined as an arterial partial pressure of oxygen [PaO2] of less than 60 mmHg, an FiO2 of 0.8-1.0, and a positive end-expiratory pressure [PEEP] of greater than 10 cmH2O for more than 6 h) [18,19]. Patients were excluded in the presence of a cardiac pacemaker and/or implantable defibrillator, and with major skin lesions on the chest wall.
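The refractory-hypoxemia inclusion threshold can be written as a simple predicate. This is a sketch; the function name and argument units (mmHg, cmH2O, hours) are assumptions for illustration:

```python
def refractory_hypoxemia(pao2_mmhg, fio2, peep_cmh2o, duration_h):
    """True when the study's rescue-iNO criterion is met: PaO2 < 60 mmHg
    with FiO2 0.8-1.0 and PEEP > 10 cmH2O for more than 6 hours."""
    return (pao2_mmhg < 60
            and 0.8 <= fio2 <= 1.0
            and peep_cmh2o > 10
            and duration_h > 6)
```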
Study procedures
Patients were sedated, paralyzed, and mechanically ventilated in volume-controlled mode (Draeger Evita V800, Draeger Medical, Lübeck, Germany). Patients were in the supine position and trunk inclination was not modified during the study [20]. Ventilatory parameters were set by the clinical team and kept constant throughout the study period.
Study data were retrieved before and 15 min after initiation of iNO at the dose of 20 ppm (NOxBOX, Ltd, UK). Responsiveness to iNO was defined as a >20% increase of PaO2/FiO2 after 15 min of gas administration. Hemodynamic parameters were assessed in real-time by means of thermodilution through the Pulse Contour Continuous Cardiac Output (PiCCO®, Pulsion Medical System SE, Feldkirchen, Germany). The images obtained by chest computed tomography (CT) scan performed in the closest timeframe, before or after the study day, were collected.
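The responsiveness criterion can likewise be expressed directly (an illustrative sketch, not part of the study's analysis code):

```python
def ino_responder(pf_before, pf_after):
    """True when PaO2/FiO2 rises by more than 20% after 15 min of iNO,
    the responsiveness definition used in the methods."""
    return pf_after > 1.2 * pf_before
```

With the study's mean values (86 to 110 mmHg), the increase is about 28%, i.e., a responder by this definition.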
The distribution of ventilation and perfusion in the lungs was assessed using electrical impedance tomography (EIT, Enlight 1800, Timpel, Sao Paulo, Brazil). Lung perfusion was recorded after injection of a 10 ml bolus of 7.5% hypertonic saline through a central venous catheter. At each study step (i.e., 0 and 20 ppm iNO), we recorded the following.
Recording of ventilation and perfusion distribution in the lungs
Ventilation and perfusion distribution through EIT was reported as the ratio between anterior and posterior percentage distribution.
EIT data analysis
We measured the lung impedance variation related to ventilation (ΔZV) and perfusion (ΔZQ) before and 15 min after iNO. Lung perfusion was assessed by the first-pass kinetic method after injecting 10 ml of 7.5% hypertonic saline solution [21]. The perfusion distribution at the pixel level was corrected by cardiac output (CO). The EIT analysis was performed: 1) by splitting the lung image into two gravitational regions of similar height, defined as anterior and posterior (e.g., the anterior lung region is the non-dependent region in the supine position), and 2) at the pixel level (from the EIT matrix containing 32 × 32 pixels). We excluded pixels corresponding to the heart area detected by the first-pass kinetic method and pixels with changes <5% of the pixel with maximum ΔZV or ΔZQ.
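Since the exact cardiac-output correction formula is not reproduced in the text above, the sketch below assumes a plausible normalization: each pixel's perfusion-related impedance change is scaled so that pixel flows sum to the thermodilution cardiac output. This is an assumption for illustration, not the paper's published formula:

```python
def pixel_perfusion(dz_q, cardiac_output):
    """Distribute the measured cardiac output (L/min) across pixels in
    proportion to their perfusion impedance change (assumed formula).

    dz_q: dict mapping pixel id -> perfusion signal (Delta-Z_Q).
    """
    total = sum(dz_q.values())
    return {px: cardiac_output * z / total for px, z in dz_q.items()}
```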
The ventilation/perfusion mismatch was estimated by summing the percentage of ventilated and non-perfused pixels (suggesting dead space) and perfused but non-ventilated pixels (suggesting shunt). We mapped the positive changes in perfusion distribution at the pixel level, considering pixels with changes bigger than 20% after iNO. The cutoff of 20% was established similarly to the criteria of iNO responsiveness.
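The pixel-level mismatch estimate described above can be sketched as follows, reusing the 5% exclusion threshold from the EIT analysis; the data layout (parallel per-pixel lists) is an illustrative assumption:

```python
def vq_mismatch(dz_v, dz_q, threshold=0.05):
    """Percentage of dead-space-like (ventilated, non-perfused) plus
    shunt-like (perfused, non-ventilated) pixels among retained pixels.

    dz_v, dz_q: parallel per-pixel ventilation and perfusion signals.
    A pixel counts as ventilated/perfused when its signal reaches 5% of
    the maximum pixel for that signal; pixels below both cutoffs are
    excluded, as in the methods.
    """
    v_cut = threshold * max(dz_v)
    q_cut = threshold * max(dz_q)
    dead_space = shunt = included = 0
    for v, q in zip(dz_v, dz_q):
        if v < v_cut and q < q_cut:
            continue                  # excluded pixel
        included += 1
        if q < q_cut:
            dead_space += 1           # ventilated, not perfused
        elif v < v_cut:
            shunt += 1                # perfused, not ventilated
    return 100.0 * (dead_space + shunt) / included
```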
Statistical analysis
The normality of data distribution was tested using the Shapiro-Wilk test. Normally distributed data are expressed as means ± SD, whereas non-normally distributed data are expressed as median and interquartile range. Categorical variables are expressed as count (n) and percentage (%). The presence of outliers was assessed during the evaluation of the distribution of data; however, no action was foreseen. The offline processing of perfusion distribution was performed after blinding data about gas exchange. Continuous variables were compared before and 15 min after iNO administration through a paired t-test if normally distributed or through the Wilcoxon signed-rank test if not normally distributed. A linear regression model was implemented, with either the increase in perfusion or V/Q mismatch changes as continuous predictors and PaO2/FiO2 changes as the continuous outcome. R-squared was computed. A p < 0.05 was deemed statistically significant. Statistical analysis was performed using GraphPad Prism (version 8.4, GraphPad Software, San Diego, California, USA) and STATA (version 13.0, StataCorp, Texas, USA).
Respiratory mechanics, gas exchange, and EIT ventilation distribution
The elastic properties of the respiratory system and the distribution of ventilation were unaltered by the initiation of iNO at 20 ppm. Specifically, driving pressure did not change significantly after iNO administration (13 ± 3 vs. 14 ± 3 cmH2O, p = 0.09) and EIT showed similar regional ventilation distribution before and after gas administration (1.02 ± 0.41 and 1.10 ± 0.50, anterior/posterior %, p = 0.39). Fig. S1 (Supplemental Digital Content 1) shows EIT ventilation distribution maps along with a chest CT scan performed, in median, two days before the study day.
Taken together, the respiratory and hemodynamic results confirm prior findings suggesting that iNO breathing increases oxygenation by improving overall ventilation/perfusion matching. EIT pixel perfusion maps showed a variety of patterns of changes in pulmonary blood flow after starting iNO, unveiling, at the bedside, the heterogeneity of vasculature responses in ARDS patients. The modifications in perfusion distribution induced by iNO are shown in the EIT pixel perfusion maps reported in Fig. 1. Fig. S1 displays the diversity of CT-morphological presentation, the EIT ventilation, and EIT pixel perfusion maps. Quantification of V/Q mismatch and positive perfusion changes is reported in Table 3, with respect to venous admixture, dead space, and PaO2/FiO2 for each patient. Finally, to assess the clinical relevance of the EIT perfusion maps, the pixel increase of regional pulmonary flow was compared to the systemic oxygenation response measured as the change in PaO2/FiO2 and, despite the few observations, a modest positive correlation was found (R2 = 0.50, p = 0.049) (Fig. 2).
Discussion
This study assessed the distribution of blood perfusion in the lungs utilizing a radiation-free and non-invasive tool in a cohort of patients with ARDS admitted to the ICU. The administration of a selective pulmonary vasodilator led to a redistribution of blood perfusion in the lungs, without affecting the main hemodynamic parameters. Such a modulatory effect was assessed at the bedside and visualized in vivo.
This proof-of-concept study shows that pulmonary blood perfusion can be modulated at the bedside in patients with respiratory failure admitted to the ICU. Of note, blood redistribution occurred in the lungs in every patient enrolled after starting iNO, regardless of the clinical response (oxygenation). This finding suggests that iNO is able to modulate pulmonary perfusion at the regional (pixel) level. However, such an effect does not necessarily translate into an improvement in gas exchange, which in fact reflects the overall lung function. The possibility to investigate at the bedside blood distribution in the lungs will hopefully provide more insights to understand how iNO and other vasoactive therapies work in patients with respiratory failure, not only due to ARDS.
Since its first description in 1967 by Ashbaugh and colleagues [39], ARDS has been extensively investigated over the last decades, being a leading cause of acute respiratory failure that requires ICU admission [1]. The main pathological features of ARDS are diffuse alveolar damage and high-permeability pulmonary edema, leading to a disorder of both ventilation and perfusion [40]. However, while ventilation has been thoroughly studied and multiple approaches to tailor mechanical ventilation at the bedside have been proposed, perfusion derangement has been poorly assessed, especially with methods easily applicable and replicable in the ICU.
The earlier techniques to assess the regional distribution of blood in the lungs were based on radioactive tracers, such as low-solubility elements like 133Xe and 99Tc, administered intravenously. The analysis of perfusion required a gamma camera to generate two-dimensional images of lung distribution during a breath-holding maneuver [41]. Such precursor methods have been largely replaced by scanning techniques [36]. Magnetic resonance imaging (MRI), dual-energy computed tomography scan [37,38], positron emission tomography (PET), and single-photon emission computed tomography (SPECT) are examples of techniques currently used, however, mainly for research purposes [22][23][24].
The multiple inert gas elimination technique (MIGET) is a different approach that explores the distribution of ventilation-perfusion ratios in patients with lung disease by infusing a mixture of dissolved inert gases with different solubilities into a peripheral vein and then measuring the concentrations of the gases in arterial blood and expired gas. Still, the MIGET is barely applicable outside of a research context, being cumbersome and technically challenging [25].
The first attempt to assess lung perfusion at the bedside relies on Dr. Green and Dr. Zapol's studies in the early 1980s. Being interested in the assessment of vascular lesions occurring during acute respiratory failure, the authors implemented a smart method to study the pulmonary vasculature directly at the bedside, i.e., balloon occlusion pulmonary angiography (BOPA) [9]. A pulmonary artery catheter, used for hemodynamic monitoring, was placed, a contrast agent was injected through the distal port, and pulmonary angiography was then performed through a mobile radiographic unit. For the first time, pulmonary artery filling defects were detected at the bedside and compared with post-mortem histologic studies. This landmark study initiated the quest for a therapy to improve lung perfusion in the setting of ARDS that could be monitored at the bedside. Different drugs have been proposed and tested, such as anticoagulant and antithrombotic agents, systemic vasodilators such as nitroprusside, prostaglandins and phosphodiesterase inhibitors, and selective pulmonary vasodilators such as iNO and inhaled prostacyclins [26].
In this study, we showed that it is possible to modulate blood perfusion in the lungs while visualizing the effects of such modulation in vivo at the bedside. We used EIT as an easily applicable, non-invasive, bedside tool in patients admitted to the ICU, differently from other scanning techniques where patients need to be transferred to dedicated research labs. Among perfusion-modulating drugs, we chose iNO as it is relatively safe and commonly used in the ICU setting in patients with refractory hypoxemia. Moreover, iNO is a pulmonary selective vasodilating drug with no impact on systemic hemodynamics, whose effects are immediately evident once the gas is administered, and rapidly vanish once the gas is discontinued [14,27,28]. To the best of our knowledge, only a recent case report explored the possibility to modulate and visualize pulmonary perfusion at the bedside [29]. However, no respiratory and hemodynamic data nor quantitative EIT measures were provided.
Some results of our study deserve to be discussed. We found 60% iNO responsiveness, defined in terms of PaO2/FiO2 ratio increase, which is consistent with other data reported in the literature [30]. However, the EIT perfusion maps shown in Fig. 1 demonstrated that changes and redistribution of blood in the lungs occur in every patient, regardless of their clinical response. Therefore, it is reasonable to think that the classic clinical endpoints do not reflect what we found at the pixel level. Of note, the parameter that seemed to correlate the most with gas exchange improvement was the positive perfusion change (Fig. 2), which is consistent with regional improvements in V/Q matching. Another important finding is that hemodynamics, specifically cardiac output, and the other thermodilution-derived parameters remained constant before and after iNO administration. The absence of effects at the macro-hemodynamic level corroborates the hypothesis that our intervention (i.e., iNO administration) modulated regional lung perfusion, without interfering with right heart function [31]. The in vivo assessment of blood distribution in the lungs has a strong clinical implication as it might facilitate the development of therapies that impact lung perfusion. In fact, while research efforts have been largely focused on ventilation, little progress has been made on the perfusion side, the latter being more difficult to measure and modify. The possibility of visualizing both lung ventilation and perfusion at the bedside might encourage the scientific community to find strategies to improve ventilation/perfusion matching, which is ultimately the main determinant of respiratory gas exchange [32].
This study has some strengths. The prospective design guarantees the robustness of both methods and data analysis, as the same study protocol was rigorously applied to all patients. The real-time hemodynamic measurement through transpulmonary thermodilution allowed a quantitative assessment of blood distribution in the lungs and the estimation of the possible effects of iNO on right heart function. Of note, ventilatory settings were unchanged throughout the study steps. Therefore, all modifications that we recorded should be ascribed only to perfusion changes due to iNO, since no lung recruitment/derecruitment was possible. Finally, the study has novelty, as we focused on and manipulated regional perfusion in the lungs, which was quite difficult before, due to the many technicalities of the available imaging procedures.
We also acknowledge some limitations of the study.We enrolled a relatively small number of patients.However, as in other similar physiological studies, ten patients were enough to detect a signal to test our hypothesis.Most patients had class II obesity and were critically ill, as shown by the relatively high SAPS II and SOFA scores, which might hamper the generalizability of our results.However, the decision to administer iNO was clinical and related to the severity of each patient's condition.In our study, we did not use a pulmonary artery catheter, thus no values of pulmonary arterial pressure were retrieved.However, CVP and CO measurements suggest that right heart function was not significantly influenced by iNO in our cohort of patients.Also, differently from a previous report [29], we did not find any significant change in perfusion between the anterior and the posterior lung regions.However, the redistribution of blood flow related to iNO is not dependent on the gravitational forces, and this is particularly true in ARDS where lung impairment is not homogeneous.We believe that the assessment of the redistribution of perfusion was only possible by mapping the chest at the pixel level, as we showed in Fig. 1.All patients included in the study had ARDS caused by SARS-Cov-2 infection, which might limit the generalizability of our findings.However, several studies have investigated differences and similarities between SARS-Cov-2 and typical ARDS, which is by definition a very heterogeneous disorder [33][34][35].Finally, we acknowledge that most patients were studied 10-15 days after intubation when ARDS might have progressed towards a higher degree of edema, inflammation, and fibrosis of the lung parenchyma, thus potentially impacting on perfusion distribution.Treatment with iNO was started as a rescue therapy after the failure of standard interventions, such as high PEEP levels, neuromuscular blockers, and prone positioning [42].
Demonstrating any clinical or biological benefit of iNO is beyond the aims of this study. We believe that animal studies should test whether iNO improves pulmonary endothelial function, modulates inflammation, and prevents intrapulmonary thrombosis.
Future studies should also test whether iNO can reduce inflammation by diverting blood flow from the most injured areas to less injured ones. The observations provided by the present study are limited to blood flow redistribution; however, we think that our results might lay the foundations for testing novel hypotheses in future therapeutic studies.
Conclusions
Our results show that the assessment of lung perfusion is feasible and relatively simple at the bedside and that, in a cohort of patients admitted with ARDS, blood distribution in the lungs can be modulated with real-time effects that are visualized in vivo. Our findings lay the foundations for testing new therapies aimed at optimizing regional perfusion in the lungs. Future studies are needed to test the clinical benefits of measuring and improving regional ventilation/perfusion matching at the bedside.
Fig. 1 .
Fig. 1. Electrical impedance tomography (EIT) perfusion maps. Images were obtained with Enlight 1800 (Timpel SA, Sao Paulo, Brazil) using the first-pass kinetics method. Each box refers to a single patient enrolled in the study. The color scale refers to the perfusion change (l/min) at the pixel level 15 min after administration of inhaled nitric oxide. Red = perfusion increase; blue = perfusion decrease.
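The per-pixel quantity rendered in Fig. 1 is simply the difference between the perfusion map after iNO and the baseline map. A minimal sketch of that subtraction is shown below; the 2x2 arrays and the function name are hypothetical illustrations, not study data or the Enlight device's API.

```python
# Sketch (assumed data): per-pixel perfusion change between two EIT
# perfusion maps, as displayed in Fig. 1 (baseline vs. 15 min after iNO).
# Values are hypothetical l/min per pixel, not patient measurements.

def perfusion_change(before, after):
    """Return the per-pixel perfusion change (l/min): after - before."""
    rows, cols = len(before), len(before[0])
    return [[round(after[r][c] - before[r][c], 3) for c in range(cols)]
            for r in range(rows)]

before = [[0.10, 0.12],
          [0.08, 0.05]]
after  = [[0.14, 0.11],
          [0.09, 0.05]]

change = perfusion_change(before, after)
# Positive pixels would be rendered red (increase), negative blue (decrease).
```

In the study's figure, each patient's map is this difference image over the whole EIT chest cross-section rather than a toy 2x2 grid.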
Table 1
Population characteristics.
Data are expressed as number (percentage) or median [interquartile range]. BMI: Body Mass Index; PBW: Predicted Body Weight; SAPS II: Simplified Acute Physiology Score II; SOFA: Sequential Organ Failure Assessment; ARDS: Acute Respiratory Distress Syndrome; PaO2: Partial Pressure of Oxygen in arterial blood; FiO2: Fraction of Inspired Oxygen; PEEP: Positive End-Expiratory Pressure; CRS: Compliance of the Respiratory System; iNO: inhaled Nitric Oxide.
Table 2
Respiratory and hemodynamics variables before and 15 min after initiation of inhaled nitric oxide.
Table 3
Respiratory, hemodynamics, and perfusion variables before and 15 min after initiation of inhaled nitric oxide for each patient.
|
2023-05-12T15:08:34.948Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "9ae1c2244c5abe923d4ff1aa4d89c7124a4e19cd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.niox.2023.05.001",
"oa_status": "HYBRID",
"pdf_src": "ElsevierCorona",
"pdf_hash": "271e8e5527e4d2c3021182eac264c0663f0c4adb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
76002330
|
pes2o/s2orc
|
v3-fos-license
|
EPISTAXIS: A CLINICAL STUDY OF 200 CASES IN A TERTIARY HOSPITAL-OUR EXPERIENCE
AIMS: The aim of this study is to analyse the age and sex distribution, etiology, and management of patients presenting with epistaxis. MATERIALS AND METHODS: All patients who presented with epistaxis to our department of otorhinolaryngology during the period from March 2012 to March 2014 constituted the study. Detailed history, clinical findings, and investigations such as blood examination were recorded for all patients. Management, whether conservative or surgical, was also recorded. RESULTS: A total of 200 patients who presented with epistaxis were considered for this study. The commonest age group involved was 51-60 years, with a male preponderance (72%). Bleeding was more common from both nostrils in our study. The commonest etiology was hypertension (47.1%), followed by trauma (13.4%) and deviated nasal septum (9%). A non-surgical modality of treatment was resorted to in the majority of cases. Anterior nasal packing was done in 48 patients, while both anterior and posterior nasal packing were required in 13.5% of the cases. Surgical intervention was needed in 4% of the cases. CONCLUSION: Epistaxis is a common emergency in otolaryngology. Each case tends to present different challenges, and success depends on timely and effective intervention by the attending otolaryngologist. This study supports the clinical usefulness of conservative management in the treatment of patients with epistaxis.
MATERIALS AND METHODS:
This study was a retrospective review of 200 patients who presented with epistaxis to our department of Otorhinolaryngology during the period between March 2012 and March 2014. All patients presenting to our department with epistaxis were considered for this study. A detailed history, clinical examination, and investigations such as blood examination were done for all patients. History of drug intake, especially anticoagulants, and any associated co-morbidities were noted. Age, sex, seasonal prevalence, examination findings, and etiology of these cases were recorded. The management of these patients, whether surgical or non-surgical, was noted. Cauterization was done with silver nitrate under local anesthesia, while endoscopic electrocautery was done under general anesthesia. Anterior nasal packing was done using a ribbon gauze smeared with bismuth iodoform paraffin paste or a Merocel pack, while posterior nasal packing was done with a Foley's catheter. This study was approved by the ethical committee of this institution.
RESULTS:
A total of 200 patients who presented with epistaxis were considered for this study. The commonest age group involved was 51-60 years (Table-1). Our study showed a male preponderance: 144 (72%) males and 56 (28%) females. The maximum number of cases with epistaxis occurred during the period from January to March (30.5%) (Table-2). Although bleeding was more common from both nostrils in our study, in unilateral cases the left nasal cavity was more commonly involved (Table-3). In our study, the commonest etiology was hypertension (47.1%), followed by trauma (13.4%) and deviated nasal septum (9%) (Table-4). In 16 cases no cause was found, hence they were labeled idiopathic. A non-surgical modality of treatment was resorted to in the majority of cases. Medical management in the form of ice compression, antihypertensive drugs, nasal pinching, and antibiotics was commonly employed (54%). Cauterization was done in 4.5% of cases where the site of bleeding was identified. Anterior nasal packing was done in 48 patients, while both anterior and posterior nasal packing were required in 13.5% of the cases (Table-5).
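As a sanity check on the arithmetic in the results (n = 200), each reported percentage is count / total × 100. The sketch below reproduces the figures where the counts are stated in the text; the count of 27 for combined anterior and posterior packing is an assumption inferred back from the reported 13.5%.

```python
# Hedged sketch: reproduce the percentage arithmetic of this case series.
# Counts of 144, 56, and 48 are stated in the text; 27 is inferred from
# the reported 13.5% of n = 200 and is an assumption.

def percentage(count, total):
    """Percentage of `count` out of `total`, rounded to one decimal."""
    return round(count / total * 100, 1)

TOTAL = 200
print(percentage(144, TOTAL))  # males
print(percentage(56, TOTAL))   # females
print(percentage(48, TOTAL))   # anterior nasal packing alone
print(percentage(27, TOTAL))   # anterior + posterior packing (inferred count)
```

Note that with n = 200 every whole-patient percentage is a multiple of 0.5, which is a quick consistency check for figures reported in this series.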
DISCUSSION:
Epistaxis is a frequently encountered emergency in otorhinolaryngology practice and occurs in approximately 10% of the population at any given time. 1 It may lead to significant mortality and morbidity in pediatric and geriatric patients, so timely intervention becomes imperative in most cases of epistaxis.
Our study confirmed the earlier findings of Marina F et al, 2 that epistaxis is common among the male population. Epistaxis was found to be most common in the 51-60 years age group.
Epistaxis was more common in our study between the months of January and March, which may be attributed to local weather changes.
In our study, hypertension was the commonest etiology of epistaxis, which was comparable to the results obtained by Ogura, 3 Isezou et al 4 and Jackson et al. 5 Hypertension causes arterial muscle degeneration, which prevents the vessel from contracting and results in persistent bleeding in these cases. Some authors have suggested that hypertension in patients with epistaxis could be related to anxiety. 6 Most of our patients with hypertension were also treated with anxiolytics.
Some of the patients in our study had multiple etiological factors for epistaxis, demanding multi-departmental involvement. Drugs like acetylsalicylic acid are nowadays prescribed as a lifestyle drug for the elderly. Patients on this medication are more prone to develop severe epistaxis, as stated by Micheal et al. 7 In our study, 16% of the cases were on anticoagulant drugs.
Studies have shown that a routine coagulation work-up is not necessary in all patients with epistaxis. 8 In our institution, a coagulation profile was done only in patients with suspected liver, kidney, or bleeding disease and in those who were on anticoagulant drugs.
In our study, the majority (54%) of cases were managed medically. The site of bleeding was identified in 9 cases, which underwent cauterization. Anterior nasal packing acts by applying pressure over the entire nasal mucosa, and the resulting edema and inflammatory process prevent further bleeding. 2 Discomfort due to bilateral nasal obstruction and pain on removal are considered drawbacks of anterior nasal packing. The advent of absorbable and non-absorbable packs, such as gelatin foams and inflatable balloons respectively, has given hope of tiding over these problems, but their cost is a limiting factor in developing countries. Complications of nasal packing reported in the literature include cardiac arrhythmias, gram-negative sepsis, Eustachian tube dysfunction, and sinusitis. In our study, once nasal packing was done, all patients were put on antibiotic cover and the nasal pack was kept in place for a maximum of five days. In our study, 24% of cases underwent anterior nasal packing. Posterior nasal packing can cause significant hypoxia, especially in patients with chronic systemic disease. 1 In our study, 13.5% of patients had posterior epistaxis requiring intervention. When the site of bleeding is a septal spur or an area behind a septal deflection, nasal packing becomes ineffective and septal surgery should be considered; 1.5% of the cases in this study underwent septoplasty.
CONCLUSION:
Epistaxis is a commonly encountered emergency in otorhinolaryngology. Immediate and effective assessment of the cause by the otolaryngologist, with timely management of these patients, is the need of the hour.
|
2019-03-13T13:29:03.095Z
|
2014-04-22T00:00:00.000
|
{
"year": 2014,
"sha1": "14c9738f16cd00d99140694c4aabd4a8fd0b48a5",
"oa_license": null,
"oa_url": "https://doi.org/10.14260/jemds/2014/2456",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8b25df4969c44880f7fc2dd20229b4411d3467fd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|